Welcome to Less Wrong! (2012)
post by orthonormal · 2011-12-26T22:57:21.157Z · 1440 comments
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- Your Intuitions are Not Magic
- The Apologist and the Revolutionary
- How to Convince Me That 2 + 2 = 3
- Lawful Uncertainty
- The Planning Fallacy
- Scope Insensitivity
- The Allais Paradox (with two followups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
- That Alien Message
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site.
(Note from orthonormal: MBlume and other contributors wrote the original version of this welcome message, and I've stolen heavily from it.)
1440 comments
Comments sorted by top scores.
comment by Lara · 2011-12-27T00:20:42.996Z · LW(p) · GW(p)
Hello everyone!
Thank you for this site and for sharing your thoughts, for genuinely trying to find out what is true. What is less wrong. This has brightened my view of humanity. :)
My name is Lara, I’m from Eastern Europe, 18 years old, currently studying physics, reading a lot and painting in my free time. For about a year and a half now I’ve been an atheist; before that, a devout and sincere Christian, the religious nerd of the church. A lot of things in the doctrine bothered me as completely illogical, unfair and just silly, and somehow I tried to reason it all out; I truly believed that the real Truth would be with God and that he would help me understand it better. As it turned out, truth seeking and religiosity were incompatible.
Now I’m fairly ‘recovered’: getting used to the new way of thinking about the world, but I still care about what is really true and important, worth devoting my life to (fundamentalist upbringing :)). As I still live with my family, it is hard to pretend all the time, knowing they will have no contact with me whatsoever when I come out; it is really good to find places like this, where people are willing to dig as deep as possible, no matter what, to understand better.
So thanks, and sorry for my English. I hope someday I’ll be able to add something useful here and learn much more.
↑ comment by orthonormal · 2011-12-27T01:14:08.346Z · LW(p) · GW(p)
Welcome! Your English is excellent, don't worry on that count.
...also, that's a really tough predicament (hiding your atheism from your fundamentalist family), and I don't have anything wise to say about it, except that it isn't the end of the world when they do find out, and that often people will break their religious commitments rather than really abandon their children (so long as they can think of a religiously acceptable excuse to do so). But I'm not really qualified to give that advice. Hang in there!
↑ comment by Dustin · 2012-01-05T03:42:48.127Z · LW(p) · GW(p)
I sympathize with you as I'm an atheist with a fundamentalist family who would cut me out of their lives if they found out.
I also envy you, as you had your enlightenment happen at such an early age. I didn't have mine until I was pushing middle age and had created a family of my own... all of whom were also fundamentalist. I still live "in the closet", so to speak...
↑ comment by MichaelVassar · 2011-12-27T09:57:10.512Z · LW(p) · GW(p)
I'm sorry to hear that your family try to control you like this. Do you expect to physically live near them for long? If not, you may not need to tell them. Surely they have behaviors that they don't tell you about too, and they don't honestly expect you to actually act as if you believed (just as they probably don't act that way themselves, and expected you to grow out of the confused phase in your life when you were doing all that weird stuff you did as a result of being a sincere and devout Christian who expects things to be logical, fair, and non-silly once understood).
↑ comment by Lara · 2011-12-27T17:14:14.499Z · LW(p) · GW(p)
Thank you all for support, it is incredibly important.
Unfortunately it is a church norm to cut off everyone who leaves, and the doctrine is such that there is no way to be ‘in between’. The community is quite closed and one’s whole life is determined, from the way we dress (girls especially) to the way we make careers (or stay at home and raise children). So in the beginning I decided not to tell anyone at all, knowing how painful it would be for everyone, but after some time I realised that I could not live like that my whole life; though, egoistically, I will only do it after I earn enough money to leave.
↑ comment by MichaelVassar · 2011-12-29T10:28:48.942Z · LW(p) · GW(p)
There are really a lot of possibilities for finding work if you need it, at least if you are a US citizen. I can help you with that if you want. If nothing else, http://lesswrong.com/lw/43m/optimal_employment/ is available. I bet that a few LWers could get together to do this (possibly after absorbing some of our West Coast or NYC contingent culture first) and build an amazing community there. Email me.
↑ comment by thomblake · 2011-12-27T17:34:05.245Z · LW(p) · GW(p)
To expand on orthonormal's point, note the impact bias. If you do end up having to be truthful with them, whatever consequences you're imagining now are probably far worse than what you will actually go through. People tend to carry on just fine.
And remember that the virtue of honesty does not require telling all truths, but rather not communicating falsely. If telling your parents you are an atheist will mean to them that you are an amoral person, maybe you should not say so unless that is also true.
↑ comment by [deleted] · 2012-02-08T14:31:32.013Z · LW(p) · GW(p)
Hey Lara,
Being a Slovenian student of physics from a very devout Catholic family (which I actually still occasionally accompany to Sunday mass), I can definitely relate to your story. I coped by sharing my doubts with less religious family members, eventually telling my sister that I considered myself an atheist. I mostly let my extended family think what they will, but I don't really work to hide my non-belief in any serious way any more. I don't, however, try to argue with them about it. Partly that's because de-converting my family members in a mostly secular country didn't really feel like a top priority, but also because I saw it would be very hard to get them interested in rationality, and without that it didn't really seem worth it, since I've come to realize that non-religious delusion is as widespread as religious delusion. I was for a time somewhat conflicted about this, but my general attitude since then is that I love my family because they are my family, not because I think they are good at rational debate or hold true beliefs. I think most parents feel the same way about their children.
I'd heartily recommend reading the sequences, since atheism is just the beginning. :)
Best wishes, Konkvistador
comment by obfuscate · 2011-12-27T05:05:09.243Z · LW(p) · GW(p)
Hi; I'm a lurker of about one year, and recently decided to stop lurking and create an account.
I'm an undergraduate in the Portland area of Oregon, studying mathematics and computer science at Pacific University. I've been interested in rationality for a very long time, but Less Wrong has really provided the formalism necessary to defend certain tactics and strategies of thought over others, which has been very... helpful. :)
Speaking of Portland, it seems that there are many Portland Less-Wrongians and yet there is no meetup. I would like to start a meetup, so I need a bit of Karma to get one started.
comment by gyokuro · 2011-12-26T19:31:18.697Z · LW(p) · GW(p)
Hi, I'm 15, so sadly I cannot say much about my education yet, but at least I've read a fair deal. I find the ideas on this site somewhat unappreciated among my age group, but fascinating. I've lurked here for close to a year, but I'm irrationally shy of speaking over the internet. I hope to contribute when I find something I think is interesting, regardless of my aversion to commenting. Thank you for the welcome!
comment by _ozymandias · 2011-12-27T17:32:24.444Z · LW(p) · GW(p)
Hi everyone! I'm Ozy.
I'm twenty years old, queer, poly, crazy, white, Floridian, an atheist, a utilitarian, and a giant geek. I'm double-majoring in sociology and psychology; my other interests range from classical languages (although I am far from fluent) to guitar (although I suck at it) to Neil Gaiman (I... can't think of a self-deprecating thing to say about my interest in Neil Gaiman). I use zie/zir pronouns, because I identify outside the gender binary; I realize they're clumsy, but English's lack of a good gender-neutral pronoun is not my fault. :)
One of my big interests is the intersection between rationality and social justice. I do think that a lot of the -isms (racism, sexism, ableism, etc.) are rooted in cognitive biases, and that we're not going to be able to eliminate them unless we understand what quirks in the human mind cause them. I blog about masculism (it is like feminism! Except for dudes!) at No Seriously What About Teh Menz; right now it's kind of full of people talking about Nice-Guy-ism, but normally we have a much more diverse front page. I believe that several of the people here read us (hi Nancy! hi Doug! hi Hugh, I like you, when you say I'm wrong you use citations!).
I've lurked here for more than a year; I got here from Harry Potter and the Methods of Rationality, just like everyone else. I've made my way through a lot of the Sequences, but need to set aside some time to read through all of them. I don't know much about philosophy, math, science, or computers, so I imagine I will be lurking here a lot. :)
↑ comment by MBlume · 2012-01-02T21:25:26.675Z · LW(p) · GW(p)
Hi Ozy, it's really good to see you here, I enjoy the blog a lot. I remember reading one of your first social justice 101 posts, finding it peppered with LW links, and thinking "holy crap, somebody's using LW as a resource to get important background information out of the way while talking about something-really-important-that-isn't-itself-rationality -- this is awesome and totally what LW should be for", so that made me happy =)
↑ comment by _ozymandias · 2012-01-02T22:05:02.850Z · LW(p) · GW(p)
Thanks! LW actually helped me crystallize that a lot of the stuff social-justice-types talk about is not some special case of human evil, but the natural consequence of various cognitive biases (that, in this case, serves to disadvantage certain types of people).
↑ comment by MileyCyrus · 2011-12-29T03:50:29.115Z · LW(p) · GW(p)
Her blog is good. Instead of blindly cheering for a side in the feminism vs men's-rights football game, Ozymandias actually tries to understand the problem and recommend workable solutions.
↑ comment by _ozymandias · 2011-12-29T04:20:52.355Z · LW(p) · GW(p)
Thank you very much, Miley! I tend to view feminism and men's rights as being inherently complementary: in general, if we make women more free of oppressive gender roles, we will tend to make men more free of oppressive gender roles, and vice versa. However, in the great football game of feminists and men's rights advocates, I'm pretty much on Team Feminism, which is why I get so upset when it's clearly doing things wrong.
Also, my pronoun is zie, please. :)
↑ comment by MileyCyrus · 2011-12-29T04:34:34.965Z · LW(p) · GW(p)
However, in the great football game of feminists and men's rights advocates, I'm pretty much on Team Feminism, which is why I get so upset when it's clearly doing things wrong.
What I meant is that you actually demand results from your team, instead of giving them a free pass just because they have a certain label.
↑ comment by _ozymandias · 2011-12-29T04:59:01.615Z · LW(p) · GW(p)
Ah, thank you. I misunderstood. :) I've had a few problems with people being confused about why my blog uses so much feminist dogma if it's a men's rights blog, so I'm hyper-sensitive about being mistaken for a non-feminist.
↑ comment by NancyLebovitz · 2011-12-29T05:26:55.351Z · LW(p) · GW(p)
Hi, Ozy!
I've enjoyed your writing at No Seriously What About Teh Menz; so it's good to see you here.
↑ comment by HughRistik · 2011-12-28T10:08:16.750Z · LW(p) · GW(p)
Hi Ozy!
↑ comment by windmil · 2011-12-28T03:59:55.622Z · LW(p) · GW(p)
You're the only LWer I've noticed who's from Florida! (Of course, people don't too frequently pepper their posts with particulars of their placement.)
↑ comment by _ozymandias · 2011-12-28T05:21:36.926Z · LW(p) · GW(p)
Where are you? I'm in Fort Lauderdale and the Tampa area. If we're near each other maybe we could arrange one of those meetup thingies...
↑ comment by daenerys · 2011-12-31T21:16:30.591Z · LW(p) · GW(p)
Hi ozy!
I am really happy to see you on here! I enjoy your blog.
This map shows that as of last week-ish there were at least four Floridians on LW. Unfortunately, their identity is unknown, and you guys seem to be spread out. But if you post a meetup, you can see who responds. Good luck!
↑ comment by MixedNuts · 2012-01-02T17:41:01.326Z · LW(p) · GW(p)
turns into a raving fanboy, squees, explodes
↑ comment by _ozymandias · 2012-01-02T21:07:00.716Z · LW(p) · GW(p)
Dammit, could someone clean the fanboy off the ceiling? The goop is getting in my hair. :)
comment by Kallio · 2011-12-27T01:03:34.451Z · LW(p) · GW(p)
Hi; I've been reading LessWrong for more than a year and a half, now, but I never quite got around to making an account until today.
So, introduction: I'm eighteen years old, female, transgender. I live in California, USA. I don't have a lot of formal education; I chose to be homeschooled as a little kid because my parents were awesome and school wasn't, and due to disability I've not yet entered college.
The road to rationalism was fairly smooth for me. I'm a weirdo in enough ways that I learned early on not to believe things just because everyone else believed them. It took a little bit longer for me to learn not to believe things just because I had always believed them.
I guess my major "Aha!" moment came when I was fourteen, after I finally admitted to myself that I was transgender. I had lied to myself, not to mention everyone else, for almost a decade and a half. I had shied away from the truth every time I'd had the opportunity to see it. And while I'd had pretty good reasons for doing so (Warning: Big-ass PDF), the truth felt better. Not only that, but knowing the truth was better, in measurable ways; it allowed me to begin to move my life in a direction I actually liked.
Avoiding the truth had hurt me enough that I began systematically examining every belief I could think of, and some I would rather not have thought about at all. And thus was a rationalist born.
I found LessWrong in spring of 2010, through Harry Potter and the Methods of Rationality. I haven't had the time to read all, or even most, of the sequences yet, but I've made a good start on them: so far I've read all of Map and Territory, Mysterious Answers to Mysterious Questions, Reductionism, and A Human's Guide to Words. I've also read large parts of How To Actually Change Your Mind, as well as bits and pieces of other sequences, and various independent articles. They've helped a lot, both with teaching me things about rationalism which I didn't already know, and making me more sure of the things I'd worked out for myself.
Since I'm interested in not only rationalism, but also in probability theory, transhumanism, and both human and machine intelligence, this has been pretty much my favorite site to read ever since I found it. Thanks for being awesome.
↑ comment by [deleted] · 2012-02-08T14:20:25.042Z · LW(p) · GW(p)
Welcome to the site Kallio!
The road to rationalism was fairly smooth for me. I'm a weirdo in enough ways that I learned early on not to believe things just because everyone else believed them. It took a little bit longer for me to learn not to believe things just because I had always believed them.
I don't think you are alone in your experience of this. People here are pretty contrarian, metacontrarian even. I hope that in the month since you've posted this you've continued to gain utility from the site. :)
I haven't had the time to read all, or even most, of the sequences yet, but I've made a good start on them: so far I've read all of Map and Territory, Mysterious Answers to Mysterious Questions, Reductionism, and A Human's Guide to Words. I've also read large parts of How To Actually Change Your Mind, as well as bits and pieces of other sequences, and various independent articles. They've helped a lot, both with teaching me things about rationalism which I didn't already know, and making me more sure of the things I'd worked out for myself.
While I read most of them long ago, there are still sequences that I haven't read in a systematic fashion, and I don't think I'm that exceptional among long-time readers in that regard, so once you feel you've got a good grasp of the issues, don't be afraid to post. Also, if you have a question about the material, need a beta reader for a contribution, or would just like to discuss stuff with someone, please feel free to PM me.
All the best, Konkvistador
comment by lisa · 2012-02-07T22:17:24.409Z · LW(p) · GW(p)
Hello!
I'm a 20 year old student at Georgia Tech, double majoring in Industrial Engineering and Psychology, and am spending the current semester studying abroad at the University of Leeds in the UK.
I read HPMOR this weekend on a bus trip to London and as soon as I returned I found this site and have been enthralled by the Sequences, which I am slowly working my way through.
All of my life I have loved to read and learn new things and think through them, but last year I came to the realization that my curiosity had started to die in my late high school years. I found myself caring about getting a good grade and then abruptly forgetting the information. Much of what I was "learning" I never truly understood, and yet I was still getting praise from teachers for my good grades, so I saw no reason to invest more effort. Early last year, I realized this was happening and attempted to rededicate myself to finding things that again made me passionate about learning. This was a major factor in adding Psychology as a second major.
This semester of new classes in a new educational system, combined with the past few days of reading the Sequences, has sparked my interest in many subjects. I'm itching to go to the school library and start picking up anything that catches my interest now that the thirst to learn has been reawakened. I'm especially interested in Evolutionary Psychology, Social Psychology, and Statistics. I have absolutely no idea what I would like to do as a future career, but I have this recurring thought that I would love to do some sort of work that involves restructuring the education system. (Every person at my university that I have mentioned that thought to gives me a strange look and says either "Education? You???" or "But then you wouldn't make any money!")
Anyways, I am extremely glad to have found this site and community.
↑ comment by [deleted] · 2012-02-08T14:02:11.433Z · LW(p) · GW(p)
Welcome to the site!
Early last year, I realized that this was happening and attempt to rededicate myself to finding things that again made me passionate about learning. This was a major contribution to adding Psychology as a second major.
Since I suspect you may find it interesting: have you read anything on spaced repetition so far? Also, since I'm linking there, I just want to warmly recommend gwern's site in general; he has a great knack for finding relevant information and presenting it well (good enough to get him a job at the Singularity Institute!).
I found myself caring about getting a good grade and then abruptly forgetting the information.
I've come to know and grown to dislike this feeling in the past few years of university. It is why I spend more effort than strictly needed to make the knowledge I learn truly a part of me. Of course, sometimes you just need to jump through hoops...
This semester of new classes in a new educational system combined with the past few days of reading the Sequences have sparked my interest in many subjects.
Consider asking around for a chavruta. The sequences are loooong (which is good since they are mostly well written) and talking to people about what you read is always fun. Taking up daenerys on her offer also sounds like a good idea indeed.
Cheers, Konkvistador
↑ comment by lisa · 2012-02-08T22:38:19.476Z · LW(p) · GW(p)
Hi, thank you for both the welcome and the wealth of helpful knowledge!
I did find the info on spaced repetition, as well as everything else you linked me to, very interesting! I think my problem now is that my interest in so many different things has been sparked, and I'm having a hard time prioritizing what to read and research first!
↑ comment by juliawise · 2012-02-08T00:05:41.163Z · LW(p) · GW(p)
Hi! I also loved finding a place where people were really excited about ideas.
You might be interested in 80,000 Hours, a site on choosing careers that improve the world (and they're very much in favor of making money as a way to do this, though also in favor of education as a career!)
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-02-08T14:31:53.775Z · LW(p) · GW(p)
I'm itching to go to the school library and start picking up anything that catches my interest now that the the thirst to learn has been reawakened.
That's a dangerous idea! Books in the library that are more interesting than your textbooks tend to result in "waking up" four hours later to realize you've read an entire book on [interesting subject x] and are still no closer to researching [boring essay topic y].
Good luck though! Your classes do sound pretty interesting. Hopefully you can stay engaged.
I would love to do some sort of work which involved restructuring the education system.
I think that's a brilliant idea, and it really needs to be done. The "but then you wouldn't make any money!" people are pretty annoying, but you can ignore them.
↑ comment by JohnEPaton · 2012-07-30T01:49:55.754Z · LW(p) · GW(p)
It's cool that you're studying a combination of Psychology and Engineering. I'm doing something similar, and it seems to be very rare to find someone who works in both of those fields. I'm sure that in the UK people would be even less understanding of this; it seems like over there you just choose one subject and that's all you do for the next three years. Keep looking at those library books. I think the most important thing as an undergrad is to follow your interests, even if this means dialling back the effort you put into class work.
↑ comment by daenerys · 2012-02-08T01:07:39.175Z · LW(p) · GW(p)
Hi Lisa! Welcome to Less Wrong!
You're studying some really interesting stuff. What's your semester abroad been like?
You seem really awesome, so I hope you continue to post on here. If you need any one-on-one question answering or discussions, feel free to shoot me a PM :)
↑ comment by lisa · 2012-02-08T12:53:49.761Z · LW(p) · GW(p)
Hi! Thanks for the welcome!
Studying abroad has been amazing - it's really making me think about all sorts of things I've never thought of and I'm loving noticing the subtle cultural differences!
If I have any questions, I'll be sure to PM you - thank you so much for the offer! :)
comment by Brigid · 2012-05-01T23:01:39.176Z · LW(p) · GW(p)
Hi, I’m Brigid. I’ve been reading through the Sequences for a few weeks now, and am just about to start the Quantum section (about which I am very excited). I found out about this site from an email the SIAI sent out. I’m a Signals Intelligence officer in the Marine Corps and am slated to get out of the military in a few months. I’m not too sure what I am going to do yet, though; as gung-ho as I originally was about intel, I’m not sure I want to stay in that specific field. I was a physics and political science major in college, with a minor in women’s studies. I’ve been interested in rationality for a few years now and have thoroughly enjoyed everything I’ve read so far here (including HPMOR). Also, if anyone is interested in starting a Meetup group in Hawaii (Oahu), let me know!
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-02T06:07:45.993Z · LW(p) · GW(p)
Hi, Brigid! Pleased to have you here! Experience has shown that by far the best way to find out if anyone's interested in starting an LW group is to pick a meeting place, announce a meetup time, and see if anyone shows up - worst-case scenario, you're reading by yourself in a coffeeshop for an hour, and this is not actually all that bad.
↑ comment by Shmi (shminux) · 2012-05-01T23:14:57.388Z · LW(p) · GW(p)
Welcome!
am just about to start the Quantum Section (about which I am very excited).
A warning: while the QM sequence in general is very readable and quite useful for the uninitiated, the many-worlds advocacy is best taken with a mountain of salt. Consider skipping the sequence on the first pass, and returning to it later, after you've covered everything else. It is fairly stand-alone and is not relevant to rationality in general.
↑ comment by fubarobfusco · 2012-05-02T01:15:49.996Z · LW(p) · GW(p)
Well, there are a couple of things going on in the QM sequence. One of them is MWI. The other is the general debunking of the commonly-held idea that QM is soooooooo weeeeeeeeird.
↑ comment by Shmi (shminux) · 2012-05-02T02:00:39.929Z · LW(p) · GW(p)
Yes, that's the good part.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-02T06:11:27.364Z · LW(p) · GW(p)
A meta-warning: Take shminux's "mountain of salt" advice with an equally large mountain of salt plus one more grain - as will become starkly apparent, there's a reason why the current QM section is written the way it is, it's not meant to be skipped, and it's highly relevant to rationality in general.
↑ comment by Richard_Kennaway · 2012-05-02T06:38:24.893Z · LW(p) · GW(p)
How would the Sequences be different, other than in the QM parts, if we lived in a classical universe, or if we had not yet discovered QM?
↑ comment by [deleted] · 2012-05-02T07:22:38.970Z · LW(p) · GW(p)
Wild Mass Guessing: in a classical universe, particles are definable individuals. This breaks a whole mess of things; a perfect clone of you is no longer you, etc.
↑ comment by JGWeissman · 2012-05-02T17:21:26.396Z · LW(p) · GW(p)
a perfect clone of you is no longer you
The lack of identity of individual particles is a knock-down argument against our identities being based on the identities of individual particles. However, if individual particles did have identity, that would not require that the identity of individual particles contribute to our identities; it would just remove a knock-down argument against that idea.
↑ comment by DanArmak · 2012-05-02T17:45:30.638Z · LW(p) · GW(p)
(Almost) all the particles in our bodies are replaced anyway, on the scale of a few years. Replacement here means a period of time when you're without the molecule, and then another comes in to take its place; so it's real whether or not particles have identities. This applies to quite large things like molecules. Once we know that, personal identity rooted in specific particles is shaky anyway.
↑ comment by thomblake · 2012-05-02T17:29:19.180Z · LW(p) · GW(p)
An important point.
Heraclitus probably didn't believe in lack of identity of individual particles, but he did believe we are patterns of information, not particular stuff.
EDIT: On second thought, he'd probably work out lack of identity of individual particles if pressed, following from that.
↑ comment by DanArmak · 2012-05-02T17:50:04.619Z · LW(p) · GW(p)
a perfect clone of you is no longer you
Not necessarily. "What/who is you" is a matter of definition to a large extent. If particles have identities (but are still identical to all possible measurements), that doesn't stop me from defining my personhood as rooted in the pattern, and identifying with other sufficiently similar instances of the pattern.
↑ comment by Richard_Kennaway · 2012-05-02T07:54:31.580Z · LW(p) · GW(p)
That minds are physical processes seems discoverable without knowing why matter is made of atoms and what atoms are made of. That elimination of mentalism seems sufficient to justify the ideas of uploading, destructive cryonics, artificial people, and so on.
But I'm actually more interested in what implications there are, if any, for practical rationality here and now. (I will be unmoved by the answer "But FAI is the most practical thing to work on, we'll all die if it's done wrong!!!")
↑ comment by thomblake · 2012-05-02T15:41:47.774Z · LW(p) · GW(p)
it's not meant to be skipped, and it's highly relevant to rationality in general.
A few people have asserted this, but how is it actually relevant? Is it just a case study, or is there something else there? As RichardKennaway asks, how does QM make a difference to rationality itself?
↑ comment by ArisKatsaris · 2012-05-02T16:18:14.121Z · LW(p) · GW(p)
Speaking from a non-physicist perspective, much of what the QM sequence taught me is to see the world from the bottom up; QM is regular, but it adds up to normality, and it's normality that's weird. Delving down into QM is going up the rabbit hole, away from weirdness and normality, and into mathematical regularity.
By analogy, normal people are similarly weird because they're the normality that was produced as the sum of a million years of evolution. Which in turn helps you realize that a random mind plucked out of mindspace is unlikely to have the characteristics we attribute to humanlike normality. Because normality is weird.
Once you go from bottom to top, you also help dissolve some questions, like the problems of identity and free will (though I had personally dissolved the supposed contradiction between free will and determinism many years before I encountered LessWrong). I still think that many of the knots people tie themselves into over issues like Quantum Suicide or Doomsday Dilemmas are caused by insufficient application of the bottom-up principle, or worse yet a half-hearted application thereof.
Replies from: DanArmak, thomblake, shminux
↑ comment by DanArmak · 2012-05-02T16:38:44.512Z · LW(p) · GW(p)
Because normality is weird.
It's bad enough that we've got people talking about things not being weird, as if weirdness is an objective property rather than something in the mind of the observer. Your words which I quoted are even worse; they're a self-contradiction.
If you're not willing to let the word "weird" have its dictionary definition, please, please just taboo it and let the subject die, rather than trying to redefine it as the opposite of the original meaning.
Replies from: chaosmosis, ArisKatsaris
↑ comment by chaosmosis · 2012-05-02T16:42:37.321Z · LW(p) · GW(p)
The commenter was saying "our intuitive understanding of reality" is weird, I think. That's why the commenter was able to noncontradictorily say that Quantum Mechanics fixed some problems and made things less weird.
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-02T17:09:03.857Z · LW(p) · GW(p)
"our intuitive understanding of reality" is weird
Let's unpack what that means, because I feel we might be disagreeing over the meaning of the word. I'll use Wiktionary but if you don't like the definitions given feel free to substitute.
Weird (when used as adjective):
- Strange.
- Deviating from the normal, bizarre. (And some unrelated meanings.)
"Strange" in turn unpacks to: 1a. Not normal; odd, unusual, surprising, out of the ordinary. 1b. Unfamiliar, not yet part of one's experience. (Ex: a strange town.)
For completeness I looked up normal. I believe the only relevant meaning is "usual; ordinary".
Summing up, I define "weird" as meaning "not normal; irregular, exceptional; unexpected". And a secondary meaning of "strange, unfamiliar".
In light of this, what does it mean to say that:
"our intuitive understanding of reality" is weird
Is our intuitive understanding not "normal", exceptional, or unexpected? It's certainly normal among humans; and we have no concrete examples of a larger reference class of conscious beings. It's been argued that other life-forms would form different intuitions, but at least all Earth life except maybe microbes operates on classical-mechanics intuitions. Arguing that this isn't "normal" requires more than just saying something different is possible in principle.
As for the secondary meaning, quantum mechanics (and relativity for that matter) certainly describes behavior which is strange and unfamiliar to our intuitions. But then the correct use of the word "weird" is precisely to say that QM is weird. Not that we are.
Replies from: chaosmosis
↑ comment by chaosmosis · 2012-05-02T18:56:43.102Z · LW(p) · GW(p)
I don't think those definitions really capture some of the relevant connotations that weirdness has related to accuracy and consistency. I personally didn't even realize the exact problem you had with the commenter because the way zhe used "weird" made perfect sense to me.
I also don't like prescriptivist theories of grammar very much, and I think the original comment was clearly understandable and, perhaps less clearly, intended to subvert the common belief that "QM is weird" - a belief that has been criticized in multiple places on this website. I appreciated the creative attempt to get rid of the flawed belief by reframing "normalcy".
My initial overview of these comments made me believe there was a lack of communication; now I see the initial hints I missed that show you're upset because words like "weird" are used informally. My bad for the initial comment, then.
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-02T19:04:37.646Z · LW(p) · GW(p)
I also don't like prescriptivist theories of grammar very much
Me neither. I'm bringing up dictionary definitions as descriptions, not as prescriptions. I happen to agree with the dictionary (and it's not my native language anyway), and since you seem to use a different meaning/definition, please tell me what it is!
and think that the original comment was clearly understandable
I, at least, apparently still don't understand it.
Or rather, I understand the intent (because it's been explained) but can't understand how that intent can be read from the original words.
↑ comment by ArisKatsaris · 2012-05-02T16:52:16.058Z · LW(p) · GW(p)
My whole point was about being helped to gain an additional perspective; seeing something from the bottom up.
When you say that weirdness is "in the mind of the observer", you're quite obviously correct in the most plain sense, but you seem to be assuming that a mind can have only one point of view, and cannot intentionally attempt, or even manage, to shift between different points of view.
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-02T17:34:17.999Z · LW(p) · GW(p)
I understand your point about the POVs. In light of that, here's what bothers me about saying "normality is weird".
If we look at a quantum-mechanical system from the classical POV, we notice that no classical laws (even classical-style laws we don't know yet) can explain its behavior. So it looks weird to us. That's fine.
If we look at a classical system from the quantum POV, we can't calculate its behavior on the quantum level; it's too complex. But if we could - and in principle the laws of physics tell us how to do it - then we would expect to predict its behavior correctly. So why should it seem weird?
The two situations aren't symmetrical. We used to believe in classical mechanics, and then we discovered quantum phenomena, and we saw that they were weird. This was because the laws of physics we used were wrong! Now that we use the right ones (hopefully), nothing should look weird anymore, including "classical" systems.
It's true that QM is at best incomplete, and we can't yet use it correctly in some relativistic situations. So those situations still look weird from a QM POV. But this doesn't apply to our normal lives.
↑ comment by Shmi (shminux) · 2012-05-02T16:52:07.768Z · LW(p) · GW(p)
I'm curious how you used this approach to resolve the Quantum Suicide argument.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2012-05-02T17:06:02.414Z · LW(p) · GW(p)
I should perhaps make a fuller post about this at some point, but in brief: "individuals" are in reality quite divisible (pun intended). Quantum Suicide makes sense to me only if you have a top-down perspective on identity that either persists as a whole or is destroyed as a whole, with nothing in between.
If you instead view the self as some bizarre, arbitrary conglomeration of qualia-producing processes (including whatever processes produce self-awareness, however they do it), then the very concept of destruction or persistence must be applied to individual thought-processes, and is meaningless when applied to whole people.
Replies from: DanArmak
↑ comment by Shmi (shminux) · 2012-05-03T16:27:52.599Z · LW(p) · GW(p)
I have dutifully gone through the entire sequence again, enjoying some cute stories along the way, and my best guess of what EY means is that it is relevant not in any direct sense ("QM is what rationality is built on"), but more as a teaching tool: it brings "traditional Science" into conflict with "Bayesian rationality". (Bayesianism wins, of course!) The MWI also lends some support to EY's preferred model, Barbour's timeless physics, and thus inspires TDT.
Replies from: thomblake
↑ comment by thomblake · 2012-05-03T16:41:33.464Z · LW(p) · GW(p)
That still doesn't seem like enough to justify the reversal from "not relevant" to "highly relevant".
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-05-03T18:23:49.982Z · LW(p) · GW(p)
What reversal? I still think that it detracts from the overall presentation of "modern rationality" by getting people sidetracked into learning open problems in physics at a pop-sci level. Whatever points EY was trying to make there can surely be made better without it.
Replies from: thomblake
↑ comment by thomblake · 2012-05-03T19:34:20.619Z · LW(p) · GW(p)
It looks like Eliezer answers my question in this post.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-05-03T19:59:39.259Z · LW(p) · GW(p)
Have you noticed any confusion?
It just seems almost too good to be true that I now get what plenty of genius quantum physicists still can't.
Hmm, "too good to be true"... Does this suggest anything?
In physics, you can get absolutely clear-cut issues. Not in the sense that the issues are trivial to explain. But if you try to apply Bayes to healthcare, or economics, you may not be able to formally lay out what is the simplest hypothesis, or what the evidence supports.
So why bother with an example where Bayes works the worst and is most confusing? [EDIT: What I mean is that the scientific principle works so much better in physics compared to the other fields mentioned; Bayes clearly is not essential there.]
Bayes-Goggles on: The simplest quantum equations that cover all known evidence don't have a special exception for human-sized masses. There isn't even any reason to ask that particular question. Next!
This is an actual testable prediction. Suppose such an exception is found experimentally (for example, self-decoherence due to gravitational time dilation, as proposed by Penrose, limiting the quantum effects to a few micrograms or so). Would you expect EY to retract his Bayesian-simplest model in this case, or "adjust" it to match the new data? Honestly, what do you think is likely to happen?
Okay, Bayes-Goggles back on. Are you really going to believe that large parts of the wavefunction disappear when you can no longer see them? As a result of the only non-linear non-unitary non-differentiable non-CPT-symmetric acausal faster-than-light informally-specified phenomenon in all of physics? Just because, by sheer historical contingency, the stupid version of the theory was proposed first?
Have you noticed that this is a straw-Copenhagen, and not the real thing?
Replies from: dlthomas
↑ comment by dlthomas · 2012-05-03T20:42:37.637Z · LW(p) · GW(p)
This is an actual testable prediction. Suppose such an exception is found experimentally (for example, self-decoherence due to gravitational time dilation, as proposed by Penrose, limiting the quantum effects to a few micrograms or so). Would you expect EY to retract his Bayesian-simplest model in this case, or "adjust" it to match the new data? Honestly, what do you think is likely to happen?
Honestly, when the first experiment shows that we don't see quantum effects at some larger scale where it is otherwise believed that they should show up, I expect EY to weaken, but not reverse, his view that MWI is probably correct - expecting that there is an error in the experiment. When it has been repeated, and variations have shown similar results, I expect him to drop MWI, because it no longer explains the data. I don't have a specific prediction for just how many experiments it would take; this probably depends on several factors, including the nature and details of the experiments themselves.
This is from my personal model of EY, who seems relatively willing to say "Oops!" provided he has some convincing evidence he can point to; this model is derived solely from what I've read here, and so I don't ascribe it hugely high confidence, but that's my best guess.
comment by Benedict · 2012-07-22T19:52:49.268Z · LW(p) · GW(p)
Hey, I'm -name withheld-, going by Benedict, 18 years old in North Carolina. I was introduced to Less Wrong through HPMoR (which is fantastic) and have recently been reading through the Sequences (still wading through the hard science of the Quantum Physics sequence).
I'm here because I have a real problem- dealing with the consequences of coming out as atheist to a Christian family. For about a year leading up to recent events, I had been trying to reconcile Christian belief with the principles of rationalism, with little success. At one point I settled into an unstable equilibrium of "believing in believing in belief" and "betting" on the truth of religious doctrine to cover the perceived small-but-noteworthy probability of its veracity and the proposed consequences thereof. I'd kept this all secret from my family, putting on a long and convincing act.
This recently fell apart in my mind, and I confronted my dad with a shambling confession and expression of confusion and outrage against Christianity. I'm... kinda really friggin' bad at communicating clearly through spoken dialogue, and although I managed to comport myself well enough in the conversation, my dad is unconvinced that the source of my frustrations is a conflicting belief system so much as a struggle with juvenile doubts. This is almost certainly why I haven't yet faced social repercussions, as my dad is convinced he can "fix" my thinking. He's a paid pastor and theologian, and has connections to all the really big names in contemporary theology- having an apostate son would damage both his pride and social status, and as such he's powerfully motivated to attempt to "correct" me.
After I told him about this, he handed me a book (The Reason for God by Timothy Keller) and signed himself up as a counselor for something called The Clash, described as a Christian "worldview conference". Next week, from July 30 to August 3, he's going to take me to this big huge realignment thing, and I'm worried I won't be able to defend myself. I've been reading through the book I mentioned, and found its arguments spectacularly unconvincing- but I'm having trouble articulating why. I haven't had enough experience with rationalism and debate to provide a strong defense, and I fear I'll be pressured into recanting if I fail.
That's why I'm here- in the upcoming week, I need intensive training in the defense of rationality against very specific, weak but troubling religious excuses. I really need to talk to people better trained than me about these specific arguments, so that I can survive the upcoming conference and assert my intellectual independence. Are there people I can be put in touch with, or online meetups where I can talk to people and arm myself? Should I start a discussion post, or what? I'm unfamiliar with the site structure here, so I could use some help.
Oh but dang if there aren't like over a thousand comments here, jeez i don't want to sound like i'm crying for attention but i'm TOTALLY CRYING FOR ATTENTION, srsly i need help you dudes
Replies from: wedrifid, Kawoomba, MixedNuts, Vaniver, TimS, OnTheOtherHandle, John_Maxwell_IV, Bundle_Gerbe, Grognor, Zaine, Desrtopa, shminux, beoShaffer, Ezekiel, Xenophon
↑ comment by wedrifid · 2012-07-23T00:52:09.930Z · LW(p) · GW(p)
my dad is unconvinced that the source of my frustrations is a conflicting belief system so much as a struggle with juvenile doubts.
That is, roughly speaking, what juvenile doubts are: the "juvenile" mind grappling with conflicts in the relevant socially provided belief system, prior to when it 'clicks' that the cool thing to do is to believe you have resolved your confusion about the 'deep' issue and to label it a juvenile question that you no longer have to think about, now that you are sophisticated.
Next week, from July 30 to August 3, he's going to take me to this big huge realignment thing,
You clearly do not want to go. His forcing you is a hostile act (albeit one he would consider justified) but you are going along with it. From this, and from your age, I infer that he has economic power over you. That is, you live with him or he is otherwise your primary source of economic resources. I will assume here that your Best Alternative To Negotiated Agreement (BATNA) sucks and you have essentially no acceptable alternative to submission to whatever power plays your father uses against you. Regardless of how the religious thing turns out, developing your potential for independence is something that is going to be worthwhile for you. Being completely in the power of another sucks! Having other options---even if it turns out that you don't take them---raises the BATNA and puts a hard limit on how much bullshit you have to put up with.
Now, the following is what I would do. It may or may not be considered acceptable advice by other lesswrong participants, since it abandons some favourite moral ideals, particularly the ones about 'lying' and 'speaking the truth no matter the cost'.
I haven't had enough experience with rationalism and debate to provide a strong defense
Providing a 'defense' would be a mistake, for the reasons Kawoomba describes. The people you are dealing with are not interested in rational discussion or Aumann agreement and you are not obliged to try yourself. They are there to overwhelm you with social and economic pressure into submitting to the tribe's belief system. Providing resistance just gives them a target to attack.
Honesty and trust are things people earn, and these people have not earned your respect and candor. Giving people access to your private and personal beliefs makes you vulnerable and can allow them to use your words to do political and social damage to you, in this case by making everyday life difficult for you and opening you up to constant public shaming. Fortunately that is better than being stoned to death as an apostate, but even so, there is no rule of the universe that you must confess or profess beliefs when they will be used against you. It is usually better to keep things to yourself unless you have some specific goal that involves being forthright (even if that goal is merely convenience and a preference for openness, in cases where the consequences are less dramatic than those you face).
Religion is not about literal beliefs about physics. They lie to themselves, then lie to you. You can lie too! You understand belief in belief already. You understand that religious beliefs (and all equivalent tribal beliefs) are about uttering the correct in-group signals. Most people convince themselves that they believe the right thing and then say that thing they 'believe' out loud. Your main difference is that you haven't lied to yourself as successfully. But why should thinking rationally be a disadvantage? Who says that you must self-sabotage just because you happened to let your far-mode beliefs get all entangled with reality? Sincerity is bullshit. Say what is most beneficial to say, and save being honest for people who aren't going to be dicks and use your words against you.
Brainwashing is most effective against those who most strongly resist. While it can take longer to brainwash people who firmly stake their identity on sticking to a contradicting belief, it is those people who resist strongest who are most likely to remain brainwashed. Those who change their mind quickly to make the torture stop (where torture includes shaming and isolation from like-minded people) tend to quickly throw off the forced beliefs soon after the social pressure to comply is removed. (I forget the source - is it in Cialdini?) If you make confessing the faith some sort of big deal that must be fought, then your brain is more likely to rationalise that it must have been properly convinced if it was willing to make such a dramatic confession. The hazing effect is stronger.
Precommit to false confessions. Go into the brainwashing conference with the plan to say all the things that indicate you are a devout Christian who has overcome his doubts. Systematically lying isn't all that much of a big deal to humans and while it is going to change your beliefs somewhat in the direction of the lies the effect will be comparatively far, far weaker given that you know you are lying out of contempt and convenience.
Fogging is amazing. Have you ever tried to have a confrontation with someone who isn't resisting? I've tried, even roleplaying with that as the explicit goal and I found it ridiculously difficult. It takes an extremely talented and dedicated persuader to be able to continue to apply active pressure when you are giving them nothing to fight against. Frankly, none of the people you are likely to encounter, including your father, would be able to do that even if they tried. They just aren't that good. You don't want to be barraged with bullshit. Saying the bullshit back to them a couple of times makes the bullshit stop. No brainer.
Are there people I can be put in touch with, or online meetups where I can talk to people and arm myself?
Sure, but I suggest meeting with like-minded people for your own enjoyment, and so you don't develop the unhealthy identity of the lone outsider. That, and rationalists know cool stuff and have some useful habits that rub off. Where do you live? Are there lesswrong meetups around?
↑ comment by Kawoomba · 2012-07-22T20:44:57.930Z · LW(p) · GW(p)
Hi Benedict!
Bad news first: You will not be able to defend yourself. This is not because you're 18, it's not because you can't present your arguments in a spectacular fashion.
It is because no one will care about your arguments; they will wait for the first chance to bring out some generic counter-argument, probably centering on how they will be there for you in your time of implied juvenile struggle, further belittling you.
And - how aggravating - this is actually done in part to protect you, to protect the relationship with your dad. With the kind of social capital, pride and identity that's on the line for your father, there is no way he could acknowledge you being right - he'd have to admit to himself that he's a phony in his own eyes, and a failure as a parent and pastor in the eyes of his peers.
To him it may be like you telling him he wasted his life on an imaginary construct, while for you it's about him respecting your intellectual reasoning.
Maybe the rational thing to do is not to strive for something that's practically unattainable - being respected as an atheist on the basis of your atheist arguments - but instead to focus on keeping the relationship with your parent intact while you go do your own thing anyway. Mutual respect of one another's choices is great in a family, but it may not be a realistic goal given your situation, at least with respect to discussing god.
Good news: While this is such a defining issue for your father, is it a defining issue for you to tell your father publicly your new stance? How hard/easy would it be to let him continue with his shtick, retain the relationship, and still live your life as an open atheist for all intents and purposes - other than when with your family, where you can always act with mild disinterest?
Rational in this forum is mostly construed as "the stuff that works in optimising your terminal values". It is possible for you to be the "bigger man" here, depending on which of the above you value higher. But make no mistake - I doubt that you'll change anyone's opinion on god regardless.
↑ comment by MixedNuts · 2012-07-22T23:58:20.839Z · LW(p) · GW(p)
Go in panic mode.
This conference is not just making a case that Christianity is correct and debating about it. It's bombarding you with arguments for six days, where you won't hear an argument against Christianity - or if you do, it'll be awkward, rude dissent from people in inferior positions - where you won't be able to leave or have time alone to think, and where you're going against your will in the first place. This is a time for not losing your mind, not a time for changing it. Don't keep an open mind, don't listen to and discuss arguments, don't change your mind because they're right, don't let the atmosphere influence you. If it helps, you can think of it as being undercover among huge patriots and resisting the temptation to defect (and their ideology may be better than yours), or like being in a psychiatric hospital and watching out for abuse when you know the nurses will try to convince you your reactions are psychiatric symptoms (and they may well be).
So don't see anything at the conference as a social interaction or an exchange of ideas. Your goals are to get out of there, to block everything out, to avoid attention, and to watch sharply for anything fishy. Block out the speakers; just watch the audience. If there's a debate, be quiet and don't draw attention. If you're asked to speak, voice weak agreement, be vague, or pick peripheral nits. If you're asked to participate in group activities, go through the motions as unremarkably as you can. At the socials be a bit distant but mostly your usual self when making small talk, but when someone starts discussing one of the conference topics, pretend to listen and agree; smile and nod and say "Yes" and "Go on" and "Oh yeah, I liked that part" a lot. Lie like a rug if you must. Watch the social dynamics and the attitudes of everyone, and anything that looks like manipulative behavior. You'll be bored, but don't try to think about any kind of deep topic, even an unrelated one (doing math and physics problems in your head is OK; anything with a social or personal component is not). Try to get enough sleep and to eat well. Enjoy the ice cream. Don't think about anything related to the conference for a couple weeks afterward.
This is only short-term, and it won't help with your father; you probably want to handle that afterwards separately.
↑ comment by Vaniver · 2012-07-22T21:10:25.505Z · LW(p) · GW(p)
Hey! I've got a pastor father too, but thankfully my atheism doesn't seem to be a big deal for him. (It helps that I don't live nearby.)
I think the "conflicting belief system" is, as I understand it, the right model. There's a Christian worldview, which has some basic assumptions (God exists, the Bible is a useful source for learning about God, etc.), and there's a reductionist worldview, which has some basic assumptions (everything can be reduced to smaller parts, experiments are a useful source for learning about reality, etc.), and the picture you can build out of the reductionist worldview matches the world better than the picture you can build out of the Christian worldview. (There are, of course, other possible worldviews.)
I would not put much hope into being able to convince the people at this event that they should be atheists; I wouldn't even hope to convince them that you should be an atheist. And so the question becomes what your goals are.
If you're concerned about recanting your atheism and meaning it, the main thing I can think of that might be helpful is the how to change your mind sequence. You can keep that model in mind and compare the experience you're undergoing to it- it's unlikely that they'll be using rational means of persuasion, and you can point out the difference.
Are there people I can be put in touch with, or online meetups where I can talk to people and arm myself? Should I start a discussion post, or what? I'm unfamiliar with the site structure here, so I could use some help.
Starting a post in discussion is an alright idea; it'll work well if you mention specific arguments that you want to have responses to.
↑ comment by TimS · 2012-07-23T00:38:32.896Z · LW(p) · GW(p)
Welcome. I'm sorry that you are in such an awkward situation with your family. In terms of dealing with this conference, I can only echo what MixedNuts said (except for the panicking part). I've always found this quote interesting:
Adulthood isn't an award they'll give you for being a good child. You can waste . . . years, trying to get someone to give that respect to you, as though it were a sort of promotion or raise in pay. If only you do enough, if only you are good enough. No. You have to just . . . take it. Give it to yourself, I suppose. Say, I'm sorry you feel like that, and walk away. But that's hard
We have every reason to think that children's beliefs have no momentum - the evidence is right in front of us: they change their minds so often for such terrible reasons. By contrast, the fact that I disagree with another adult is not particularly strong evidence that the other person is wrong.
In other words, try to free yourself from feeling obligated to defend anything, or from feeling guilty for not engaging with those who wish to change your beliefs. You might consider explicitly saying "Social pressure is not evidence that you are right (or wrong)." If the people talking with you assert that they aren't using social pressure, then ask them to drop the debate. Their willingness to leave is a victory for your emotional state, and their refusal is strong evidence that arriving at true beliefs is not really their goal - but the proper reaction to that stance is to leave the conversation yourself, not to try to win the "you are being rude" debate.
In short, maximizing your positive emotional state doesn't rely on winning debates. Your goal should be to avoid having them at all. (If you hadn't already read the book your father found, I would have suggested declining to do so).
↑ comment by OnTheOtherHandle · 2012-07-23T03:09:55.126Z · LW(p) · GW(p)
I'm not sure how much specific atheist reading you've done, but I found this list to be very helpful at articulating and formalizing all those doubts, arguments and wordless convictions that "this makes no sense." This is also a handy look at what would be truly convincing evidence of the truth of a particular religion's claims. The rest of that author's website is also wonderful.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-08-02T01:41:47.972Z · LW(p) · GW(p)
Hey, I agree with what wedrifid said. I fell into the same trap of trying to beat religious nonsense out of people as a kid. It's a very sexy thing to think about, but it doesn't really get you anywhere, in my experience. My only additional advice is that you consider trying to make your "recapitulation" to Christianity convincing. For example, don't give in right away, and make up a story for where you went wrong and why you're a Christian again, e.g. "I thought that x, but now I see that y and z, so x is wrong. I guess maybe God exists after all."
Something to keep in mind when arguing with your dad (internally only): your dad is presenting you with evidence and arguments in favor of God's existence, but these amount to a biased sample. If you really want to know the truth, you should spend an equal amount of time hearing arguments from both Christians and atheists, or something like that.
Also, you can check internally if any of his arguments hold up to this test: http://commonsenseatheism.com/?p=8854
Replies from: wedrifid
↑ comment by wedrifid · 2012-08-02T04:08:14.879Z · LW(p) · GW(p)
Also, you can check internally if any of his arguments hold up to this test: http://commonsenseatheism.com/?p=8854
Hey! It's Luke!
↑ comment by Bundle_Gerbe · 2012-07-23T02:11:57.469Z · LW(p) · GW(p)
It does not sound to me like you need more training in specific Christian arguments to stay sane. You have already figured things out despite being brought up in a situation that massively tilted the scales in favor of Christianity. I doubt there is any chance they could convince you now if they had to fight on a level field. After all, it's not like they've been holding back their best arguments this whole time.
But you are going to be in a situation where they apply intense social pressure and reinforcement towards converting you. On top of that, I'm guessing maintaining your unbelief is very practically inconvenient right now, especially for your relationship with your dad. These conditions are hazardous to rationality, more than any argument they can give. You have to do what MixedNuts says. Just remember you will consider anything they say later, when you have room to think.
I do not think they will convert you. I doubt they will be able to brainwash you in a week when you are determined to resist. Even if they could, you managed to think your way out of Christian indoctrination once already; you can do it again.
If you want to learn more about rationality specific to the question of Christianity, given that you've already read a good amount of material here about rationality in general, you might gain the most from reading atheist sites, which tend to spend a lot of effort specifically on refuting Christianity. Learn more about the Bible from skeptical sources; if you haven't before, you'll be pretty amazed how much of what you've been told is blatantly false and how much about the Bible you don't know. (For instance, Genesis 1 and 2 contain two quite contradictory creation stories, and the gospels' versions of the resurrection are impossible to reconcile. Also, the gospels of Matthew and Luke are largely copied from Mark, and the entire resurrection story is missing from the earliest versions of Mark.) I unfortunately don't know a source that gives a good introduction to Bible scholarship. Maybe someone else can suggest one?
↑ comment by Grognor · 2012-07-23T01:41:08.968Z · LW(p) · GW(p)
Hello, friend, and welcome to Less Wrong.
I do think you should start a discussion post, as this seems clearly important to you.
My advice to you at the moment is to brush up on Less Wrong's own atheism sequence. If you find that insufficient, then I suggest reading some of Paul Almond's (and I quote):
If you find that insufficient, then it is time for the big guy, Richard Dawkins:
If you are somehow still unsatisfied after all this, lukeprog's new website should direct you to some other resources, of which the internet has plenty, I assure you.
Edit: It seems I interpreted "defend myself" differently from all the other responders. I was thinking you would just say nothing and inwardly remember the well-reasoned arguments for atheism, but that's what I would do, not what a normal person would do. I hope this comment wasn't useless anyway.
↑ comment by Zaine · 2012-07-23T09:39:20.589Z · LW(p) · GW(p)
While wading through all these responses for the very specific response you are looking for (which some charitable LW'er will probably provide if this thread is commented upon frequently enough), you might want to read "How to Win Every Argument - An Introduction to Critical Thinking" by Nicholas Capaldi. It offers a brief overview of logic and rational argumentation, and touches upon fallacies and what this site calls the 'Dark Arts', which should help in arming you against common attacks. If you are mathematically minded, but don't want to go into too much depth, you might want to check out "Sherlock's Logic".
Mind, the former text is more of a survey course, whereas the latter is more of an introductory course.
I have read that Luke Muehlhauser has worked through a dilemma similar to yours; you may find his blog valuable.
↑ comment by Desrtopa · 2012-07-23T02:11:28.915Z · LW(p) · GW(p)
Should I start a discussion post, or what? I'm unfamiliar with the site structure here, so I could use some help.
I'm sure some people will offer other counsel than preparing yourself and giving the most persuasive arguments you can, which may be worth taking seriously, but if you make such a discussion thread I'm confident that you will receive responses to your queries, and think it is highly probable that the post will receive positive karma.
↑ comment by Shmi (shminux) · 2012-07-23T01:54:58.011Z · LW(p) · GW(p)
have recently been reading through the Sequences (still wading through the hard science of the Quantum Physics sequence).
The value of this particular sequence is a topic of open debate on LW, so don't get stuck on it; skip it on the first reading and revisit it later, after you've covered more relevant material.
having an apostate son would damage both his pride and social status
While this would be one way to confront him, by pointing out that he is committing mortal sins of wrath and pride, your odds of success are not good. He is a trained professional heavy-weight who has control over you and is not interested in playing by the rules, except for his own. If you play by his rules, you lose. Think about how you can redefine the game, Kirk-like, to your advantage.
As for the meetups, there is one in NC, not sure if this is close enough to you.
↑ comment by beoShaffer · 2012-07-23T02:04:26.364Z · LW(p) · GW(p)
You might want to wipe this site from your search and browsing history. Also, is it possible for you to feign/induce illness?
↑ comment by Ezekiel · 2012-07-23T00:57:44.584Z · LW(p) · GW(p)
I agree (in general) with Xenophon's advice: Calm down, do whatever you're comfortable with spiritually, and in the worst case scenario call it "God" to keep the peace with whoever you want to keep the peace with.
With that said, if you still want advice, I deconverted myself a year ago and have since successfully corrupted others, and I've been wanting to codify the fallacies I saw anyway. Before I start: bear in mind that you might be wrong. I find it very unlikely that any form of Abrahamic theism is true, but if you care about the truth you have to keep an open mind.
Here are some common fallacious arguments and argumentative techniques I've seen used by religion (and other ideologies, of course). They include exercises which I think you'd benefit from practising; if you get stuck on any of 'em, send me a PM and I'll be glad to help out.
- Abuse and Neglect of Definitions
Whenever anyone tries to convince you of the truth or falsehood of some claim, make sure to ask them exactly what that means - and repeat the question until it's totally clear. You'd be amazed how many of the central theological tenets of Abrahamism are literally meaningless, since almost no-one can define them, and among those who can no two will give the same definition.
For example: God created the Universe. Pretty important part of the theology, right? So what does it mean, exactly?
A smart theist will say: God caused the Universe to exist.
Okay, great. What does "cause" mean?
Seriously? You know what "cause" means; it's a word you use all the time.
(This is a classic part of this fallacy. In our own minds we have definitions that work in everyday life, but not for talking about something as abstract as God. In this specific case, the distinction is as follows:)
When I say "X caused Y" (where X and Y are events) I mean: within the laws of nature as I know them Y wouldn't have happened if X hadn't. But God created the Universe outside (or "before") any laws of nature, so what does "cause" mean?
... and I've got no idea what an Abrahamist theist would answer, since I've yet to hear one who could. Although of course I'd love to.
For homework: Play the same game, in your head (I assume your old religious self is still knocking around up there) or with a smart religious friend, on some of the other basic tenets of Abrahamism: God is all-powerful, God is all-knowing, God is (all-)good, God is formless. Similarly with any statement of the form "God loves X", "God wants X", or even "God did X" or "God said X" (how can the Cause of Everything be said to have "said" any statement more than any other?)
- Intellectual Package Deals
Most religious doctrines are comprised of a huge number of logically independent statements. In Abrahamic theism, we have the various qualities of God mentioned above, as well as a bunch of moral axioms, beliefs regarding the afterlife, and so on. "Proofs" of the doctrines as a whole will often treat the whole collection as a unit, so they only need to bother proving a small fraction.
For instance: A proof of Judaism one of my teachers was fond of was based on proving the Revelation at Mt Sinai, where God thunderously announced Its existence and several commandments (there's a dispute as to how many) to six hundred thousand families.
Okay, let's say I accept the proof that the Revelation happened. This points to a very powerful speaker, but does it indicate that the speaker is all-powerful? That it is good? That it is telling the truth when it claims to be the being that brought us out of Egypt? That I am morally obligated to do what it wants?
For homework: Write down as many of the axioms of Christianity as you can think of. Once you have a list, look at the behaviour of practising Christians you know, and try to see if it actually follows from the axioms you've got. Add axioms and repeat. (I did this with a religious friend of mine about Orthodox Judaism, and we got to at least fifteen before we got bored.)
Query your memory, Google, your books, and whichever humans you feel comfortable asking, for proofs of Christianity. Check off which of the axioms on your list they actually address - before you even bother to check the proofs for coherence.
- X is not satisfactorily explained by modern science... therefore God/soul/etc.
(Including the specific cases where X=the existence of the universe, complex life, or consciousness.)
Aside from almost always falling under #2 (and sometimes #1 as well), arguments of this form are mathematically fallacious. To understand why, though, you have to do the maths. You can find it on this site as “Bayes's Rule” and it's well worth reading the full-length articles about it, but the short version is as follows:
We have two competing models, A and B, and an observation E. Then E will constitute evidence for A over B if and only if A predicts E with higher probability than B predicts E – that is, if I were to query an A-believer and a B-believer before I ran the experiment, the former would be more likely to get it right than the latter.
This is easiest to see in cases where the models predict outcomes with very high or low probability. For example: If I ask a believer in Newtonian mechanics whether a rock will keep moving after I throw it (in a vacuum), he'll say “yes” (probability 1). If I ask an Aristotelian physicist, he'll say “no” (probability 0). And lo, the rock did keep moving. Therefore, the Newtonian assigned a higher probability to (what we now know is) the correct outcome than the Aristotelian, so this experiment is evidence for Newtonianism over Aristotelianism.
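Ezekiel's criterion can be put in numbers. Here is a toy sketch in odds form (the specific probabilities are illustrative assumptions, not from the comment itself):

```python
# Toy illustration: an observation E is evidence for model A over model B
# exactly when P(E|A) > P(E|B). In odds form, Bayes' rule says the
# posterior odds equal the prior odds times the likelihood ratio.

def posterior_odds(prior_odds, p_e_given_a, p_e_given_b):
    """Posterior odds of A over B after observing E."""
    return prior_odds * (p_e_given_a / p_e_given_b)

# The rock example: the Newtonian predicts the thrown rock keeps moving
# with probability ~1; the Aristotelian predicts it stops (~0, written as
# 0.01 here to avoid dividing by zero). We observe that it keeps moving.
odds = posterior_odds(prior_odds=1.0, p_e_given_a=0.99, p_e_given_b=0.01)
print(odds)  # roughly 99: the observation strongly favors the Newtonian

# If both models had predicted E equally well, E would be no evidence at
# all: the odds would not move.
unchanged = posterior_odds(prior_odds=1.0, p_e_given_a=0.5, p_e_given_b=0.5)
print(unchanged)  # 1.0
```

The "X is unexplained, therefore God" arguments below fail at exactly this step: without a number for P(E|theism), the likelihood ratio can't be computed at all.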
Got that? Then let's take a specifically religious example: as far as I know, modern science does not have a good explanation for the origin of life. We have a vague idea, but our best explanation is based on some pretty astounding coincidences. Religion, on the other hand, has: God created life. There's your explanation.
But translating into maths we get: if atheist science were true, the probability of life arising would be low, since it would take some unlikely coincidences. If theist science (normal laws of physics + God) were true, the probability of life arising would be...
Wait a second. What's the probability of God deciding to create life? We might say we have no idea, since God is inscrutable, in which case the argument obviously can't continue. But the clever apologist might say: God is good, which is to say It wants happiness. Therefore, it must create minds. So the probability of it creating life is actually quite high.
Except that God, being all-powerful, is perfectly capable of making happiness without life – a bunch of super-happy abstract beings like Itself, for example. So what's the probability of It “bothering” to create life? It has no reason not to, having infinite time and energy, but It has an infinite number of courses of action – what's the probability of It picking the specific one we observed happening?
I'm tempted to say that 1/(infinity) = 0, but that's not mathematically sound, so we'll leave it at “I don't know”. Regardless, the point is that arguments of this form fail once you actually look for numbers.
This answer is already long enough to qualify as a post in itself, so I'll leave off here (although there's lots more to talk about). Feel free to ask if I wasn't clear, or once you've finished all the exercises.
↑ comment by Xenophon · 2012-07-22T22:16:07.571Z · LW(p) · GW(p)
Hey Benedict,
My name is Wes and I am a new member here as well. I read your intro and all I have to say is just don't let anything bother you. Adopt your own form of spirituality, and let it be non-passive resistance, Zen, or following Jesus' Third Way. There needs to be nothing theistic about it, simply rational and philosophical. When you come into an argument with your old man or your family, just don't be perturbed. If they love you, they should let you make decisions for yourself. A teacher of mine once told me, "Making up your own mind is the only freedom we really have."
If you realize what all religions really strive for, then I think a compromise can be reached. You can have a spiritual side, you can admire and stand in awe of the infinite, the eternal, and the beauty of nature and what they call 'God'. Yet you do not need to call it under the name of the Christian God or give it any one singular definition. Recognize that there is a Higher Power, and your father will agree and will understand. When he prays, you meditate. It will simply be 'God', as you understand him. This power greater than yourself can simply be a group of humanist and rationalist people who gather on-line to share each other's wisdom. This collective here at LW is more powerful than you or me, and any one of us on our own.
Or it can be something deistic, pantheistic, or non-theistic - the choice is yours, and shall always be.
Just know that your way is ultimately the right one for you, and one day they might realize the inadequacies of anthropomorphic or cultural-specific monotheism. Practice turning the other cheek (Jesus was a philosopher- such a good one that weaker men deified him). They will see your enlightenment, whether you call it spiritual or not, through not your words, but your deeds. In the end, I'm not qualified to say this and mean no offense, but I'm guessing LW is not the spot for overcoming religion. Nor for overcoming family issues. Check out r/atheism or PM me at http://www.reddit.com/r/futurology/ my friend.
W
Replies from: Oscar_Cunningham, MixedNuts↑ comment by Oscar_Cunningham · 2012-07-23T17:35:54.540Z · LW(p) · GW(p)
If they really love you, they'll let you make decisions for yourself.
This isn't actually true. If your parents don't let you do what you want you shouldn't modus tollens to thinking they don't love you. That would be terrible.
Replies from: Xenophon↑ comment by Xenophon · 2012-07-25T05:59:25.815Z · LW(p) · GW(p)
It seems like my words are changed in your comment. Isn't there a difference between what you want and the decisions you make for yourself?
I decide that it is not worth our discourse whether or not Benedict's parents really love him.
I think we're ending up doing this:
| Oh but dang if there aren't like over a thousand comments here, jeez i don't want to sound like i'm crying for attention but i'm TOTALLY CRYING FOR ATTENTION, srsly i need help you dudes
↑ comment by MixedNuts · 2012-07-23T00:14:24.507Z · LW(p) · GW(p)
How do you know that Jesus was a philosopher?
Replies from: TimS, Xenophon↑ comment by TimS · 2012-07-23T00:42:27.898Z · LW(p) · GW(p)
Assuming he was real, not divine (and knew it), and his ideas (e.g. Sermon on the Mount) were accurately depicted in the Bible, what would you call him?
The Jesus I'm describing is fervently Jewish, in case that wasn't clear.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-07-23T04:55:29.076Z · LW(p) · GW(p)
Street preacher? Movement organizer? Dissident rabbi?
Replies from: Desrtopa↑ comment by Desrtopa · 2012-07-23T05:36:50.937Z · LW(p) · GW(p)
I'd lean towards cult leader.
Edit in response to downvote: while I can certainly see how this could be interpreted as a simple attack on Christianity, considering that the figure in question apparently encouraged followers to give up their belongings to live in communes and made statements strongly indicative of encouraging followers to regard family members who were not followers as outgroup members, I think this is a fair descriptor.
Replies from: wedrifid↑ comment by wedrifid · 2012-07-23T05:56:15.129Z · LW(p) · GW(p)
I'd lean towards cult leader.
He (whether fictional or otherwise) seemed more like a celebrity than a cult leader. The real cult leader was Saul/Paul.
Replies from: Nornagest, Desrtopa↑ comment by Nornagest · 2012-07-23T06:12:56.978Z · LW(p) · GW(p)
It's really hard to say, considering that practically everything recorded about him seems to have been filtered through Paul at some stage. You can take a stab at it with the help of some pretty sophisticated textual analysis methods (I think the Jesus Seminar did a pretty good, though not unimpeachable, job of this), but ultimately an analysis always depends as much on readers' preconceptions as it does on the actual text. Kind of like trying to get a handle on Socrates' ideas when all we've got to base them on is Plato and a handful of contemporary commentaries -- except worse, since analogous commentaries don't exist in this case.
I'd lean toward "dissident rabbi" based on the charitable version of my reading of the New Testament, but readings of the New Testament are notoriously idiosyncratic for the same reasons.
↑ comment by Xenophon · 2012-07-25T06:10:26.328Z · LW(p) · GW(p)
Hindsight. How do you know he wasn't? No matter what label you choose to give (H)im, that isn't the point though, if you ask me.
By discussing this, we're only giving in to this:
| Oh but dang if there aren't like over a thousand comments here, jeez i don't want to sound like i'm crying for attention but i'm TOTALLY CRYING FOR ATTENTION, srsly i need help you dudes
Replies from: wedrifid↑ comment by wedrifid · 2012-07-25T06:35:11.140Z · LW(p) · GW(p)
By discussing this, we're only giving in to this:
| Oh but dang if there aren't like over a thousand comments here, jeez i don't want to sound like i'm crying for attention but i'm TOTALLY CRYING FOR ATTENTION, srsly i need help you dudes
What do you mean "only"? In the context of a thorough introduction, and a relevant request for advice lampshading his degree of desire for an answer like this is certainly excusable.
It's not "giving in" when you choose to do something you reflectively endorse doing without being subject to any more manipulation than a forthright request.
Replies from: Xenophon↑ comment by Xenophon · 2012-07-25T06:42:27.414Z · LW(p) · GW(p)
I do not presume to know. I am a novel LWian.
Indeed, I hoped to not give in to Benedict's "totally crying for attention". Yet, here we are discussing it even further. I am new to the site, and assumed it was not the place for paternal issues or internal conflicts with your God/deity of choice.
comment by Bakkot · 2012-01-01T07:55:21.543Z · LW(p) · GW(p)
Replies from: Alejandro1, occlude, Emile, drethelin, wedrifid, MileyCyrus, TimS, Multiheaded, Solvent, Strange7, Multiheaded, ArisKatsaris, None, Multiheaded, orthonormal, None, occlude, Vaniver↑ comment by Alejandro1 · 2012-01-01T21:48:50.887Z · LW(p) · GW(p)
Several people have already given good answers to your position on infanticide, but they haven't mentioned what is in my opinion the crucial concept involved here: Schelling points.
We are all agreed that it is wrong to kill people (meaning fully conscious and intelligent beings). We agree that adult human beings are people (perhaps excluding those in irreversible comas). The law needs to draw a bright line separating those beings which are people, and hence cannot be killed, from those who are not. Given the importance of the "non-killing" rule to a functioning society, this line needs to be clear and intuitive to all. Any line based on some level of brain development does not satisfy this criterion.
There are only two Schelling points, that is, obvious, intuitive places to draw the line: conception and birth. Many people support the first one, and the strongest argument for the anti-abortion position is that conception is in fact in many ways a better Schelling point than birth, since being born does not affect the intrinsic nature of the infant. However, among people without a metaphysical commitment to fetus personhood, most agree that the burdens that prohibition of abortion place on pregnant women are enough to outweigh these considerations, and make birth the chosen Schelling point.
There is no other Schelling point at a later date (your ten-month rule seems arbitrary to me), and a rule against infanticide does not place such strong burdens on mothers (giving the child up for adoption is always an option). So there is no good reason to change the law in the direction you propose. Doing so would undermine the strength of the universal agreement that "people cannot be killed", since the line separating people from non-people would be obscure and arbitrarily drawn.
Replies from: Bakkot, Oligopsony, daenerys↑ comment by Bakkot · 2012-01-01T22:06:02.866Z · LW(p) · GW(p)
Replies from: Emile, Alejandro1↑ comment by Emile · 2012-01-02T00:25:53.066Z · LW(p) · GW(p)
But there is no universal agreement on the "age of informed consent"; it varies from country to country! And yes, the fact that the limit is arbitrary does undermine its strength; there are often scenarios of "reasonable" sex (in that most people don't consider it wrong) that would be considered statutory rape or the like if the law were taken to the letter.
(Also, heck, 10 months is a pretty crappy limit, why not 8 months five days and 42 minutes? 12 months would be much cleaner)
Replies from: Bakkot↑ comment by Alejandro1 · 2012-01-02T00:53:26.825Z · LW(p) · GW(p)
This only holds in a society where people aren't sufficiently intelligent for "is obviously not a person" not to work as the criterion. We probably live in such a society, but I hope we don't forever.
People disagree about obviousness of such things. For some people, a fetus is obviously a person too. For others, even a mentally deficient adult might not qualify as being obviously a person. Unlike you, I don't expect these disagreements to disappear anytime soon, and they are the reason that the law works better with bright Schelling point lines, if such exist.
This was the reason age was chosen, rather than neurological development.
Age is non-ambiguous, but not non-arbitrary.
Re your final objection, I agree that there are cases such as sexual consent where there are no clear Schelling points, and we need arbitrary lines. This does not mean that it is not best to use Schelling points whenever they exist. In the case of sexual consent, the arbitrariness of the line does have some unfortunate effects: for example, since the lines are drawn differently in different jurisdictions, people who move across jurisdictions and are not especially well informed might commit a felony without being aware. There are also problems with people not being aware of their partner's age, etc.
Such problems are not too big and in any case unavoidable, but consider the following counterfactual: if all teenagers underwent a significant and highly visible discrete biological event at exactly age 16, it would make sense (and be an improvement over current law) to have a universal law using this event as the trigger for the age of consent, even if the event had no connection to sexual and mental development and these were continuous. The event would be a Schelling point, as birth is for personhood.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-02T02:08:03.436Z · LW(p) · GW(p)
Replies from: Alejandro1↑ comment by Alejandro1 · 2012-01-02T02:50:38.481Z · LW(p) · GW(p)
This is a very good response, that allows us to make our disagreement more precise. I agree that choosing menstruation, or its hypothetical unisex counterpart, is unreasonable because it is too early. I disagree that birth is too early in the same way. Pretty much everyone in our society agrees that 12-year olds cannot meaningfully consent to sex (especially with adults), whereas many believe 6-month old children to be people -- in fact, many believe fetuses to be people! You might say that they are obviously wrong, but the "obviously" is suspicious when so many disagree with you, at the very least for Aumann reasons.
To put it in another way: What makes you so certain that birth is so far off from what is reasonable as a line for personhood, when you are willing to draw your line at 10 months? That is much closer to birth than 17 is to 12 years old.
Also, I think your analogy needs a bit of amending: the relevant question is, if there was a visible unisex menstruation happening at 17 years old, and an established tradition of taking that as the age of consent, why on earth would a society change the law to make it 16 years and 2 months instead?
Replies from: Bakkot, prase↑ comment by Bakkot · 2012-01-02T03:00:34.944Z · LW(p) · GW(p)
Replies from: Alejandro1↑ comment by Alejandro1 · 2012-01-02T03:35:35.402Z · LW(p) · GW(p)
While true, I suspect most or all of those people would have a hard time giving a good definition of "person" to an AI in such a way that the definition included babies, adults, and thinking aliens, but not pigs or bonobos. So yes, the claim I am implicitly making with this (or any other) controversial opinion is that I think almost everyone is wrong about this specific topic.
One rough effort at such a definition would be: "any post-birth member of a species whose adult members are intelligent and conscious", where "birth" can be replaced by an analogous Schelling point in the development of an alien species, or by an arbitrarily chosen line at a similar stage of development, if no such Schelling point exists.
You might say that this definition is an arbitrary kludge that does not "carve Nature at the joints". My reply would be that ethics is adapted for humans, and does not need to carve Nature at intrinsic joints but at the places that humans find relevant.
Your point about different rates of development is well taken, however. I am also not an expert in this topic, so we'll have to let it rest for the moment.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-02T03:46:55.169Z · LW(p) · GW(p)
Replies from: Alejandro1, Multiheaded↑ comment by Alejandro1 · 2012-01-02T15:08:44.087Z · LW(p) · GW(p)
For computers, hardware and software can be separated in a way that is not possible with humans (with current technology). When the separation is possible, I agree personhood should be attributed to the software rather than the hardware, so your machine should not be considered a person. If in the future it becomes routinely possible to scan, duplicate and emulate human minds, then killing a biological human will probably also be less of a crime than it is now, as long as his/her mind is preserved. (Maybe there would be a taboo instead about deleting minds with no backup, even when they are not "running" on hardware).
It is also possible that in such a future, where the concept of a person is commonly associated with a mind pattern, legalizing infanticide before brain development sets in would be acceptable. So perhaps we are not in disagreement after all, since on a different subthread you have said you do not really support legalization of infanticide in our current society.
I still think there is a bit of a meta disagreement: you seem to think that the laws and morality of this hypothetical future society would be better than our current ones, while I see it as a change in what are the appropriate Schelling points for the law to rule, in response to technological changes, without the end point being more "correct" in any absolute sense than our current law.
↑ comment by Multiheaded · 2012-01-02T08:33:32.409Z · LW(p) · GW(p)
Should this machine be considered a person?
Well, yes. This seems obvious to me.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-02T18:49:05.191Z · LW(p) · GW(p)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T20:53:14.690Z · LW(p) · GW(p)
Oh, of course. I've taken it that you were asking about a case where such software had indeed been installed on the machine. The potential of personhood on its own seems hardly worth anything to me.
↑ comment by prase · 2012-01-02T14:20:14.197Z · LW(p) · GW(p)
Pretty much everyone in our society agrees that 12-year olds cannot meaningfully consent to sex (especially with adults)
As a data point for your statistics, I think that a 12-year old can meaningfully consent to sex. When it comes to issues of pregnancy and having children, the consequences are greater and I don't think such young people can consent to this, but fortunately sex and children can be kept separate today with only weak side effects.
Replies from: Strange7↑ comment by Strange7 · 2012-06-05T02:05:07.751Z · LW(p) · GW(p)
I think that a 12-year old from a society with sensible policies would be able to give meaningful consent, but for some reason an enormous amount of work has been put into keeping American 12-year olds dangerously ignorant. That needs to be fixed first.
↑ comment by Oligopsony · 2012-01-02T03:30:23.349Z · LW(p) · GW(p)
I do think there are some advantages to setting the cutoff point just slightly later than birth, even if by just a few hours:
- evaluations of whether a person should come into existence can rest on surer information when the infant is out of the womb
- non-maternal reproductive autonomy - under the current legal personhood cutoff, I can count this as an acceptable loss, as I consider maternal bodily autonomy and the interests of the child to be more important, but with infanticide all three can be reconciled
- psychologically, parents (especially fathers) might feel more buy-in to their status, even if almost none actually end up choosing otherwise, and if infant non-personhood catches on culturally infant deaths very close to births might cause less grief among parents
(All this assumes that late-term abortions are a morally acceptable choice to make in their own right, of course, rather than something which must be legally tolerated to preserve maternal bodily autonomy.)
Replies from: Strange7↑ comment by daenerys · 2012-01-02T01:44:07.721Z · LW(p) · GW(p)
Mild updating of my original position due to this conversation:
I still don't have many moral qualms about allowing parents to kill children, but realize that actually legalizing it in our current society would lead to some unintended consequences, due to considerations such as the Schelling point, and killing infants as a gateway to further sociopathic behaviours.
Part of my difficulty is that some humans, such as infants, have less blicket than animals. If it's ok to kill animals, then there's no reason to say it's not ok to kill blicket-less humans. Then I realize that even though it's legal to kill animals, it's still something I can't do for anything except certain bugs. Even spiders I let be, or take outside.
So maybe a wiser way to reconcile these would be to say that since infants have less blicket than animals, and we don't kill infants, that we also shouldn't kill animals. It's what I live by anyway, and seems to cause less disturbance than saying that since infants have less blicket than animals and we kill animals, that it's ok to kill infants.
Replies from: wedrifid, Zetetic, FAWS, Bakkot↑ comment by wedrifid · 2012-01-02T01:59:35.715Z · LW(p) · GW(p)
Part of my difficulty is that some humans, such as infants, have less blicket than animals. If it's ok to kill animals, then there's no reason to say it's not ok to kill blicket-less humans. Then I realize that even though it's legal to kill animals, it's still something I can't do for anything except certain bugs. Even spiders I let be, or take outside.
Don't worry, there would probably be a baby killing service if it were legal. Just like we have other people to kill animals for us.
↑ comment by Zetetic · 2012-01-02T07:56:11.226Z · LW(p) · GW(p)
If it's ok to kill animals, then there's no reason to say it's not ok to kill blicket-less humans.
I just want to point out this alternative position: Healthy (mentally and otherwise) babies can gain sufficient mental acuity/self-awareness to outstrip animals in their normal trajectory - i.e. babies become people after a while.
Although I don't wholeheartedly agree with this position, it seems consistent. The stance that such a position would imply is that babies with severe medical conditions (debilitating birth defects, congenital diseases etc.) could be killed with parental consent, and fetuses likely to develop birth defects can be aborted, but healthy fetuses cannot be aborted, and healthy babies cannot be killed. I bring this up in particular because of your other post about the family with the severely disabled 6-year-old.
I think it becomes a little more complicated when we're talking about situations in which we have the ability to impart self-awareness that was previously not there. On the practical level I certainly wouldn't want to force a family to either face endless debt from an expensive procedure or a lifetime of grief from a child that can't function in day to day tasks. It also brings up the question of whether to make animals self-aware, which is... kind of interesting but probably starting to drift off topic.
↑ comment by occlude · 2012-01-01T11:12:51.516Z · LW(p) · GW(p)
Infanticide of one's own children should be legal (if done for some reason other than sadism) for up to ten months after birth. Reason: extremely young babies aren't yet people.
I would recommend against expressing this opinion in your OKCupid profile.
Replies from: Bakkot↑ comment by Emile · 2012-01-01T12:43:09.018Z · LW(p) · GW(p)
Infanticide of one's own children should be legal (if done for some reason other than sadism) for up to ten months after birth. Reason: extremely young babies aren't yet people.
Arbitrary limits like "ten months" don't make for good rules - especially when there's a natural limit that's much more prominent: childbirth.
What exactly counts as "people" is a matter of convention; it's best to settle on edges that are as crisp as possible, to minimize potential disagreement and conflict.
Also "any reason other than sadism", eh? Like "the dog was hungry" would be okay?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-01T13:20:57.366Z · LW(p) · GW(p)
EDIT: in the ensuing discussion, we came to an agreement that the psychopathy argument is only true of our present society, and, while strengthening our reasons to keep infanticide illegal right now, wouldn't apply to someplace without a strong revulsion to infanticide in the first place. I've updated my stance and switched to other arguments against infanticide-in-general.
Replies from: Emile↑ comment by Emile · 2012-01-01T17:22:00.311Z · LW(p) · GW(p)
I'm sorry, I just can't parse your sentence, especially "anyone who seriously doesn't understand why punishing all parents able to kill their infant is an incredibly good idea". I suspect you chained too many clauses together and ended up saying the opposite of what you meant.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-01T17:24:29.690Z · LW(p) · GW(p)
Followed up with a clarification here.
↑ comment by drethelin · 2012-01-01T20:32:58.342Z · LW(p) · GW(p)
I broadly agree that babies aren't people, but I still think infanticide should be illegal, simply because killing begets insensitivity to killing. I know this has the sound of a slippery slope argument, but there is evidence that desire for sadism in most people is low, and increases as they commit sadistic acts, and that people feel similarly about murder.
From The Better Angels of Our Nature: "Serial killers too carry out their first murder with trepidation, distaste, and in its wake, disappointment: the experience had not been as arousing as it had been in their imaginations. But as time passes and their appetite is rewhetted, they find the next one easier and more gratifying, and then they escalate the cruelty to feed what turns into an addiction."
Similarly, cathartic violence against non-person objects (http://en.wikipedia.org/wiki/Catharsis#Therapeutic_uses) can lead to further aggression in personal interactions.
I don't think we want to encourage or allow killing of anything anywhere near as close to people as babies. The psychological effects on people who kill their own children and on a society that views the killing of babies as good are too potentially terrible. Without actual data, I can say I would never want to live in a society that valued people as little as Sparta did.
Replies from: Bakkot, FiftyTwo, None, Bakkot, Multiheaded↑ comment by Bakkot · 2012-01-01T21:27:36.652Z · LW(p) · GW(p)
Replies from: drethelin↑ comment by drethelin · 2012-01-01T21:35:07.981Z · LW(p) · GW(p)
We're not talking about making new laws, and we're certainly not encouraging the government to make indiscriminate laws about things that are possibly bad. This is a law that already exists, where changing it would lead to a worse world. Feel free to campaign against those other laws you mentioned if someone tries to bring them into existence, but you shouldn't be trying to get baby killing legalized.
Replies from: Bakkot↑ comment by FiftyTwo · 2012-01-02T01:32:32.457Z · LW(p) · GW(p)
I don't think we want to encourage or allow killing of anything anywhere near as close to people as babies.
By what criterion do you consider babies sufficiently "close to people" that this is an issue, but not late term fetuses or adult animals? Specific example, an adult bonobo seems to share more of the morally relevant characteristics of adult humans than a newborn baby but are not afforded the same legal protection.
Replies from: drethelin↑ comment by drethelin · 2012-01-02T04:18:00.695Z · LW(p) · GW(p)
I don't think killing bonobos should be particularly legal.
As for fetuses: since my worry is psychological, I don't think there's a significant risk of desensitization to killing people, because the act of undergoing surgery or taking Plan B is so vastly removed from the act of murder.
Replies from: None↑ comment by [deleted] · 2012-01-02T09:37:14.587Z · LW(p) · GW(p)
What if only surgeons were licensed to perform infanticide on request, and it had to be done in privacy, away from the parents' eyes?
That way desensitisation isn't any worse than it already is for surgeons or doctors who perform abortions, especially if anesthetics or poison are used. Before anyone raises the Hippocratic oath as an objection, let me give them a stern look and ask them to consider the context of the debate and figure out on their own why it isn't applicable.
Replies from: drethelin, Multiheaded↑ comment by Multiheaded · 2012-01-02T09:51:20.692Z · LW(p) · GW(p)
What if only surgeons were licensed to perform infanticide on request, and it had to be done in privacy, away from the parents' eyes?
The damage would've been already done elsewhere by that point. The parent would likely have already
1) seen their born, living infant, experiencing what their instincts tell them to (if wired normally in this regard)
2) made the decision and signed the paperwork
3) (maybe) even taken another look at the infant with the knowledge that it's the last time they see it
I feel that every one of those little points could subtly damage (or totally wreck) a person.
Replies from: None↑ comment by [deleted] · 2012-01-02T10:04:46.089Z · LW(p) · GW(p)
I'm afraid you may have your bottom line written already. In the age of ultrasound and computer-generated images, or in a future age of transhuman sensory enhancement or of fetuses grown outside the human body, the exact same argument can be used against abortion.
Especially once you remember the original context was a 10 month old baby, not say a 10 year old child.
Replies from: Multiheaded, Multiheaded↑ comment by Multiheaded · 2012-01-02T10:14:01.023Z · LW(p) · GW(p)
In the age of ultrasound and computer-generated images, or in a future age of transhuman sensory enhancement or of fetuses grown outside the human body, the exact same argument can be used against abortion.
Then I might well have to use it against abortion at some point, for the same reason: we should forbid people from overriding this part of their instincts.
Replies from: jaimeastorga2000, FiftyTwo↑ comment by jaimeastorga2000 · 2012-01-02T22:28:58.284Z · LW(p) · GW(p)
Upvoted for bullet-biting.
↑ comment by FiftyTwo · 2012-01-02T17:42:46.327Z · LW(p) · GW(p)
Why is overriding of instincts inherently bad?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T10:10:57.300Z · LW(p) · GW(p)
I'm afraid you may have your bottom line written already.
First, I'm understandably modeling this on myself, and second, it doesn't really make this speculation any less valid in itself.
↑ comment by [deleted] · 2012-01-02T09:32:58.436Z · LW(p) · GW(p)
Can't the same be said of last-trimester abortions?
In any case, much like we find pictures or videos of abortion distasteful, I'm sure a future baby-killing society would still find videos of baby killings distasteful. We could legislate that infanticide be done by professionals, away from the eyes of parents and other onlookers, to avoid psychological damage. Also forbid media depicting it except for educational purposes.
Replies from: Multiheaded, Multiheaded↑ comment by Multiheaded · 2012-01-02T10:35:07.837Z · LW(p) · GW(p)
We could legislate that infanticide be done by professionals, away from the eyes of parents and other onlookers, to avoid psychological damage.
For legal reasons, there'd just have to be a clear procedure in which the parents make or refuse the decision, probably after being informed of the baby's overall condition and potential in the presence of a witness. I can't imagine how it could realistically be practiced without one. Such a procedure could ironically wind up more psychologically damaging than, say, simply distracting one's parental instinct with something like intoxication and personally abandoning/suffocating/poisoning the baby.
Also forbid media depicting it except for educational purposes.
Potential for tension and cognitive dissonance. Few things in our culture are censored this way, not even executions and torture. Would feel unusually hypocritical.
Replies from: None↑ comment by [deleted] · 2012-01-02T10:41:14.920Z · LW(p) · GW(p)
For legal reasons, there'd just have to be a clear procedure in which the parents make or refuse the decision, probably after being informed of the baby's overall condition and potential in the presence of a witness. I can't imagine how it could realistically be practiced without one.
Humans are pretty ok with making cold decisions in the abstract that they could never carry out themselves due to physical revulsion and/or emotional trauma.
The number of people that would sign a death order is greater than the number of people that would kill someone else personally.
Potential for tension and cognitive dissonance. Few things in our culture are censored this way, not even executions and torture.
Does society feel bothered that child pornography is censored? We could even extend existing child pornography laws with a few good judicial decisions to cover this.
Would feel unusually hypocritical.
Read more Robin Hanson.
Replies from: wedrifid, Multiheaded↑ comment by wedrifid · 2012-01-02T10:54:54.881Z · LW(p) · GW(p)
Does society feel bothered that child pornography is censored? We could even extend existing child pornography laws with a few good judicial decisions to cover this.
Good point. If they aren't even people...
Replies from: None↑ comment by [deleted] · 2012-01-02T11:01:40.855Z · LW(p) · GW(p)
In my own country pornography involving animals is illegal. It shows no signs of being legalized soon. And I live in a pretty liberal central European first world country.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T11:29:25.846Z · LW(p) · GW(p)
I live in Russia and here the legal status of all pornography is murky but no law de facto prosecutes anything but production and distribution of child porn, and simple possession of child porn is not illegal. There's nothing about animals, violence, or such.
↑ comment by Multiheaded · 2012-01-02T10:46:36.954Z · LW(p) · GW(p)
The number of people that would sign a death order is greater than the number of people that would kill someone else personally.
Much greater? I think that people signing death orders for criminals could generally execute those criminals themselves if forced to choose between that and the criminal staying alive.
Does society feel bothered that child pornography is censored?
4chan could be an argument that it's beginning to feel so :) Society just hasn't thought it through yet.
↑ comment by Multiheaded · 2012-01-02T09:41:05.305Z · LW(p) · GW(p)
Don't think so, because
1) such foetuses would likely only be seen by a surgeon if the abortion is done properly
2) they probably instinctively appear much less "person-like" or "likely to become a human" even if the mother sees one while doing a crude abortion on her own - maybe even for an evolutionary reason - so that she wouldn't be left with a memory of killing something that looks like a human.
Replies from: None↑ comment by [deleted] · 2012-01-02T10:12:45.600Z · LW(p) · GW(p)
they probably instinctively appear much less "person-like" or "likely to become a human" even if the mother sees one while doing a crude abortion on her own - maybe even for an evolutionary reason - so that she wouldn't be left with a memory of killing something that looks like a human.
blinks
How can a LWer even think this way? I suggest you reread this. I'm tempted to ask you to think 4 minutes by the physical clock about this, but I'll rather just spell it out.
Let's say you are 8 months pregnant in the early stone age. Which is the better idea for you, fitness-wise: to wait another month before terminating the reproduction attempt, or to try to do it right now?
I'm even tempted to say there is a reason women kill their own children more often than men.
Replies from: wedrifid↑ comment by wedrifid · 2012-01-02T10:31:15.185Z · LW(p) · GW(p)
I'm even tempted to say there is a reason women kill their own children more often than men.
Higher expected future resource investment per allele carried?
Replies from: None↑ comment by [deleted] · 2012-01-02T10:43:00.777Z · LW(p) · GW(p)
More or less. I'm pretty sure that controlling for certainty of the child being "yours" and time spent with them, men would on average find killing their children a greater psychological burden in the long run than women.
Replies from: wedrifid↑ comment by wedrifid · 2012-01-02T10:53:00.757Z · LW(p) · GW(p)
More or less. I'm pretty sure that controlling for certainty of the child being "yours" and time spent with them, men would on average find killing their children a greater psychological burden in the long run than women.
Because after all that time spent with them some start to find them really damn annoying?
Replies from: None↑ comment by [deleted] · 2012-01-02T11:04:40.376Z · LW(p) · GW(p)
We get attached to children and lovers with exposure due to oxytocin. Only when the natural switches for releasing it are shut off does exposure cease to have this effect.
Finding them annoying is a separate effect.
Replies from: wedrifid↑ comment by wedrifid · 2012-01-02T14:03:03.241Z · LW(p) · GW(p)
We get attached to children and lovers with exposure due to oxytocin. Only when the natural switches for releasing it are shut off does exposure cease to have this effect.
I'm trying to relate this to your theory that men find it harder to kill their infants than women do. The influence of oxytocin discourages killing of those you are attached to and mothers get more of this than fathers if for no other reason than a crap load getting released during childbirth.
↑ comment by Multiheaded · 2012-01-01T21:07:34.267Z · LW(p) · GW(p)
Thanks a lot. I fully support your line of thinking, all of your points and your conclusion.
↑ comment by wedrifid · 2012-01-01T08:07:29.010Z · LW(p) · GW(p)
Infanticide of one's own children should be legal (if done for some reason other than sadism) for up to ten months after birth. Reason: extremely young babies aren't yet people.
They're just p-zombies pretending to be people. They only get their soul at 10 months and thereafter are able to detect qualia.
I would vote against this law. I'd vote with guns if necessary. Reason: I like babies. Tiny humans are cute and haven't even done anything to deserve death yet (or indicate that they aren't valuable instances of human). I'd prefer you went around murdering adults (adults being the group with the economic, physical and political power to organize defense.)
Replies from: Bakkot, Solvent↑ comment by Bakkot · 2012-01-01T08:21:45.924Z · LW(p) · GW(p)
Replies from: wedrifid, wedrifid, wedrifid↑ comment by wedrifid · 2012-01-01T10:03:16.853Z · LW(p) · GW(p)
Extremely young children are lacking basically all of the traits I'd want a "person" to have.
Most adults don't have traits I'd want a "person" to have. At least with babies there is a chance they'll turn out as worthwhile people.
Replies from: None, None↑ comment by [deleted] · 2012-01-02T11:34:04.999Z · LW(p) · GW(p)
Most adults don't have traits I'd want a "person" to have. At least with babies there is a chance they'll turn out as worthwhile people.
Adults have a small chance of acquiring those traits too. But due to selection effects, adults who don't have those traits have a much lower probability than a fresh new baby of turning out that way.
In a few decades genetic technology and better psychology and sociology may let us make decent probabilistic predictions about how they will turn out as adults. Are you ok with babies with very low probabilities of getting such traits being killed?
Replies from: wedrifid↑ comment by wedrifid · 2012-01-02T13:56:18.149Z · LW(p) · GW(p)
Adults have a small chance of acquiring those traits too. Due to selection effects adults that don't have traits have a much lower probability than a fresh new baby of turning out this way.
As well as, of course, as having far less malleable minds that have yet to crystallize the habits their upbringing gives them.
Are you ok with babies with very low probabilities of getting such traits being killed?
Far less averse, particularly in an environment where negative externalities cannot be easily prevented. Mind you I would still oppose legalization of killing people (whether babies or adults) just because they are Jerks. Not because of the value of the Jerks themselves (which is offset by their effects on others) but because it isn't just Jerks that would be killed. I don't want other people to have the right to choose who lives and who dies and I'm willing to waive that right myself by way of cooperation in order to see it happen.
↑ comment by [deleted] · 2012-01-02T11:35:55.350Z · LW(p) · GW(p)
I'm not sure why this is getting downvoted. "Person" is basically LW-speak for "particular kind of machine that has value to me in and of itself". I don't see any good reason why I personally should value all people equally. I can see some instrumental value in living in a society that makes rules that operate on this principle.
But generally I do not love my enemies and neighbours like myself. I'm sorry, I guess that's not very Christian of me. ;)
↑ comment by wedrifid · 2012-01-01T08:46:42.252Z · LW(p) · GW(p)
Would you really prefer it to be legal to murder adults than to murder ten-month-old children?
Yes. The explanation given was significant.
Ten-month-old children can be replaced in a mere twenty months. It takes forty-one years to make a new forty-year-old.
It takes 110 years to make a 110-year-old. In most cases I'd prefer to keep a 30-year-old over either of them. More to the point, I don't intrinsically value creating more humans. The replacement cost of a dead human has nothing to do with the moral aversion I have to murder.
Replies from: Bakkot, wedrifid↑ comment by Bakkot · 2012-01-01T09:09:07.764Z · LW(p) · GW(p)
Replies from: Estarlio↑ comment by Estarlio · 2012-01-01T13:16:39.766Z · LW(p) · GW(p)
Babies aren't people by any measure I can see
Do you really think it's wise to have a precedent that allows agents of Type X to go around killing off all of the !X group ? Doesn't bode well if people end up with a really sharp intelligence gradient.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T19:01:39.279Z · LW(p) · GW(p)
Replies from: wedrifid, TheOtherDave, Estarlio, Solvent, daenerys↑ comment by wedrifid · 2012-01-02T07:34:24.912Z · LW(p) · GW(p)
ETA: I hate that I have to say this, but can people respond instead of just downvoting? I'm honestly curious as to why this particular post is controversial - or have I missed something?
I haven't downvoted, for what it is worth. Sure, you may be an evil baby killing advocate but it's not like l care!
Replies from: Solvent↑ comment by TheOtherDave · 2012-01-02T05:02:02.054Z · LW(p) · GW(p)
I haven't seen anyone respond to your request for feedback about votes, so let me do so, despite not being one of the downvoters.
By my lights, at least, your posts have been fine. Obviously, I can't speak for the site as a whole... then again, neither can anyone else.
Basically, it's complicated, because the site isn't homogenous. Expressing conventionally "bad" moral views will usually earn some downvotes from people who don't want such views expressed; expressing them clearly and coherently and engaging thoughtfully with the responses will usually net you upvotes.
↑ comment by Estarlio · 2012-01-01T22:35:39.701Z · LW(p) · GW(p)
I think you may have taken me to be talking about whether it was acceptable or moral in the sense that society will allow it, that was not my intent. Society allows many unwise, inefficient things and no doubt will do so for some time.
My question was simply whether you thought it wise. If we do make an FAI, and encoded it with some idealised version of our own morality then do we want a rule that says 'Kill everything that looks unlike yourself'? If we end up on the downside of a vast power gradient with other humans do we want them thinking that everything that has little or no value to them should be for the chopping block?
In a somewhat more pithy form, I guess what I’m asking you is: Given that you cannot be sure you will always be strong enough to have things entirely your way, how sure are you this isn’t going to come back and bite you in the arse?
If it is unwise, then it would make sense to weaken that strand of thought in society - to destroy less out of hand, rather than more. That the strand is already quite strong in society would not alter that.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T22:45:02.103Z · LW(p) · GW(p)
Replies from: Estarlio↑ comment by Estarlio · 2012-01-02T00:42:58.672Z · LW(p) · GW(p)
You did not answer me on the human question - how we’d like powerful humans to think .
No. But we do want a rule that says something like "the closer things are to being people, the more importance should be given to them". As a consequence of this rule, I think it should be legal to kill your newborn children.
This sounds fine as long as you and everything you care about are and always will be included in the group of, ‘people.’ However, by your own admission, (earlier in the discussion to wedrifid,) you've defined people in terms of how closely they realise your ideology:
Extremely young children are lacking basically all of the traits I'd want a "person" to have.
You’ve made it something fluid; a matter of mood and convenience. If I make an AI and tell it to save only ‘people,’ it can go horribly wrong for you - maybe you’re not part of what I mean by ‘people.’ Maybe by people I mean those who believe in some religion or other. Maybe I mean those who are close to a certain processing capacity - and then what happens to those who exceed that capacity? And surely the AI itself would do so....
There are a lot of ways it can go wrong.
I'm observably a person.
You observe yourself to be a person. That’s not necessarily the same thing as being observably a person to someone else operating with different definitions.
Any AI which concluded otherwise is probably already so dangerous that worrying about how my opinions stated here would affect it is probably completely pointless. So... pretty sure.
The opinion you state may influence what sort of AI you end up with. And at the very least it seems liable to influence the sort of people you end up with.
Oh, and I'm never encouraging killing your newborns, just arguing that it should be allowed (if done for something other than sadism).
-shrug- You’re trying to weaken the idea that newborns are people, and are arguing for something that, I suspect, would increase the occurrence of their demise. Call it what you will.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-02T01:53:28.825Z · LW(p) · GW(p)
Replies from: wedrifid, Multiheaded, Estarlio↑ comment by wedrifid · 2012-01-02T01:56:55.897Z · LW(p) · GW(p)
I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don't have a good definition for person, but no reasonable partial definition I can come up with includes babies.
How did I misinterpret? I read that you don't include babies and I said that I do include babies. That's (preference) disagreement, not a problem with interpretation.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-02T02:21:27.557Z · LW(p) · GW(p)
Replies from: wedrifid↑ comment by wedrifid · 2012-01-02T02:34:28.811Z · LW(p) · GW(p)
This line gave me the impression that you thought I was saying I want my definition of "person", for the moral calculus, to include things like "worthwhile". Which was not what I was saying -
Intended as a tangential observation about my perceptions of people. (Some of them really are easier for me to model as objects running a machiavellian routine.)
Replies from: Bakkot↑ comment by Multiheaded · 2012-01-02T08:58:50.576Z · LW(p) · GW(p)
If you don't understand the distinction between "legal" and "encouraged", we're going to have a very difficult time communicating.
"Encouraged" is very clearly not absolute but relative here, "somewhat less discouraged than now" can just be written as "encouraged" for brevity's sake.
↑ comment by Estarlio · 2012-01-02T21:21:23.847Z · LW(p) · GW(p)
I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don't have a good definition for person, but no reasonable partial definition I can come up with includes babies. I didn't at all mean that just because I would like people to be nice to each other, and so on, I wouldn't consider people who aren't nice not to be people. I'd intended to convey this distinction by the quotation marks.
How are you deciding whether your definition is reasonable?
Obviously. There's a lot of ways any AI can go wrong. But you have to do something. Is your rule "don't kill humans"? For what definition of human, and isn't that going to be awfully unfair to aliens? I think "don't kill people" is probably about as good as you're going to do.
‘Don’t kill anything that can learn,’ springs to mind as a safer alternative - were I inclined to program this stuff in directly, which I'm not.
I don’t expect us to be explicitly declaring these rules, I expect the moral themes prevalent in our society - or at least an idealised model of part of it - will form much of the seed for the AI’s eventual goals. I know that the moral themes prevalent in our society form much of the seed for the eventual goals of people.
In either case, I don’t expect us to be in-charge. Which makes me kinda concerned when people talk about how we should be fine with going around offing the lesser life-forms.
I don't want the rule to be "don't kill people" for whatever values of "kill" and "people" you have in your book. For all I know you're going to interpret this as something I'd understand more like "don't eat pineapples". I want the rule to be "don't kill people" with your definitions in accordance with mine.
Yet my definitions are not in accordance with yours. And, if I apply the rule that I can kill everything that’s not a person, you’re not going to get the results you desire.
It’d be great if I could just say ‘I want you to do good - with your definition of good in accordance with mine.’ But it’s not that simple. People grow up with different definitions - AIs may well grow up with different definitions - and if you've got some rule operating over a fuzzy boundary like that, you may end up as paperclips, or dogmeat or something horrible.
Replies from: Bakkot, NancyLebovitz↑ comment by Bakkot · 2012-01-04T19:19:42.021Z · LW(p) · GW(p)
Replies from: dlthomas, TheOtherDave, Strange7, Estarlio↑ comment by dlthomas · 2012-01-04T19:37:20.823Z · LW(p) · GW(p)
If you do think both of the above things, then my task is either to understand why you don't feel that infanticide should be legal or to point out that perhaps you really would agree that infanticide should be legal if you stopped and seriously considered the proposition for a bit.
I'm not certain whether or not it's germane to the broader discussion, but "think X is immoral" and "think X should be illegal" are not identical beliefs.
Replies from: Bakkot↑ comment by TheOtherDave · 2012-01-04T20:19:33.587Z · LW(p) · GW(p)
I was with you, until your summary.
Suppose hypothetically that I think "don't kill people" is a good broad moral rule, and I think babies are people.
It seems to follow from what you said that I therefore ought to agree that infanticide should be legal.
If that is what you meant to say, then I am deeply confused. If (hypothetically) I think babies are people, and if (hypothetically) I think "don't kill people" is a good law, then all else being equal I should think "don't kill babies" is a good law. That is, I should believe that infanticide ought not be any more legal than murder in general.
It seems like one of us dropped a negative sign somewhere along the line. Perhaps it was me, but if so, I seem incapable of finding it again.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-04T20:34:17.022Z · LW(p) · GW(p)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-04T20:36:50.290Z · LW(p) · GW(p)
Oh good! I don't usually nitpick about such things, but you had me genuinely puzzled.
↑ comment by Strange7 · 2012-06-05T04:52:40.743Z · LW(p) · GW(p)
Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it's arguably better at it. Have I committed a serious moral wrong if I delete its source code?
If I were programming an AI to be a perfect world-guiding moral paragon, I'd rather have it keep the spam filter in storage (the equivalent of a retirement home, or cryostasis) than delete it for the crime of obsolescence. Digital storage space is cheap, and getting cheaper all the time.
↑ comment by Estarlio · 2012-06-03T17:52:38.755Z · LW(p) · GW(p)
Somewhat late, I must have missed this reply agessss ago when it went up.
there's a bunch of things in my mind for which the label "person" seems appropriate [...] There's also a bunch of things for which said label seems inappropriate
That's not a reasoned way to form definitions that have any more validity as referents than lists of what you approve of. What you're doing is referencing your feelings and seeing what the objects of those feelings have in common. It so happens that I feel that infants are people. But we're not doing anything particularly logical or reasonable here - we're not drawing our boundaries using different tools. One of us just thinks they belong on the list and the other thinks they don't.
If we try to agree on a common list - well, you're agreeing that aliens and powerful AIs go on the list, so biology isn't the primary concern. If we try to draw a line through the commonalities, what are we going to get? All of them seem able to gather, store, process and apply information to some ends. Even infants can - they're just not particularly good at it yet.
Conversely, what do all your other examples have in common that infants don't?
Pigs can learn, without a doubt. Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it's arguably better at it. Have I committed a serious moral wrong if I delete its source code?
Arguably that would be a good heuristic to keep around. I don't know that I'd call it a moral wrong – there's not much reason to talk about morals when we can just say it's discouraged in society and have everyone on the same page. But you would probably do well to have a reluctance to destroy it. One day someone vastly more complex than you may well look on you in the same light you look on your spam filter.
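As an aside, the kind of "learning" a Bayesian spam filter does can be sketched in a few lines. Here is a toy naive Bayes classifier - the word lists, messages, and class names are made up for illustration, and real filters are considerably more elaborate:

```python
import math
from collections import defaultdict

class NaiveBayesFilter:
    """Toy naive Bayes spam filter: it 'learns' purely by counting words."""

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.doc_counts = {"spam": 0, "ham": 0}

    def train(self, label, text):
        # Learning here is nothing more than updating frequency counts.
        self.doc_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def is_spam(self, text):
        # Compare log-probabilities under each class, with Laplace smoothing
        # so unseen words don't zero out the whole product.
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            total_words = sum(self.word_counts[label].values())
            score = math.log(self.doc_counts[label] / total_docs)
            for word in text.lower().split():
                count = self.word_counts[label].get(word, 0)
                score += math.log((count + 1) / (total_words + vocab))
            scores[label] = score
        return scores["spam"] > scores["ham"]

spam_filter = NaiveBayesFilter()
spam_filter.train("spam", "cheap viagra buy now")
spam_filter.train("spam", "viagra discount offer")
spam_filter.train("ham", "meeting notes for tuesday")
print(spam_filter.is_spam("viagra offer"))   # True
print(spam_filter.is_spam("meeting notes"))  # False
```

Whether counting word frequencies is learning "in much the same way that you or I do" is, of course, exactly the point under dispute.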
[...] odd corner-cases are almost always indicative of ideas which we would not have arrived at ourselves if we weren't conditioned with them from an early age. I strongly suspect the prohibition on infanticide is such a corner case.
I strongly suspect that societies where people had no reluctance to go around offing their infants wouldn't have lasted very long. Infants are significant investments of time and resources. Offing your infants is a sign that there's something emotionally maladjusted in you – by the standards of the needs of society. If we'd not had the precept, and magically appeared out of nowhere, I think we'd have invented it pretty quick.
You think I'm going to try to program an AI in English?
Not really about you specifically. But, in general – yeah, more or less. Maybe not write the source code, but instruct it. English, or uploads, or some other incredibly high-level language with a lot of horrible dependencies built into its libraries (or concepts, or what have you) that the person using it barely understands themselves. Why? Because it will be quicker. The guy who just tells the AI to guess what he means by "good" skips the step of having to calculate it himself.
Replies from: Bakkot↑ comment by Bakkot · 2012-06-04T22:47:58.710Z · LW(p) · GW(p)
Replies from: Estarlio, wmorgan↑ comment by Estarlio · 2012-06-05T03:23:51.631Z · LW(p) · GW(p)
Five months later...
Yeah, a lack of reply notification's a real pain in the rear.
It seems to me that this thread of the debate has come down to "Should we consider babies to be people?" There are, broadly, two ways of settling this question: moving up the ladder of abstraction, or moving down. That is, we can answer this by attempting to define 'people' in terms of other, broader terms (this being the former case) or by defining 'people' via the listing of examples of things which we all agree are or are not people and then trying to decide by inspection in which category 'babies' belong.
Edit: You can skip to the next break line if you're not interested in reading about the methodological component so much as you are in continuing the infants argument.
What we're doing here, ideally, is pattern matching. I present you with a pattern, and part of that pattern is what I'm talking about. I present you with another pattern where some things have changed, but the parts of the pattern I want to talk about are the same in that one. And I suppose, to be strict, we'd also have to present patterns that are fairly similar but not what I mean, and express disapproval of those.
Because we have a large set of existing patterns that we both know about - properties - it's a lot quicker to make reference to some of those patterns than it is to continue to flesh out our lists to play guess the commonality. We can still do it both ways, as long as we can still head back down the abstraction pile fairly quickly. Compressing the search space by abstract reference to elements of patterns that members of the set share, is not the same thing as starting off with a word alone and then trying to decide on the pattern and then fit the members to that set.
If you cannot do that exercise, if you cannot explicitly declare at least some of the commonalities you're talking about, then it leads me to believe that your definition is incoherent. The odds that, with our vast set of shared patterns - with our language that allows us to do this compression - you can't come up with at least a fairly rough definition fairly quickly seem remote.
If I wanted to define humans, for instance: "Most numerous group of bipedal tool users on Earth." That was a lot quicker than having to define humans by providing examples of different creatures. We can only think the way we do because we have these little compression tricks that let us leap around the search space; abstraction doesn't have to lead to more confusion - as long as your terms refer to things that people have experience with.
Whereas if I provided you a selection of human genetic structures - while my terms would refer exactly, while I'd even be able to stick you in front of a machine and point to it directly - would you even recognise it without going to a computer? I wouldn't. The reference falls beyond the level of my experience.
I don't see why you think my definition needs to be complete. We have very few exact definitions for anything; I couldn't exactly define what I mean by human. Even by reference to genetic structure I've no idea where it would make sense to set the deviation from any specific example that makes you human or not human.
But let's go with your approach:
It seems to me that mentally disabled people belong on the people list. And babies seem more similar to mentally disabled people than they do to pigs and stones.
This is entirely orthogonal to the point I was trying to make. Keep in mind, most societies invented misogyny pretty quick too. Rather, I doubt that you personally, raised in a society much like this one except without the taboo on killing infants, would have come to the conclusion that killing infants is a moral wrong.
Well, no, but you could make that argument about anything. I, raised in a society just like this one but without taboo X, would never create taboo X on my own; taboos are created by their effects on society. It's the fact that society would not have been like this one without taboo X that makes it taboo in the first place.
Replies from: Bakkot↑ comment by Bakkot · 2012-06-06T03:10:59.811Z · LW(p) · GW(p)
Replies from: Estarlio↑ comment by Estarlio · 2012-06-09T03:43:57.556Z · LW(p) · GW(p)
I can come up with a rough definition, but rough definitions fail in exactly those cases where there is potential disagreement.
Eh, functioning is a very rough definition and we've got to that pretty quickly.
So will we rather say that we include mentally disabled humans above a certain level of functioning? The problem then is that babies almost certainly fall well below that threshold, wherever you might set it.
Well, the question is whether food animals fall beneath the level of babies. If they do, then I can keep eating them happily enough; if they don't, I've got the dilemma as to whether to stop eating animals or start eating babies.
And it's not clear to me, without knowing what you mean by functioning, that pigs or cows are more intelligent than babies - I've not seen one do anything to suggest it. Predatory animals - wolves and the like, on the other tentacle - are obviously more intelligent than a baby.
As to how I'd resolve the dilemma if it did occur, I'm leaning more towards stopping eating food animals than starting to eat babies. Despite the fact that food animals are really tasty, I don't want to put a precedent in place that might get me eaten at some point.
I assume you've granted that sufficiently advanced AIs ought to be counted as people.
By fiat - sufficiently advanced for what? But I suppose I'll grant any AI that can pass the Turing test qualifies, yes.
Am I killing a person if I terminate this script before compilation completes? That is, does "software which will compile and run an AI" belong to the "people" or the "not people" group?
That depends on the nature of the script. If it's just performing some relatively simple task over and over, then I'm inclined to agree that it belongs in the not people group. If it is itself as smart as, say, a wolf, then I'm inclined to think it belongs in the people group.
Really? It seems to me that someone did invent the taboo[1] on, say, slavery.
I suppose what I really mean to say is that they're taboos because those taboos have some desirable effect on society.
The point I'm trying to make here is that if you started with your current set of rules minus the rule about "don't rape people" (not to say your hypothetical morals view it as acceptable, merely undecided), I think you could quite naturally come to conclude that rape was wrong. But it seems to me that this would not be the case if instead you left out the rule about "don't kill babies".
It seems to me that babies are quite valuable, and became so as their survival probability went up. In the olden days infanticide was relatively common - as was death in childbirth. People had a far more casual attitude towards the whole thing.
But as the survival probability went up the investment people made, and were expected to make, in individual children went up - and when that happened infanticide became a sign of maladaptive behaviour.
Though I doubt they'd have put it in these terms: People recognised a poor gambling strategy and wondered what was wrong with the person.
And I think it would be the same in any advanced society.
Replies from: Bakkot↑ comment by Bakkot · 2012-06-11T14:48:01.170Z · LW(p) · GW(p)
Replies from: Estarlio, MileyCyrus, TimS↑ comment by Estarlio · 2012-06-14T02:06:07.685Z · LW(p) · GW(p)
Regardless, I have no doubt that pigs are closer to functioning adult humans than babies are. You'd best give up pork.
I suppose I had, yes. It never really occurred to me that they might be that intelligent - but, yeah, having done a bit of reading they seem smart enough that I probably oughtn’t to eat them.
I'd be interested in what standard of "functional" you might propose that newborns would meet, though. Perhaps give examples of things which seem close to to line, on either side? For example, do wolves seem to you like people? Should killing a wolf be considered a moral wrong on par with murder?
Wolves definitely seem like people to me, yes. Adult humans are definitely on the list, and wolves show pack behaviours which are very human-like. Killing a wolf for no good reason should be considered a moral wrong on par with murder. That's not to say that I think it should result in legal punishment on par with killing a human, mind; it's easier to work out that humans are people than it is to work out that wolves are - it's a reasonable mistake.
Insects like wasps and flies don't seem like people. Red pandas do. Dolphins do. Cows... don't. But given what I've discovered about pigs, that bears some checking - and now cows do. Hnn. Damn it, now I won't be able to look at burgers without feeling sad.
All the videos with loads of blood and the like never bothered me, but learning that food-animals are that intelligent really does.
Have you imagined what life would be like if you were stupider, or were more intelligent but denied a body with which that intelligence was easy to express? If your person-hood is fundamental to your identity, then as long as you can imagine being stupider and still being you, that still qualifies as a person. In terms of how old a person would have to be to have the sort of capabilities the person you're imagining would have, at what point does your ability to empathise with the imaginary-you break down?
I have to ask, at this point: have you seriously considered the possibility that babies aren't people?
As far as I know how, yes. If you've got some ways of thinking that we haven't been talking about here, feel free to post them and I'll do my best to run them.
If babies weren't people, the world would be less horrifying. Just as, if food-animals are people, the world is more horrifying. But it would look the same in terms of behaviours - people kill people all the time; I don't expect them not to without other criteria being involved.
We are supposing that it's still on the first step, compilation. However, with no interaction on our part, it's going to finish compiling and begin running the sufficiently-advanced AI. Unless we interrupt it before compilation finishes, in which case it will not.
Not a person.
It is, for example, almost certainly maladaptive to allow all women to go into higher education and industry, because those correlate strongly with having fewer children and that causes serious problems. (Witness Japan circa now.) This is, as you put it, a poor gambling strategy. Does that imply it's immoral for society to allow women to be educated? Do reasonable people look at people who support women's rights and wonder what's wrong with them? Of course not.
No, because we've had that discussion. But people did and that attitude towards women was especially prevalent in Japan, where it was among the most maladaptive for the contrary to hold, until quite recently. Back in the 70s and 80s the idea for women was basically to get a good education and marry the person their family picked for them. Even today people who say they don't want children or a relationship are looked on as rather weird and much of the power there, in practice, works in terms of family relationships.
It just so happens there are lots of adaptive reasons to have precedents that seem to extend to cover women too. I don't think one can seriously put forward an argument that keeps women at home without creating something that can be used against oneself in fairly horrifying ways. Even if you don't have a fairly inclusive definition of people, it seems unwise to treat other humans in that way - you, after all, are the other human to another human.
Replies from: Bakkot↑ comment by Bakkot · 2012-06-14T04:36:05.698Z · LW(p) · GW(p)
Replies from: Estarlio↑ comment by Estarlio · 2012-06-17T10:50:31.111Z · LW(p) · GW(p)
What about fish? I'm pretty sure many fish are significantly more functional than one-month-old humans, possibly up to two or three months. (Younger than that I don't think babies exhibit the ability to anticipate things. Haven't actually looked this up anywhere reputable, though.)
I don't know enough about them - given they're so different to us in terms of gross biology I imagine it's often going to be quite difficult to distinguish between functioning and instinct - this:
http://news.bbc.co.uk/1/hi/england/west_yorkshire/3189941.stm
says that scientists have observed some of them using tools, and that definitely seems like something a person would do.
Also, separately, would you say that babies are around the lowest level of functioning that you can possess and still qualify as a person?
Yes.
Trying to narrow down where we differ here: what signs of being-a-person does a one-month-old infant display that, say, Cleverbot does not?
Shared attention, recognition, prediction, bonding -
Frequently. It's scary. But if I were in a body in which intelligence was not easy to express, and I was killed by someone who didn't think I was sufficiently functional to be a person, that would be a tragic accident, not a moral wrong.
The legal definition of an accident is an unforeseeable event. I don't agree with that entirely because, well, everything's foreseeable to an arbitrary degree of probability given the right assumptions. However, do you think that people have a duty to avoid accidents that they foresee a high probability-adjusted harm from? (i.e. the potential harm modified by the probability they foresee of the event.)
The thought here being that, if there's much room for doubt, there's so much suffering involved in killing and eating animals that we shouldn't do it even if we only argue ourselves to some low probability of their being people.
About age four, possibly a year or two earlier. I'm reasonably confident I had introspection at age four; I don't think I did much before that. I find myself completely unable to empathize with a 'me' lacking introspection.
Do you think that the use of language and play to portray and discuss fantasy worlds is a sign of introspection?
OK. So the point of this analogy is that newborns seem a lot like the script described, on the compilation step. Yes, they're going to develop advanced, functioning behaviors eventually, but no, they don't have them yet. They're just developing the infrastructure which will eventually support those behaviors.
I agree, if it doesn't have the capabilities that will make it a person there's no harm in stopping it before it gets there. If you prevent an egg and a sperm combining and implanting, you haven't killed a human.
I know the question I actually want to ask: do you think behaviors are immoral if and only if they're maladaptive?
No, fitness is too complex a phenomenon for our relatively inefficient ways of thinking and feeling to update on it very well. If we fix immediate lethal response from the majority as one end of the moral spectrum, and enthusiastic endorsement as the other, then maladaptive behaviour tends to move you further towards the lethal-response end of things. But we're not rational fitness maximisers, we just tend that way on the more readily apparent issues.
↑ comment by MileyCyrus · 2012-06-11T15:40:52.512Z · LW(p) · GW(p)
Am I the only one who bit the speciesist bullet?
It doesn't matter if a pig is smarter than a baby. It wouldn't matter if a pig passed the Turing test. Babies are humans, so they get preferential treatment.
Replies from: Bakkot↑ comment by Bakkot · 2012-06-12T00:00:15.200Z · LW(p) · GW(p)
Replies from: Strange7, MileyCyrus↑ comment by Strange7 · 2012-06-12T01:55:23.519Z · LW(p) · GW(p)
do you get less and less preferential treatment as you become less and less human?
I'd say so, yeah. It's kind of a tricky function, though, since there are two reasons I'm logically willing to give preferential treatment to an organism: the likelihood of said organism eventually becoming the ancestor of a creature similar to myself, and the likelihood of that creature or its descendants contributing to an environment in which creatures similar to myself would thrive.
↑ comment by MileyCyrus · 2012-06-12T14:51:04.263Z · LW(p) · GW(p)
Anyway, "species" isn't a hard-edged category built in to nature - do you get less and less preferential treatment as you become less and less human?
It's a lot more hard-edged than intelligence. Of all the animals (I'm talking about individual animals, not species) in the world, practically all are really close to 0% or 100% human. On the other hand, there is a broad range of intelligence among animals, and even in humans. So if you want a standard that draws a clean line, humanity is better than intelligence.
Also, what's the standard against which beings are compared to determine how "human" they are? Phenotypically average among the current population? Nasty prospects for the cryonics advocates among us. And the mind-uploading camp.
I can tell the difference between an uploaded/frozen human and a pig. Even an uploaded/frozen pig. Transhumans are in the preferential treatment category, but transpigs aren't.
Also veers dangerously close to negative eugenics, if you're going to start declaring some people are less human than others.
This is a fully general counter-argument. Any standard of moral worth will have certain objects that meet the standard and certain objects that fail. If you say "All objects that have X property have moral worth", I can immediately accuse you of eugenics against objects that do not have X property.
And a question for you: if you think that more intelligence equals more moral worth, does that mean that AI superintelligences have super moral worth? If Clippy existed, would you try to maximize the number of paperclips in order to satisfy the wants of a superior intelligence?
Replies from: Bakkot↑ comment by TimS · 2012-06-11T15:03:15.680Z · LW(p) · GW(p)
I really like your point about the distinction between maladaptive behavior and immoral behavior. But I don't think your example about women in higher education is as cut and dried as you present it.
Replies from: Bakkot↑ comment by Bakkot · 2012-06-12T00:08:10.613Z · LW(p) · GW(p)
Replies from: TimS↑ comment by TimS · 2012-06-12T00:53:14.835Z · LW(p) · GW(p)
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral. For me, maladaptive-ness is the explanation for why certain possible moral memes (insert society-wide incest-marriage example) don't exist in recorded history, even though I should otherwise expect them to exist given my belief in moral anti-realism.
Replies from: CuSithBell↑ comment by CuSithBell · 2012-06-12T01:01:32.461Z · LW(p) · GW(p)
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.
Disagree? What do you mean by this?
Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, I have no need to adhere to the process roughly used to select these values rather than the values themselves when they are maladaptive.
Replies from: TimS↑ comment by TimS · 2012-06-12T02:06:34.865Z · LW(p) · GW(p)
If one is committed to a theory that says morality is objective (aka moral realism), one needs to point at what it is that makes morality objectively true. Obvious candidates include God and the laws of physics. But those two candidates have been disproved by empiricism (aka the scientific method).
At this point, some detritus of evolution starts to look like a good candidate for the source of morality. There isn't an Evolution Fairy who commanded the humans evolve to be moral, but evolution has created drives and preferences within us all (like hunger or desire for sex). More on this point here - the source of my reference to godshatter.
It might be that there is an optimal way of bringing these various drives into balance, and the correct choices to all moral decisions can be derived from this optimal path. As far as I can tell, those who are trying to derive morality from evo. psych endorse this position.
In short, if morality is the product of human drives created by evolution, then behavior that is maladaptive (i.e. counter to what is selected for by evolution) is essentially correlated with immoral behavior.
That said, my summary of the position may be a bit thin, because I'm a moral anti-realist and don't believe the evo. psych -> morality story.
Replies from: CuSithBell↑ comment by CuSithBell · 2012-06-12T03:33:31.983Z · LW(p) · GW(p)
Ah, I see what you mean. I don't think one has to believe in objective morality as such to agree that "morality is the godshatter of evolution". Moreover, I think it's pretty key to the "godshatter" notion that our values have diverged from evolution's "value", and we now value things "for their own sake" rather than for their benefit to fitness. As such, I would say that the "godshatter" notion opposes the idea that "maladaptive is practically the definition of immoral", even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.
↑ comment by wmorgan · 2012-06-05T00:09:41.906Z · LW(p) · GW(p)
Consider this set:
A sleeping man. A cryonics patient. A nonverbal 3-year-old. A drunk, passed out.
I think these are all people, they're pretty close to babies, and we shouldn't kill any of them.
The reason they all feel like babies to me, from the perspective of "are they people?", is that they're in a condition where we can see a reasonable path for turning them into something that is unquestionably a person.
EDIT: That doesn't mean we have to pay any cost to follow that path -- the value we assign to a person's life can be high but must be finite, and sometimes the correct, moral decision is to not pay that price. But just because we don't pay that cost doesn't mean it's not a person.
I don't think the time frame matters, either. If I found Fry from Futurama in the cryostasis tube today, and I killed him because I hated him, that would be murder even though he isn't going to talk, learn, or have self-awareness until the year 3000.
Gametes are not people, even though we know how to make people from them. I don't know why they don't count.
EDIT: oh shit, better explain myself about that last one. What I mean is that it is not possible to murder a gamete -- they don't have the moral weight of personhood. You can, potentially, in some situations, murder a baby (and even a fetus): that is possible to do, because they count as people.
Replies from: Bakkot, Nornagest, Jayson_Virissimo, Strange7, Alicorn↑ comment by Bakkot · 2012-06-06T03:20:51.685Z · LW(p) · GW(p)
Replies from: wmorgan↑ comment by wmorgan · 2012-06-07T00:49:05.376Z · LW(p) · GW(p)
I've never seen a compiling AI, let alone an interrupted one, even in fiction, so your example isn't very available to me. I can imagine conditions that would make it OK or not OK to cancel the compilation process.
This is most interesting to me:
From these examples, I think "will become a person" is only significant for objects which were people in the past
I know we're talking about intuitions, but this is one description that can't jump from the map into the territory. We know that the past is completely screened off by the present, so our decisions, including moral decisions, can't ultimately depend on it. Ultimately, there has to be something about the present or future states of these humans that makes it OK to kill the baby but not the guy in the coma. Could you take another shot at the distinction between them?
Replies from: Bakkot↑ comment by Nornagest · 2012-06-05T01:09:30.716Z · LW(p) · GW(p)
This question is fraught with politics and other highly sensitive topics, so I'll try to avoid getting too specific, but it seems to me that thinking of this sort of thing purely in terms of a potentiality relation rather misses the point. A self-extracting binary, a .torrent file, a million lines of uncompiled source code, and a design document are all, in different ways, potential programs, but they differ from each other both in degree and in type of potentiality. Whether you'd call one a program in any given context depends on what you're planning to do with it.
↑ comment by Jayson_Virissimo · 2012-06-05T05:16:00.595Z · LW(p) · GW(p)
Gametes are not people, even though we know how to make people from them.
I'm not at all sure a randomly selected human gamete is less likely to become a person than a randomly selected cryonics patient (at least, with currently-existing technology).
↑ comment by Strange7 · 2012-06-05T02:42:21.976Z · LW(p) · GW(p)
Might be better to talk about this in terms of conversion cost rather than probability. To turn a gamete into a person you need another gamete, $X worth of miscellaneous raw materials (including, but certainly not limited to, food), and a healthy female of childbearing age. She's effectively removed from the workforce for a predictable period of time, reducing her probable lifetime earning potential by $Y, and has some chance of various medical complications, which can be mitigated by modern treatments costing $Z but even then works out to some number of QALYs in reduced life expectancy. Finally, there's some chance of the process failing and producing an undersized corpse, or a living creature which does not adequately fulfill the definition of "person."
In short, a gamete isn't a person for the same reason a work order and a handful of plastic pellets aren't a street-legal automobile.
↑ comment by NancyLebovitz · 2012-06-13T02:59:49.296Z · LW(p) · GW(p)
Figuring out how to define human (as in "don't kill humans") so as to include babies is relatively easy, since babies are extremely likely to grow up into humans.
The hard question is deciding which transhumans-- including types not yet invented, possibly types not yet thought of, and certainly types which are only imagined in a sketchy abstract way-- can reasonably be considered as entities which shouldn't be killed.
↑ comment by Solvent · 2012-01-02T07:45:18.115Z · LW(p) · GW(p)
Well, it sure looks like babies have a lot of things in common with people, and will become people one day, and lots of people care about them.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-02T18:46:47.285Z · LW(p) · GW(p)
Replies from: Solvent↑ comment by Solvent · 2012-01-03T04:06:37.106Z · LW(p) · GW(p)
babies have a lot of things in common with people
I meant humans, not people. Sorry.
And I agree that we should treat animals better. I'm vegetarian.
and will become people one day
I agree that this discussion is slightly complex. Gwern's abortion dialogue contains a lot of relevant material.
However, I don't feel that saying that "we should protect babies because one day they will be human" requires aggregate utilitarianism as opposed to average utilitarianism, which I in general prefer. Babies are already alive, and already experience things.
and lots of people care about them
This argument has two functions. One is the literal meaning of "we should respect people's preferences". See discussion on the Everybody Draw Mohammed day. The other is that other people's strong moral preferences are some evidence towards the correct moral path.
Replies from: Bakkot↑ comment by daenerys · 2012-01-02T18:56:10.187Z · LW(p) · GW(p)
ETA: I hate that I have to say this, but can people respond instead of just downvoting? I'm honestly curious as to why this particular post is controversial - or have I missed something?
I often "claim" my downvotes (aka I will post "downvoted" and then give a reason). However, I know that when I do this, I will be downvoted myself. So that is probably one big deterrent to others doing the same.
For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant -1 on downvoting comments), and people who disagree with your reason for downvoting will also downvote you.
Also, many people on this site are just a-holes. Sorry.
Replies from: MixedNuts, wedrifid, Nornagest, Prismattic, Multiheaded↑ comment by MixedNuts · 2012-01-02T21:49:10.726Z · LW(p) · GW(p)
Common reasons I downvote with no comment: I think the mistake is obvious to most readers (or already mentioned) and there's little to be gained from teaching the author. I think there's little insight and much noise - length, unpleasant style, politically disagreeable implications that would be tedious to pick apart (especially in tone rather than content). I judge that jerkishness is impairing comprehension; cutting out the courtesies and using strong words may be defensible, but using insults where explanations would do isn't.
On the "just a-holes" note (yes, I thought "Is this about me?"): It might be that your threshold for acceptable niceness is unusually high. We have traditions of bluntness and flaw-hunting (mostly from hackers, who correctly consider niceness noise when discussing bugs in X), so we ended up rather mean on average, and very tolerant of meanness. People who want LW to be nicer usually do it by being especially nice, not by especially punishing meanness. I notice you're on my list of people I should be exceptionally nice to, but not on my list of exceptionally nice people, which is a bad thing if you love Postel's law. (Which, by Postel's law, nobody but me has to.) The only LessWronger I think is an asshole is wedrifid, and I think this is one of his good traits.
Replies from: None, Prismattic, daenerys↑ comment by Prismattic · 2012-01-02T22:25:10.584Z · LW(p) · GW(p)
We have traditions of bluntness and flaw-hunting (mostly from hackers, who correctly consider niceness noise when discussing bugs in X), so we ended up rather mean on average, and very tolerant of meanness.
I think there is a difference between choosing bluntness where niceness would tend to obscure the truth, and choosing between two forms of expression which are equally illuminating but not equally nice. I don't know about anyone else, but I'm using "a-hole" here to mean "One who routinely chooses the less nice variant in the latter situation."
(This is not a specific reference to you; your comment just happened to provide a good anchor for it.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-02T23:25:36.293Z · LW(p) · GW(p)
Of course, if that's the meaning, then before I judge someone to be an "a-hole" I need to know what they intended to illumine.
↑ comment by daenerys · 2012-01-02T22:07:04.595Z · LW(p) · GW(p)
I notice you're on my list of people I should be exceptionally nice to, but not on my list of exceptionally nice people,
Would you mind discussing this with me? I find it disturbing that I come off as having double standards, and I'm interested to know more about where that impression comes from. I personally feel that I do not expect better behaviour from others than I practice, but would like to know (and update my behaviour) if I am wrong about this.
I admit to lowering my level of "niceness" on LW, because I can't seem to function when I am nice and no one else is. However, MY level of being "not nice" means that I don't spend a lot of time finding ways to word things in the most inoffensive manner. I don't feel like I am exceptionally rude, and am concerned if I give off that impression.
I also feel like I hold my "punishing meanness" to a pretty high threshold too: I only "punish" (by downvoting or calling out) what I consider to be extremely rude behavior (e.g. "I wish you were dead" or "X is crap.") that is nowhere near the level of "meanness" that I feel like my posts ever get near.
Replies from: MixedNuts↑ comment by MixedNuts · 2012-01-02T22:45:37.929Z · LW(p) · GW(p)
I come off as having double standards
You come off as having single-standards. That is, I think the minimal level of niceness you accept from others is also the minimal level of niceness you practice - you don't allow wiggle room for others having different standards. I sincerely don't resent that! My model of nice people in general suggests y'all practice Postel's law ("Be liberal in what you accept and conservative in what you send"), but I don't think it's even consistent to demand that someone follow it.
extremely rude behavior (e.g. "I wish you were dead" or "X is crap.")
...I'm never going to live that one down, am I? Let's just say that there's an enormous range of behaviours that I'd describe as "slightly blunter than politeness would allow, for the sake of clarity" and you'd describe as "extremely rude".
Also, while I've accepted the verdict that "X is crap" is extremely rude and I shouldn't ever say it, I was taken aback at your assertion that it doesn't contribute anything. Surely "Don't use this thing for this purpose" is non-empty. By the same token, I'd actually be pretty okay with being told "I wish you were dead" in many contexts. For example, in a discussion of eugenics, I'd be quite fine with a position that implies I should be dead, and would much rather hear it than have others dance around the implication.
Maybe the lesson for you is that many people suck really bad at phrasing things, so you should apply the principle of charity harder and be tolerant if they can't be both as nice and as clear as you'd have been and choose to sacrifice niceness? The lesson I've learned is that I should be more polite in general, more polite to you in particular, look harder for nice phrasings, and spell out implications rather than try to bake them in connotations.
Replies from: Alicorn, daenerys, None↑ comment by Alicorn · 2012-01-02T23:07:07.631Z · LW(p) · GW(p)
For example, in a discussion of eugenics, I'd be quite fine with a position that implies I should be dead, and would much rather hear it than have others dance around the implication.
I'm fine with positions that imply I should never have been born (although I have yet to hear one that includes me), but I'd feel very differently about one implying that I should be dead!
Replies from: lessdazed, TheOtherDave↑ comment by lessdazed · 2012-01-02T23:25:42.451Z · LW(p) · GW(p)
Many people don't endorse anything similar to the principle that "any argument for no more of something should explain why there is a perfect amount of that thing or be counted as an argument for less of that thing."
E.g., they think arguments that "life extension is bad" generally have no implications regarding killing people were it to become available. So those who say I shouldn't live to be 200 are not only basically arguing that I should (eventually, sooner than I want) be dead; the implication I take is often that I should be killed (in the future).
↑ comment by TheOtherDave · 2012-01-02T23:22:18.369Z · LW(p) · GW(p)
Personally, I'd be far more insulted by the suggestion that I should never have been born, than by the suggestion that I should die now.
Replies from: Alicorn↑ comment by Alicorn · 2012-01-02T23:32:06.505Z · LW(p) · GW(p)
Why?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-03T01:38:47.331Z · LW(p) · GW(p)
If someone tells me I should die now, I understand that to mean that my life from this point forward is of negative value to them. If they tell me I should never have been born, I understand that to mean not only that my life from this point forward is of negative value, but also that my life up to this point has been of negative value.
Replies from: Alicorn↑ comment by Alicorn · 2012-01-03T02:12:56.650Z · LW(p) · GW(p)
Interesting. I don't read it as necessarily a judgment of value at all to be told that I should never have been born (things that should not have happened may accidentally have good consequences). Additionally, someone who doesn't think that I should have been born, but also doesn't think I should die, will not try to kill me, though they may push policies that will prevent future additions to my salient reference class; someone who thinks I should die could try to make that happen!
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-03T02:23:55.799Z · LW(p) · GW(p)
Interesting.
For my part, I don't treat saying things like "I think you should be dead" as particularly predictive of actually trying to kill me. Perhaps I ought to, but I don't.
↑ comment by daenerys · 2012-01-02T23:04:28.836Z · LW(p) · GW(p)
Upvoted, and thank you for the explanation.
I'm never going to live that one down, am I?
If it helps, I didn't even remember that one of the times I've called someone out on "X is crap" was you. So consider it "lived down".
taken aback at your assertion that it doesn't contribute anything.
You're right. How about an assertion that it doesn't contribute anything that couldn't be easily rephrased in a much better way? Your example of "Don't use this thing for this purpose", especially if followed by a brief explanation, is an order of magnitude better than "X is crap", and I doubt it took you more than 5 seconds to write.
↑ comment by wedrifid · 2012-01-02T21:08:12.708Z · LW(p) · GW(p)
I often "claim" my downvotes (aka I will post "downvoted" and then give reason.) However, I know that when I do this, I will be downvoted myself. So that is probably one big deterrent to others doing the same.
On the other hand if people agree with your reasons they often upvote it (especially back up towards zero if it dropped negative).
For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant -1 on downvoting comments)
I certainly hope so. I would expect that they disagree with your reasons for downvoting, or else they would not have made their comment. It would take a particularly insightful explanation of your vote for them to believe that your influencing others toward thinking their contribution is negative is itself a valuable contribution.
Also, many people on this site are just a-holes. Sorry.
*arch*
Replies from: None, TheOtherDave↑ comment by [deleted] · 2012-01-02T21:17:54.415Z · LW(p) · GW(p)
For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant -1 on downvoting comments)
I certainly hope so. I would expect that they disagree with your reasons for downvoting, or else they would not have made their comment. It would take a particularly insightful explanation of your vote for them to believe that your influencing others toward thinking their contribution is negative is itself a valuable contribution.
Do you think that's a good thing, or just a likely outcome?
Downvoting explanations of downvotes seems like a really bad idea, regardless of how you feel about the downvote. It strongly incentivizes people not to explain themselves, not to open themselves up for debates, but to just vote and then remove themselves from the discussion.
I don't see how downvoting explanations and more explicit behavior is helpful for rational discourse in any way.
Replies from: MixedNuts, wedrifid↑ comment by MixedNuts · 2012-01-02T21:53:48.821Z · LW(p) · GW(p)
It strongly incentivizes people not to explain themselves, not to open themselves up for debates, but to just vote and then remove themselves from the discussion.
This is exactly the reaction I want to trolls, basic questions outside of dedicated posts, and stupid mistakes. Are downvotes of explanations in those cases also read as an incentive not to post explanations in general?
Replies from: None↑ comment by [deleted] · 2012-01-02T22:02:39.001Z · LW(p) · GW(p)
Speaking for myself, yes. I read it as "don't engage this topic on this site, period".
I agree with downvoting (and ignoring) the types of comments you mentioned, but not explanations of such downvotes. The explanations don't add any noise, so they shouldn't be punished. (Maybe if they got really excessive, but currently I have the impression that too few downvotes are explained, rather than too many.)
↑ comment by wedrifid · 2012-01-02T21:56:48.702Z · LW(p) · GW(p)
Do you think that's a good thing, or just a likely outcome?
Comments can serve as calls to action encouraging others to downvote, or can prime people with a negative or unintended interpretation of a comment - be it yours or that of someone else - and that influence is something to be discouraged. This is not the case with all explanations of downvotes, but it certainly describes the effect, and often the intent, of the vast majority of "Downvoted because" declarations. Exceptions include explanations that are requested, and occasionally reasons that are legitimately surprising or useful. Obviously also an exception is any time when you actually agree they have a point.
↑ comment by TheOtherDave · 2012-01-02T21:19:04.789Z · LW(p) · GW(p)
I might well consider an explanation of a downvote on a comment of mine to be a valuable contribution, even if I continue to disagree with the thinking behind it. Actually, that's not uncommon.
↑ comment by Nornagest · 2012-01-02T22:13:55.869Z · LW(p) · GW(p)
If I downvote with comment, it's usually for a fairly specific problem, and usually one that I expect can be addressed if it's pointed out; some very clear logical problem that I can throw a link at, for example, or an isolated offensive statement. I may also comment if the post is problematic for a complicated reason that the poster can't reasonably be expected to figure out, or if its problems are clearly due to ignorance.
Otherwise it's fairly rare for me to do so; I see downvotes as signaling that I don't want to read similar posts, and replying to such a post is likely to generate more posts I don't want to read. This goes double if I think the poster is actually trolling rather than just exhibiting some bias or patch of ignorance. Basically it's a cost-benefit analysis regarding further conversation; if continuing to reply would generate more heat than light, better to just downvote silently and drive on.
It's uncommon for me to receive retaliatory downvotes when I do comment, though.
↑ comment by Prismattic · 2012-01-02T20:32:54.999Z · LW(p) · GW(p)
Also, many people on this site are just a-holes. Sorry.
I think it's more that there are a few a-holes, but they are very prolific (well, that and the same bias that causes us to notice how many red lights we get stopped at but not how many green lights we speed through also focuses our attention on the worst posting behavior).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-02T21:22:00.401Z · LW(p) · GW(p)
Interesting. Who are the prolific "a-holes"?
Replies from: Prismattic↑ comment by Prismattic · 2012-01-02T21:31:01.281Z · LW(p) · GW(p)
Explicitly naming names accomplishes nothing except inducing hostility, as it will be taken as a status challenge. Not explicitly naming names, one hopes, leaves everyone re-examining whether their default tone is appropriately calibrated.
Replies from: TheOtherDave, wedrifid, MixedNuts↑ comment by TheOtherDave · 2012-01-02T21:43:01.290Z · LW(p) · GW(p)
I agree with you that naming names can be taken as a status challenge.
Of course, this whole topic positions you as an adjudicator of appropriate calibration, which can be taken as a status grab, for the excellent reason that it is one. Not that there's anything wrong with going for status.
All of that notwithstanding, if you prefer to diffuse your assertions of individual inappropriate behavior over an entire community, that's your privilege.
↑ comment by Prismattic · 2012-01-02T22:16:26.921Z · LW(p) · GW(p)
I care about my status on this site only to the extent that it remains above some minimum required for people not to discount my posts simply because they were written by me.
My interest in this thread is that, like Daenerys, I think the current norm for discourse is suboptimal, but I think I give greater weight to the possibility that some of the suboptimal behavior is people defecting by accident; hence the subtle push for occasional recalibration of tone.
Replies from: wedrifid, TheOtherDave↑ comment by wedrifid · 2012-01-02T22:33:22.906Z · LW(p) · GW(p)
hence the subtle push for occasional recalibration of tone.
There was a subtle push? I must have missed that while I was distracted by the blatant one!
Replies from: Prismattic↑ comment by Prismattic · 2012-01-02T22:38:00.435Z · LW(p) · GW(p)
See, it's working!
↑ comment by TheOtherDave · 2012-01-02T23:32:25.851Z · LW(p) · GW(p)
Just to be clear: I'm fine with you pushing for a norm that's optimal for you. Blatantly, if you want to; subtly if you'd rather.
But I don't agree that the norm you're pushing is optimal for me, and I consider either of us pushing for the establishment of norms that we're most comfortable with to be a status-linked social maneuver.
Replies from: Prismattic↑ comment by Prismattic · 2012-01-03T00:02:19.082Z · LW(p) · GW(p)
But I don't agree that the norm you're pushing is optimal for me,
Why? (A sincere question, not a rhetorical one)
and I consider either of us pushing for the establishment of norms that we're most comfortable with to be a status-linked social maneuver.
I'm not sure how any post avoids doing this; many posts push to maintain the status quo, but all posts implicitly favor some set of norms.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-03T02:17:22.096Z · LW(p) · GW(p)
I agree that pretty much all communication does this, yes. Sometimes explicitly, sometimes implicitly.
As to why... because I see the norm you're pushing as something pretty close to the cultural baseline of the "friendly" pole of the American mainstream, which I see as willing to trade off precision and accuracy for getting along. You may even be pushing for something even more "get along" optimized than that.
I mostly don't mind that the rest of my life more or less optimizes for getting along, though I often find it frustrating when it means that certain questions simply can't ever be asked in the first place, and that certain answers can't be believed when they're given because alternative answers are deemed too impolite to say. Still, as I say, I accept it as a fact about my real-life environment. I probably even prefer it, as I acknowledge that optimizing for precision and accuracy at the expense of getting along would be problematic if I could never get away from it, however tired or upset I was.
That said, I value the fact that LW uses a different standard, one that optimizes for accuracy and precision, and therefore efforts to introduce the baseline "get along" standard to LW remove local value for me.
Again, let me stress that I'm not asserting that you ought not make those efforts. If that's what you want, then by all means push for it. If you are successful, LW will become less valuable to me, but you're not under any kind of moral obligation to preserve the value of the Internet to me.
But speaking personally, I'd prefer you didn't insist, as you did, that those efforts are actually in my best interests, with the added implication that I can't recognize my interests as well as you can.
↑ comment by wedrifid · 2012-01-02T23:30:49.231Z · LW(p) · GW(p)
Not explicitly naming names, one hopes, leaves everyone re-examining whether their default tone is appropriately calibrated.
It left me evaluating whether it was me personally that was being called an asshole or others in the community and whether those others are people that deserve the insult or not. Basically I needed to determine whether it was a defection against me, an ally or my tribe in general. Then I had to decide what, if any, was an appropriate, desirable and socially acceptable tit-for-tat response. I decided to mostly ignore him because engaging didn't seem like it would do much more than giving him a platform from which to gripe more.
Replies from: magfrump, dlthomas↑ comment by dlthomas · 2012-01-02T23:42:32.500Z · LW(p) · GW(p)
Why do you feel it's correct to interpret it as defection in the first place?
Replies from: wedrifid↑ comment by wedrifid · 2012-01-03T00:43:26.303Z · LW(p) · GW(p)
Why do you feel it's correct to interpret it as defection in the first place?
In case you were wondering the translation of this from social-speak to Vulcan is:
Calling people assholes isn't a defection, therefore you saying - and in particular feeling - that labeling people as assholes is a defection says something personal about you. I am clever and smooth for communicating this rhetorically.
So this too is a defection. Not that I mind - because it is a rather mild defection that is well within the bounds of normal interaction. I mean... it's not like you called me an asshole or anything. ;)
Replies from: dlthomas↑ comment by dlthomas · 2012-01-03T06:35:47.875Z · LW(p) · GW(p)
That is not a correct translation. Calling someone an asshole may or may not be defection. In this case, I'm not sure whether it was. Examining why you feel that it was may be enlightening to me or to you or hopefully both. Defecting by accident is a common flaw, for sure, but interpreting a cooperation as a defection is no less damaging and no less common.
↑ comment by MixedNuts · 2012-01-02T21:58:56.401Z · LW(p) · GW(p)
Am I an asshole?
I'm already working on not being an asshole in general, and on not being an asshole to specific people on LW. If someone answers "yes" to that I'll work harder at being a non-asshole on LW. Or post less. Or try to do one of those for two days then forget about the whole thing.
Replies from: wedrifid, Prismattic↑ comment by Prismattic · 2012-01-02T22:11:19.665Z · LW(p) · GW(p)
If you're already working on it, you're probably in the clear. Not being an a-hole is a high-effort activity for many of us; in this case I will depart from primitive consequentialism and say that effort counts for something.
Replies from: wedrifid↑ comment by Multiheaded · 2012-01-02T21:04:50.191Z · LW(p) · GW(p)
Yeah, I do retaliate quite commonly (less than 60% retaliation ITT though), but I've never been an asshole on LW until this thread. Not particularly planning on repeating this, but I'm not sorry at all. Forced civility just doesn't fit the mood of this topic at all in my eyes.
↑ comment by wedrifid · 2012-01-01T10:04:13.897Z · LW(p) · GW(p)
Tiny kittens are also cute and haven't even done anything to death yet. But if you accidentally lock one in a car and it suffocates, that's merely unfortunate, and should probably not be a crime. The same is true for infants and all other non-person life. If you kill a kitten for some reason other than sadism, well, it's unfortunate that you felt that was necessary, but again, they're not people.
Yeah, I get it, you don't consider babies people and I do. So pretty much we just throw down (ie. trying to reason each other into having the same values as ourselves would be pointless). You vote for baby killing, I vote against it. If there is a war of annihilation and I'm forced to choose sides between the baby killers and the non-baby killers and they seem evenly matched then I choose the non-baby killers side and go kill all the baby killers. If I somehow have the option to exclude all consideration of your preferences from the optimisation function of an FAI then I take it. Just a plain ol' conflict of terminal values.
Replies from: Bakkot, nshepperd, wedrifid↑ comment by Bakkot · 2012-01-01T19:03:42.401Z · LW(p) · GW(p)
Replies from: wedrifid↑ comment by wedrifid · 2012-01-01T19:52:41.634Z · LW(p) · GW(p)
Do, say, pigs also meet this definition?
If babies were made of bacon then I'd have to rerun the moral calculus all over again! ;)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T20:04:08.276Z · LW(p) · GW(p)
Well, they are made of eggs. Actual eggs and counterfactual bacon are an important part of this nutritious breakfast.
↑ comment by nshepperd · 2012-01-02T13:21:24.337Z · LW(p) · GW(p)
trying to reason each other into having the same values as ourselves would be pointless
How do you know?
Replies from: wedrifid, Multiheaded↑ comment by wedrifid · 2012-01-02T13:45:29.419Z · LW(p) · GW(p)
How do you know?
It is a core belief of Bakkot's - nothing is going to change that. His thinking on the matter is also self-consistent. Only strong social or personal influence has a chance of making a difference (for example, if he has children, all his friends have children, and he becomes embedded in a tribe where non-baby-killing is a core belief). For my part I understand Bakkot's reasoning but do not share his preference-based premises. As such, changing my mind regarding the conclusion would make no sense.
More succinctly I don't expect reasoning with each other to change our minds because neither of us is wrong (in the intellectual sense). We shouldn't change our minds based on intellectual arguments - if we do then we are making a mistake.
Replies from: nshepperd↑ comment by nshepperd · 2012-01-02T14:33:41.955Z · LW(p) · GW(p)
It is a core belief of Bakkot's - nothing is going to change that.
Yes, and my question is how do you know? Admittedly I haven't read the entire thread from the beginning, but in the large part I have, I see nothing to suggest that there is anything particularly immutable about either of your positions such that neither of you could possibly change your mind based on normal moral-philosophical arguments. What makes you so quick to dismiss your interlocutor as a babyeating alien?
Replies from: wedrifid↑ comment by wedrifid · 2012-01-02T17:41:54.134Z · LW(p) · GW(p)
Yes, and my question is how do you know?
I trust his word.
What makes you so quick to dismiss your interlocutor
You're spinning this into a dismissal, disrespect of Bakkot's intellectual capability or ability to reason. Yet disagreement does not equal disrespect when it is a matter of different preferences. It is only when I think an 'interlocutor' is incapable of understanding evidence and reasoning coherently (due to, say, biases or ego) that observing that reason cannot persuade each other is a criticism.
as a babyeating alien?
He is a [babykilling advocate]. He says he is a babykilling advocate. He says why. That I acknowledge that he is an advocate of infanticide rights is not, I would hope, offensive to him.
I note that while Bakkot's self expression is novel, engaging and coherent (albeit contrary to my values), your own criticism is not coherent. You asked "how do you know?" and I gave you a straight answer. Continued objection makes no sense.
Replies from: nshepperd↑ comment by nshepperd · 2012-01-03T00:44:17.984Z · LW(p) · GW(p)
I trust his word.
He said his mind could never be changed on this?
You're spinning this into a dismissal, disrespect of Bakkot's intellectual capability or ability to reason. Yet disagreement does not equal disrespect when it is a matter of different preferences.
Spinning? I'm not trying to spin anything into anything. You said this was a matter of different preferences before, and I understood the first time. You don't need to repeat it. My criticism is about why you think this a difference in values rather than a mere confusion of them. (Also, "dismissal" has connotations, but I can't think of a better word to capture "throwing up your hands and going to war with them")
He is a [babykilling advocate]. He says he is a babykilling advocate. He says why. That I acknowledge that he is an advocate of infanticide rights is not, I would hope, offensive to him.
Emphasis was meant to be on alien. Aliens are distinguished by, among other things, not living in our moral reference frame.
Replies from: wedrifid↑ comment by Multiheaded · 2012-01-02T14:10:10.346Z · LW(p) · GW(p)
Akon was resting his head in his hands. "You know," Akon said, "I thought about composing a message like this to the Babyeaters. It was a stupid thought, but I kept turning it over in my mind. Trying to think about how I might persuade them that eating babies was... not a good thing."
The Xenopsychologist grimaced. "The aliens seem to be even more given to rationalization than we are - which is maybe why their society isn't so rigid as to actually fall apart - but I don't think you could twist them far enough around to believe that eating babies was not a babyeating thing."
"And by the same token," Akon said, "I don't think they're particularly likely to persuade us that eating babies is good." He sighed. "Should we just mark the message as spam?"
Replies from: nshepperd
↑ comment by nshepperd · 2012-01-02T14:44:07.029Z · LW(p) · GW(p)
The question was "how do you know?", not "what do you mean?". Aliens are almost certain to fundamentally disagree with humans in a variety of important matters, by simple virtue of not being genetically related to us. Bakkot is a human. Different priors are called for.
↑ comment by wedrifid · 2012-01-01T10:17:53.305Z · LW(p) · GW(p)
Oh, and to clarify the extent of my disagreement: When I say "You vote for baby killing, I vote against it" that assumes I don't live in some backwards country without compulsory voting. If voting is optional then I'm staying home. Other people killing babies is not my problem - because I don't have the power to stop a mob of humans from killing babies and I'm not interested in making the token gesture.
↑ comment by Solvent · 2012-01-02T03:16:20.644Z · LW(p) · GW(p)
What do you think of abortion?
Replies from: None, gwern, wedrifid↑ comment by [deleted] · 2012-01-02T09:53:43.079Z · LW(p) · GW(p)
Once we get artificial uteri, I think it should be illegal except in cases of rape, but it should be legal to renounce all responsibility for it and put it up for adoption, or to let the other biological parent finance the baby's coming to term. This has the neat and desirable effect of equalizing the position of the biological father and the biological mother.
Replies from: None↑ comment by [deleted] · 2012-01-02T09:59:15.112Z · LW(p) · GW(p)
uterus's
Uteri?
Replies from: None↑ comment by [deleted] · 2012-01-02T10:00:25.887Z · LW(p) · GW(p)
Not a native speaker. And uterus is a surprisingly sparingly used word.
Uterus. Uterus. Uterus.
Thanks for the correction! :)
Replies from: None↑ comment by [deleted] · 2012-01-02T10:01:55.209Z · LW(p) · GW(p)
Any time ;)
Just remember that if it ends with -us, it probably pluralizes to -i. That's only for Latin-based words. Greek-based words, like octopus, can be pluralized either to octopuses or octopodes (pronounced Ahk-top-o-dees). And sometimes you have a new or technical Latin-based word like "virus" which just pluralizes to "viruses." It's perfectly fine to pluralize uterus to uteruses, too, since it's so uncommon. English is a bitch.
[Edited to give a longer explanation]
↑ comment by gwern · 2012-01-02T04:06:42.264Z · LW(p) · GW(p)
I have to say, http://lesswrong.com/lw/47k/an_abortion_dialogue/ seems relevant to this entire comment tree.
Replies from: TimS, Solvent↑ comment by TimS · 2012-01-02T04:09:02.508Z · LW(p) · GW(p)
Your link (in the Discussion post) is broken.
Replies from: gwern↑ comment by gwern · 2012-01-02T04:33:41.266Z · LW(p) · GW(p)
! I didn't realize I'd broken all the old .html links - it turned out that when I thought I was removing the gzip encoding, I also removed the Apache rewrite rules. I've fixed that and also pointed the Discussion at the most current URL, just in case.
↑ comment by Solvent · 2012-01-02T04:22:50.483Z · LW(p) · GW(p)
This link works. http://www.gwern.net/An%20Abortion%20Dialogue
↑ comment by MileyCyrus · 2012-01-01T08:32:00.431Z · LW(p) · GW(p)
Why is sadism worse than indifference? Are we punishing people for their mental states?
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T08:33:43.356Z · LW(p) · GW(p)
Replies from: Solvent↑ comment by Solvent · 2012-01-01T08:35:18.548Z · LW(p) · GW(p)
Why does that seem like a reasonable thing to do? Isn't that just an incentive to lie about motives?
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T08:40:49.070Z · LW(p) · GW(p)
Replies from: None, Solvent↑ comment by [deleted] · 2012-01-02T09:46:40.049Z · LW(p) · GW(p)
Allowing sadists to kill their babies creates incentive to produce babies for the sole purpose of killing them, which is a behavior which is long-run going to be very damaging to society.
It's illegal to torture an animal. Why wouldn't it be illegal to torture a baby while killing him? If a sadist can get jollies out of killing his children with painless poison and keeps making them for that purpose, I can't really see how this harms wider society if he pays for the pills and children himself.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T09:57:53.415Z · LW(p) · GW(p)
If a sadist can get jollies out of killing his children with painless poison and keeps making them for that purpose, I can't really see how this harms wider society if he pays for the pills and children himself.
Please rethink this. E.g. are you at all confident that this sadist wouldn't slip and go on to adults after their 10th child? Wouldn't you, personally, force people who practice this to wear some mandatory identification in public, so you don't have to wonder about every creepy-looking stranger? Don't you just have an intuition about the myriad ways that giving sadists such rights could undermine society?
Replies from: None↑ comment by [deleted] · 2012-01-02T10:01:28.287Z · LW(p) · GW(p)
E.g. are you at all confident that this sadist wouldn't slip and go on to adults after their 10th child?
Fine, make it illegal for this to be done except by experts.
Wouldn't you, personally, force people who practice this to wear some mandatory identification in public, so you don't have to wonder about every creepy-looking stranger?
No, why?
Don't you just have an intuition about the myriad ways that giving sadists such rights could undermine society?
We already give sadists lots of rights to psychologically and physically abuse people when this is done with consent, or when we don't feel like being morally consistent, or when there is some societal benefit to be had.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T10:17:55.600Z · LW(p) · GW(p)
Wouldn't you, personally, force people who practice this to wear some mandatory identification in public, so you don't have to wonder about every creepy-looking stranger? - No, why?
For your own safety, in every regard that such people could threaten it.
We already give sadists lots of rights to psychologically and physically abuse people when this is done with consent, or when we don't feel like being morally consistent, or when there is some societal benefit to be had.
Well, I've always thought that it's enormously and horribly wrong of us.
Replies from: None↑ comment by [deleted] · 2012-01-02T10:31:37.322Z · LW(p) · GW(p)
For your own safety, in every regard that such people could threaten it.
I don't think society considers that a valid reason for discrimination.
Also, please remember that surgeons could do nasty things to me without flinching if they wanted to; people do occasionally have such fears, since we even invoke this trope in horror movies.
Well, I've always thought that it's enormously and horribly wrong of us.
I generally agree.
But on the other hand, I think we should give our revealed preferences some weight as well; remember, we are godshatter. Maybe we should just accept that perhaps we don't care as much about other people's suffering as we'd like to believe or say we do.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T10:39:19.406Z · LW(p) · GW(p)
I don't think society considers that a valid reason for discrimination.
Yes, society might, if it takes into account that it loathes most people with those characteristics to begin with.
remember, we are godshatter. Maybe we should just accept that perhaps we don't care as much about other people's suffering as we'd like to believe or say we do.
Maybe if we do bother to self-modify in some direction along the vector of one of our "shards", it could just as well be a direction we see as more virtuous? Making ourselves care as much as we'd privately want to, at least to try and see how it goes?
Replies from: None↑ comment by [deleted] · 2012-01-02T11:38:23.046Z · LW(p) · GW(p)
Making ourselves care as much as we'd privately want to, at least to try and see how it goes?
Revealed preferences are precisely what we end up doing and actually desiring once we get into a certain situation. Why not work it out the other way around? How can you be sure maximum utility lies along this shard's vector and not another's?
Because it sounds good? To 21st century Westerners?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T11:47:06.046Z · LW(p) · GW(p)
How can you be sure maximum utility lies along this shard's vector and not another's?
My current values simply DO point in the direction of rewriting parts of my utility function like I suggest, and not like you suggest.
Because it sounds good? To 21st century Westerners?
Sure, might as well stick with this reason. I haven't yet seen an opposing one that's convincing to me.
Replies from: None↑ comment by [deleted] · 2012-01-02T11:57:57.009Z · LW(p) · GW(p)
My current values simply DO point in the direction of rewriting parts of my utility function like I suggest, and not like you suggest.
When currently thinking in far mode about this you like the idea, but seeing it in practice might easily horrify you.
In any case, when I was talking about maximising utility, I was talking about you maximising your utility. You can easily be mistaken about what does and doesn't do that.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T12:20:05.976Z · LW(p) · GW(p)
When currently thinking in far mode about this you like the idea, but seeing it in practice might easily horrify you.
I say the same about the general shape of your modern-society-with-legalized-infanticide.
Replies from: None↑ comment by [deleted] · 2012-01-02T12:24:57.240Z · LW(p) · GW(p)
And you are right to say so!
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T12:29:37.312Z · LW(p) · GW(p)
Uh huh, thanks. The difference is, I'm quite a bit more distrustful of the prospects of your legal infanticide than you are of my personal self-modification.
Replies from: None↑ comment by [deleted] · 2012-01-02T12:37:02.753Z · LW(p) · GW(p)
The difference is, I'm quite a bit more distrustful of the prospects of your legal infanticide than you are of my personal self-modification.
I'm not sure this is so. We should update towards each other's estimates of the other's distrustfulness. I'm literally horrified by the possibility of a happy death spiral around universal altruism.
↑ comment by Solvent · 2012-01-01T08:49:40.435Z · LW(p) · GW(p)
I don't understand your reasoning for either of those dot points.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T09:02:07.086Z · LW(p) · GW(p)
Replies from: soreff, None↑ comment by soreff · 2012-01-01T15:16:47.929Z · LW(p) · GW(p)
The idea is that a woman repeatedly getting pregnant and then killing the child is putting a lot of strain on society, both in terms of resources and in terms of comfort. We allow a lot of privileges for pregnant women and new mothers, with the expectation that they're trying to bring new people into society, something we encourage.
I'd think that the bulk of the resource cost of a newborn is the physiological cost (and medical risks) the mother endured during pregnancy. The general societal cost seems small in comparison.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T15:21:29.954Z · LW(p) · GW(p)
Sure, that seems true. Note that Bakkot didn't say that the costs to everyone else outweighed the costs to the mother, merely that the costs to everyone else were also substantial.
↑ comment by [deleted] · 2012-01-02T09:50:58.451Z · LW(p) · GW(p)
This point is less important. The idea is that a woman repeatedly getting pregnant and then killing the child is putting a lot of strain on society, both in terms of resources and in terms of comfort. We allow a lot of privileges for pregnant women and new mothers, with the expectation that they're trying to bring new people into society, something we encourage. If you're killing your kid out of sadism, you're not doing this, and society will have to adjust how all pregnant women are treated.
We already treat accidentally pregnant women basically the same as those who planned their pregnancy. Clearly we should distinguish and discriminate between them rather than lump them into the "pregnant woman" category (I take a lighter tone in some of my other posts here to provoke thought, but I'm dead serious about this).
Also, many people are way too stuck in their 21st-century Eurocentric frame of mind to comprehend the personhood argument for infanticide properly. Let me help:
This point is less important. The idea is that a woman repeatedly getting pregnant and then aborting the child is putting a lot of strain on society, both in terms of resources and in terms of comfort. We allow a lot of privileges for pregnant women and new mothers, with the expectation that they're trying to bring new people into society, something we encourage. If you're killing your fetus out of sadism, you're not doing this, and society will have to adjust how all pregnant women are treated.
↑ comment by TimS · 2012-01-01T20:02:54.067Z · LW(p) · GW(p)
On infanticide, is this a reasonable summary of your position:
Replies from: Bakkot
Adult humans have a moral quality (let's call it "blicket") that most animals lack. One major consequence of blicket is that morally acceptable killings require much more significant justifications when the victim is a blicket-creature ("I killed him in self-defense") than when the victim is not a blicket-creature ("Cows are delicious, and I was hungry"). Empirically, cows don't have blicket and never will without some extraordinary intervention. Six-month old babies lack blicket, but are likely to develop it during ordinary maturation.
↑ comment by Bakkot · 2012-01-01T20:12:13.142Z · LW(p) · GW(p)
Replies from: TimS↑ comment by TimS · 2012-01-01T20:36:23.910Z · LW(p) · GW(p)
Ok. I agree with you on the empirical assertions (I actually suspect that 10-month-olds also lack blicket). But my moral theory gives significant weight to blicket-potential (because blicket is that awesome), while your system does not appear to do so. Why not?
You mentioned to someone that the current system of being forced to provide for a child or place the child in foster care is suboptimal. I assume a substantial part of that position is that foster care is terrible (i.e. unlikely to produce high-functioning adults).
I agree that one solution to this problem is to end the parental obligation (i.e. allow infanticide). This solution has the benefit of being very inexpensive. But why do you think that solution is better than the alternative solution of fixing foster care (and low quality child-rearing practice generally) so that it is likely to produce high-quality adults?
Replies from: Bakkot, daenerys, nshepperd↑ comment by Bakkot · 2012-01-01T20:57:20.567Z · LW(p) · GW(p)
Replies from: TimS↑ comment by TimS · 2012-01-01T21:17:06.339Z · LW(p) · GW(p)
I agree there is a scale about how much weight to give blicket-potential. But I support a meta-norm about constructing a morality that the morality should add up to normal, absent compelling justification.
That is, if a proposed moral system says that some common practice is deeply wrong, or that some common prohibition has relatively few negative consequences if permitted, that's a reason to doubt the moral construction unless a compelling case can be made. It's not impossible, but a moral theory that says we've all been doing it wrong should not be expected either.
The fact that my calibration of my blicket-potential sensitivity mostly adds up to normal is evidence to me that the model is a fairly accurate description of the morality people say they are applying.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T21:34:06.116Z · LW(p) · GW(p)
Replies from: TimS↑ comment by TimS · 2012-01-02T00:48:56.439Z · LW(p) · GW(p)
making infanticide illegal is something which appears to be a very Judeo-Christian affectation, rather than a moral universalism.
This is a historical claim that requires a bit more evidence in support. I don't doubt that infanticide has a rich historical pedigree. But I don't think infanticide was ever justified on a "human autonomy" basis, which seems to be your justification. For example, the relatively recent dynamic of Chinese sex-selection infanticide has not been based on any concept of personal autonomy, as far as I can tell.
In general, I suspect that most cultures that tolerated infanticide were much lower on the human-autonomy scale than our current civilization (i.e. valued individual human life less than we do).
Replies from: gwern, Bakkot↑ comment by gwern · 2012-01-02T00:54:38.373Z · LW(p) · GW(p)
I did some reading on the ancients and infanticide, and the picture is murky - the Christians were not responsible for making infanticide illegal, that seems to have preceded them, but they claimed the laws were honored mostly in the breach, so whether you give any credit to them depends on your theories of causality, large-scale trends, and whether the Christians made any meaningful difference to the actual infanticide rate.
↑ comment by Bakkot · 2012-01-02T01:59:08.060Z · LW(p) · GW(p)
Replies from: wedrifid, TimS↑ comment by wedrifid · 2012-01-02T02:01:55.144Z · LW(p) · GW(p)
It's difficult to draw conclusions about this, because most historical cultures made fairly little effort to support their conventions at all. However, it's certainly been my impression that a lot more cultures were OK with casual infanticide than with casual murder. This suggests strongly to me that the view of newborns as people is not universal.
Cultures are often fine with killing wives and children too, if they get too far out of line. They are yours after all.
Replies from: Bakkot↑ comment by TimS · 2012-01-02T04:03:12.343Z · LW(p) · GW(p)
Sigh. How did the post-modern moral nihilist become the defender of moral universalism? My argument is more that infanticide fits extremely poorly within the cluster of values that we've currently adopted.
most historical cultures made fairly little effort to support their conventions at all.
I am highly skeptical that this is true.
Replies from: CharlieSheen, Bakkot↑ comment by CharlieSheen · 2012-01-02T11:17:51.360Z · LW(p) · GW(p)
An uncle of mine who is a doctor said that SIDS is a codeword for infanticide and that many of his colleagues admit as much.
Replies from: TimS↑ comment by TimS · 2012-01-07T21:55:07.637Z · LW(p) · GW(p)
Either my model is false or this story is wrong.
Specifically, I can't understand why a coroner would not take actions to facilitate the prosecution of a crime (infanticide is murder), because that is one of the jobs of a coroner.
By contrast, I've heard that coroners are quite willing to label a death as accidental when they believe it was suicide, because any legal violations are not punishable (suicide is generally illegal, but everyone agrees that prosecution is pointless).
Replies from: Multiheaded, Prismattic, None↑ comment by Multiheaded · 2012-01-08T09:07:38.310Z · LW(p) · GW(p)
Specifically, I can't understand why a coroner would not take actions to facilitate the prosecution of a crime (infanticide is murder), because that is one of the jobs of a coroner.
Because he, like some who have posted here, is sympathetic to the baby-killing mothers under certain circumstances and doesn't mind helping them avoid prosecution? I wouldn't judge him, heavens forbid. I'd likely do the opposite in his place, but I respect his position.
Replies from: TimS↑ comment by Prismattic · 2012-01-08T00:15:56.457Z · LW(p) · GW(p)
By contrast, I've heard that coroners are quite willing to label a death as accidental when they believe it was suicide, because any legal violations are not punishable (suicide is generally illegal, but everyone agrees that prosecution is pointless).
Labelling a suicide as an accident isn't legally trivial. It is, at least in some cases, an action that favors the interests of the heirs of suicides and disfavors the interests of life insurance companies.
Replies from: TimS↑ comment by TimS · 2012-01-08T23:41:48.409Z · LW(p) · GW(p)
I agree that it isn't legally trivial. But the social consequences of labeling a death as suicide are much more immediate than any financial consequences from labeling a death as accidental. Also, I'm not sure what percentage of the suicidal have life insurance, so I'm not sure how much weight the hypothetical coroner would place on the life insurance issue.
I'm not saying the position is rational or morally correct, but it wouldn't surprise me that an influential person like a coroner held a position vaguely like "screw insurance companies." (>>75%)
By contrast, I would be extremely surprised to learn that a coroner was willing to ignore an infanticide, absent collusion (i.e. bribery) of some kind (<<<1%)
↑ comment by Prismattic · 2012-01-08T23:50:54.566Z · LW(p) · GW(p)
(I don't believe CharlieSheen's anecdote either. I was challenging the suicide point in isolation.)
But the social consequences of labeling a death as suicide are much more immediate than any financial consequences from labeling a death as accidental.
Say what now? Possibly it's because my background is Jewish, not Christian, but I don't buy that at all.
Replies from: TimS↑ comment by TimS · 2012-01-09T00:49:24.445Z · LW(p) · GW(p)
Normatively, suicide is shameful in modern society. By contrast, I don't think most suicide-victim families (or their social network) are thinking about the life insurance proceeds at the time (within a week?) that the coroner is determining cause of death.
I know I've heard of a survey of coroners in which some substantial percentage (20-50%; sorry, I don't remember better) of coroners reported that the following had occurred at some point in their career: they believed the cause of death of the body they were examining was suicide, but listed the cause as accident.
I can't find that survey in a quick search, but this research result talks about the effect of elected coroners on cause of death determinations. Specifically, elected coroners were slightly less likely to declare suicide as the cause of death.
↑ comment by Bakkot · 2012-01-02T04:14:34.173Z · LW(p) · GW(p)
Replies from: TimS, Multiheaded↑ comment by TimS · 2012-01-02T16:04:56.191Z · LW(p) · GW(p)
most historical cultures made fairly little effort to support their conventions at all.
I am highly skeptical that this is true.
It looks like I misread you. I thought you were referring to moral conventions generally, while you seem to have been referring to moral conventions on infanticide. I agree that many historical cultures did not oppose infanticide as strongly as the current culture.
↑ comment by Multiheaded · 2012-01-02T08:54:17.415Z · LW(p) · GW(p)
our current standard seems to be "don't kill people"
Major objection. When talking about society at large and not the small cluster of "rationalist" utilitarians (who are ever tempted to be smarter than their ethics), the current standard is "don't kill what our instincts register as people". The distinction being that John Q. Public hardly reflects on the matter at all. I believe that it's a hugely useful standard because it strengthens the relevant ethical injunctions, regardless of any inconveniences that it brings from an act utilitarian standpoint.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-02T19:00:12.676Z · LW(p) · GW(p)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T20:08:37.023Z · LW(p) · GW(p)
The fact that infanticide has been practiced so widely suggests strongly that most people don't "instinctively" see babies as people.
NO! As you have yourself correctly pointed out, it is because most cultures, with ours being a notable exception, assign a low value to "useless" people or people who they feel are a needless drain on society. (mistake fixed)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-02T20:27:26.436Z · LW(p) · GW(p)
Hm. So what seems to follow from this is that most people don't actually consider killing people to be a particularly big deal, what they're averse to is killing people who contribute something useful to society... or, more generally, that most people are primarily motivated by maximizing social value.
Yes? (I don't mean to be pedantic here, I just want to make sure I'm not putting words in your mouth.)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T20:47:13.079Z · LW(p) · GW(p)
Blast me! I meant to say that our culture is an exception, not an "inclusion". So this statement is largely true about non-western cultures, but western ones mostly view the relatively recent concept of "individuality and personhood are sacred" as their main reason against murder.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-02T21:16:59.585Z · LW(p) · GW(p)
Ah, gotcha. That makes sense.
So is your position that we inherited an aversion to murder from earlier non-western cultures, and then when we sanctified personhood we made that our main reason for our pre-existing aversion?
Or that earlier cultures weren't averse to murder, and our sanctification caused us to develop such an aversion?
Or something else?
↑ comment by Multiheaded · 2012-01-02T21:53:42.217Z · LW(p) · GW(p)
Both, probably. We inherited all of their aversion (being a modest amount), and then we developed the sacredness, which, all on its own, added several times more aversion on top of that.
↑ comment by daenerys · 2012-01-01T20:46:26.365Z · LW(p) · GW(p)
But my moral theory gives significant weight to blicket-potential (because blicket is that awesome), while your system does not appear to do so. Why not?
If you say you don't want to kill an infant because of its potential for blicket, then you would also have to apply that logic to abortion and birth control, and come to the conclusion that these are just as wrong as killing infants, since they both destroy blicket-potential.
Fetus- does not have blicket, has potential for blicket - killing it is legal abortion
Infant- does not have blicket (you agreed with this), has potential for blicket - killing it is illegal murder
Does not compute. One or the other outcome needs to be changed, and I'm sure not going to support the illegalization of birth control.
Note: I apologize if this is getting too close to politics, but it is a significant part of the killing-babies debate, and not mentioning it just to avoid a political issue would misrepresent my actual reasons.
Replies from: TimS↑ comment by TimS · 2012-01-01T20:55:46.823Z · LW(p) · GW(p)
At a certain level, all morality is about balancing the demands of conflicting blicket-supported desires. So the balance comes out different at different stages. Yes, the difference between stages is quite arbitrary (and worse: obviously historically contingent).
In short, I wish I had a better answer for you than I am comfortable with arbitrary distinctions (why is the speed limit 55 mph rather than 56?). From an outsider perspective, I'm sure it looks like I've been mind-killed by some version of "The enemy of my enemy (politically active religious conservatives) is my friend."
Replies from: Strange7↑ comment by Strange7 · 2012-06-05T04:18:42.742Z · LW(p) · GW(p)
(why is the speed limit 55 mph rather than 56?)
Somebody did some math about reaction times, kinetic energy from impacts, and fuel economy. That turned out to be a good place to draw the line. For practical purposes, people can drive 60 in a 55 zone under routine circumstances and not get in trouble.
Replies from: Alejandro1, TheOtherDave↑ comment by Alejandro1 · 2012-06-05T04:24:35.527Z · LW(p) · GW(p)
Actually...
The 55 mph speed limit was a vain attempt by the Federal government to reduce gasoline consumption; initially passed in the 1974 Emergency Highway Energy Conservation Act the law was relaxed in 1987 and finally repealed in 1995 allowing states to choose their speed limits. Highways and cars are safer today than in the 1970s and on many highways speed limits were increased to 65 mph. Higher speed limits are often safer because what is worse than speed is variable speed, some people driving fast and some driving slow. When the speed limit is set too low you get lots of people who safely break the law and a few law-abiders who make the roads more dangerous.
Unfortunately vestiges of the 55mph limit remain, in part because police like the 55mph limit which lets them write tickets at will whenever they need an increase in revenues.
↑ comment by TheOtherDave · 2012-06-05T14:07:57.074Z · LW(p) · GW(p)
So, Alejandro's response is correct, but all of this seems rather tangential to the question you quote. The reason the speed limit is 55 rather than 56 or 54 is that we have a cultural preference for multiples of 5... which is also why all the other speed limits I see posted are multiples of 5. Seeing a speed limit sign that read "33" or something would cause me to do a potentially life-threatening double-take.
Replies from: othercriteria↑ comment by othercriteria · 2012-06-05T14:21:20.285Z · LW(p) · GW(p)
They're unusual but they do happen. The "19 MPH" one happens to be from the campus of my alma mater.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-05T14:24:35.319Z · LW(p) · GW(p)
Huh. Some of these I can understand, but I'm really curious about the 19mph one... is there a story behind that? (If I had to guess I'd say it relates to some more global 20mph limit.)
↑ comment by nshepperd · 2012-01-02T13:41:47.396Z · LW(p) · GW(p)
One day in the future, if we somehow survive the existential threats that await us and a Still More Glorious Dawn does, in fact, dawn, we might have machines akin to 3D printers that allow us to construct, atom-by-atom, anything we desire so long as we know its composition and structure.
Suppose I take one of these machines and program it to build me a human, then leave when it's half done. Does the construction chamber have blicket-potential?
Replies from: TimS↑ comment by TimS · 2012-01-02T16:08:56.671Z · LW(p) · GW(p)
Sure. Unborn babies have blicket-potential. Heck, the only reason I don't say that unconceived babies have blicket potential is that I'm not sure that the statement is coherent.
Blicket and blicket-potential are markers that special moral considerations apply. They don't control the moral decision without any reference to context.
↑ comment by Multiheaded · 2012-01-02T13:16:58.211Z · LW(p) · GW(p)
(Let's collect academic opinions here)
The utilitarian bioethicist Peter Singer claims that it's pretty much OK to kill a disabled newborn, but states that killing normal infants who are impossible for their parents to raise doesn't follow from that, and, while not being as bad as murdering an adult, is hardly justifiable. Note that he doesn't quite consider any wider social repercussions.
http://www.princeton.edu/~psinger/faq.html
Replies from: Bakkot, EE43026F↑ comment by Bakkot · 2012-01-02T19:22:19.270Z · LW(p) · GW(p)
Replies from: Vaniver↑ comment by Vaniver · 2012-01-02T19:57:46.081Z · LW(p) · GW(p)
I'm having trouble finding philosophers apart from Singer and Tooley who have written on this topic at all, and both seem to have come to roughly the same conclusions that I did.
Consider Heinlein:
Replies from: Bakkot
All societies are based on rules to protect pregnant women and young children. All else is surplusage, excrescence, adornment, luxury, or folly, which can — and must — be dumped in emergency to preserve this prime function. As racial survival is the only universal morality, no other basic is possible. Attempts to formulate a "perfect society" on any foundation other than "Women and children first!" is not only witless, it is automatically genocidal. Nevertheless, starry-eyed idealists (all of them male) have tried endlessly — and no doubt will keep on trying.
↑ comment by Bakkot · 2012-01-02T20:18:28.329Z · LW(p) · GW(p)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-04T15:28:23.009Z · LW(p) · GW(p)
For one, the idea of basing morality on "racial survival" terrifies me
Eh heh heh. So you can be terrified by some kinds of utilitarian reasoning. Well, this one does terrify me too, but in the context of this conversation I'm tempted to cite my people's saying: "What's fine for a Russian would kill a German."
Replies from: Bakkot↑ comment by Bakkot · 2012-01-04T19:37:20.928Z · LW(p) · GW(p)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-04T22:16:31.114Z · LW(p) · GW(p)
It feels pretty complex, and I just self-report as undecided on some preferences, but, although a part of my function seems to be optimizing for LW-"fun" too, another, smaller part is a preference for "Niceness with a capital N", or "the world feeling wholesome".
I'm not good enough at introspection and self-expression to describe this value of "Niceness", but it seems to resonate with some Christian ideals and images ("love your enemies"), the complex, indirect ethical teachings seen in classical literature (e.g. Akutagawa or Dostoevsky; I love and admire both), and even, on an aesthetic level, the modern otaku culture's concept of "moe" (see this great analysis on how that last one, although looking like a mere pop culture craze to outsiders, can tie in into a larger sensibility).
So, there's an ever-present "minority group" in my largely LW-normal values cluster. I can't quite label it with something like "conservative" or "romantic", but I recognize it when I feel it.
...shit, I feel like some kind of ethical hipster now, lol.
Tl;dr: there might be some kind of "Niceness" (permitting "fun" that's not directly fun) a level or so above "fun" for me, just as there is some kind of "fun" above pleasure for most people (permitting "pleasure" that's not directly pleasant). If people don't wirehead so they can have "fun" and not just pleasure, I'm totally able not to optimize for "fun" so I can have "Niceness" and not just "fun".
↑ comment by EE43026F · 2012-03-01T13:27:12.572Z · LW(p) · GW(p)
More infanticide advocacy here:
Recently, Francesca Minerva published a paper in the Journal of Medical Ethics arguing that:
"what we call ‘after-birth abortion’ (killing a newborn) should be permissible in all the cases where abortion is, including cases where the newborn is not disabled."
Random press coverage complete with indignant comments
Actual paper, pdf, freely available
Replies from: army1987, Multiheaded↑ comment by A1987dM (army1987) · 2012-03-02T13:43:55.006Z · LW(p) · GW(p)
"what we call ‘after-birth abortion’ (killing a newborn) should be permissible in all the cases where abortion is, including cases where the newborn is not disabled."
In many (most?) countries, abortion is normally only allowed in the first few months of pregnancy. (Also, I can't imagine why anyone would want to carry a pregnancy for nine months, give birth to a child, and then kill it, rather than just aborting as soon as possible, anyway.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-02T16:57:40.123Z · LW(p) · GW(p)
Can you imagine how the experiences of childbirth and being the primary caregiver for a newborn might alter someone's desires with respect to bearing and raising a child?
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-02T19:42:53.636Z · LW(p) · GW(p)
As for bearing, once the child is born that's a sunk cost; as for “being the primary caregiver for a newborn”... Wait. So we're not talking about killing a child straight after birth but after a while? (A week? A month? A year?)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-02T19:53:23.692Z · LW(p) · GW(p)
I can't see why that makes a difference in the context of my question, so feel free to choose whichever interpretation you prefer.
For my part, it seems entirely plausible to me that a person's understanding of what it means to be the primary caregiver for a child will change between time T1, when they are pregnant with that child, and time T2, when the child has been born... just as it seems plausible that a person's understanding of what a three-week stay in the Caribbean will be like will change between time T1, when they are at home looking at brochures, and time T2, when their airplane is touching down. That sort of thing happens to people all the time. So it doesn't seem at all odd to me that they might want one thing at T1 and a different thing at T2, which was the behavior you were expressing incredulity about. That seems even more true the more time passes... say, at time T3, when they've been raising the child for a month.
Incidentally, I certainly agree with you that bearing the child is a sunk cost once the child is born. If you're suggesting that, therefore, parents can't change their desires with respect to bearing the child once it's born, I conclude that our models of humans are vastly different. If, alternatively, you're suggesting that it's an error for parents to change their desires with respect to bearing the child once it's born, you may well be right, but in that case I have to conclude "I can't imagine why" was meant rhetorically.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-02T20:32:46.263Z · LW(p) · GW(p)
If, alternatively, you're suggesting that it's an error for parents to change their desires with respect to bearing the child once it's born, you may well be right, but in that case I have to conclude "I can't imagine why" was meant rhetorically.
More like I was assuming too much stuff in the implicit antecedent of the conditional whose consequent is “would want”, but yeah, what I meant is that it's an error for parents to change their desires with respect to bearing the child once it's born.
↑ comment by Multiheaded · 2012-03-02T13:17:27.317Z · LW(p) · GW(p)
Hmm. Maybe you could've picked out a more respectable source of "press coverage" than the goddamn Daily Mail.
↑ comment by Solvent · 2012-01-01T08:07:15.172Z · LW(p) · GW(p)
Infanticide of one's own children should be legal (if done for some reason other than sadism) for up to ten months after birth. Reason: extremely young babies aren't yet people.
You're not the first one to argue this on LW. I'll find you the link in a second. Why can't sadists kill their babies? Why ten months, precisely? More importantly, why can't we kill babies?
Why do you particularly bring up the "discrimination against youth" thing?
But yeah, welcome to LW and all that.
Replies from: wedrifid, Bakkot↑ comment by Strange7 · 2012-06-05T05:35:14.089Z · LW(p) · GW(p)
Would you approve of a man killing a child which his wife recently gave birth to, without the mother's permission, on the grounds that he does not believe himself to be the child's father? That's certainly not sadism.
Or, if genetic testing has been done and the child's biological father is known, would you say it should be legal for the father to kill the child... say, because he disagrees with the married couple's religious beliefs and wants to deny them an easy recruit?
Replies from: Bakkot↑ comment by Bakkot · 2012-06-06T03:35:00.769Z · LW(p) · GW(p)
Replies from: Strange7↑ comment by Strange7 · 2012-06-06T03:46:35.679Z · LW(p) · GW(p)
How would you define "parent," then? It's not a tangent, it's an important edge case. I'm trying to understand exactly where our views on the issue differ.
For what it's worth, I agree with you unreservedly on the age discrimination thing. In fact, I think it's the root of a lot of the current economic problems: a majority of the population is essentially being warehoused during their formative years, and then expected to magically transform into functional, productive adults afterward.
Replies from: Bakkot↑ comment by Multiheaded · 2012-01-03T17:47:40.268Z · LW(p) · GW(p)
We had a couple of fair-sized threads on infanticide before. I suggest that everyone who hasn't seen them yet skim through them before posting further arguments.
http://lesswrong.com/lw/2l/closet_survey_1/1ou
http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1rmf
Also: http://lesswrong.com/lw/35h/why_abortion_looks_more_okay_to_us_than_killing/
↑ comment by ArisKatsaris · 2012-01-02T10:14:03.705Z · LW(p) · GW(p)
Infanticide of one's own children should be legal (if done for some reason other than sadism) for up to ten months after birth.
What benefit, other than satisfaction of sadism, do you see in infanticide of one's own children that wouldn't be satisfied by merely giving them up for adoption?
Replies from: Bakkot, juliawise↑ comment by Bakkot · 2012-01-02T19:08:12.976Z · LW(p) · GW(p)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-04T15:35:52.167Z · LW(p) · GW(p)
I don't think things should be illegal just because we can't think of a good reason for people to be doing them
This rule has to be examined very, very closely. While it sounds good, it spawns so many strawmen against libertarianism and such that we ought to try and unscrew that applause light of "liberty" from it. Liberty is an applause light for me too (a reflected one from freedom-in-general), and a fine value it is, but we'd still better clinically examine anything that allows us to sidestep our intuitions so much.
[fucking politics, watch out] (note that I'm a socialist and rather opposed to libertarianism as well, but I'm very willing to examine and consider its ups and downs)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-04T16:09:05.767Z · LW(p) · GW(p)
Well, OK, let's examine it then.
We have some activity.
We see no particular reason to prevent people from doing that activity.
We see no good reason for people to do that activity.
We have a proposed law that makes that activity illegal.
Do I endorse that law?
The only case I can think of where I'd say yes is if the law also performs some other function, the benefit of which outweighs the inefficiencies associated with preventing this activity, and for some reason separating those two functions is more expensive than just preventing the activity. (This sort of thing happens in the real world all the time.)
Can you think of other cases?
I agree with you, by the way, that liberty-as-applause-light is a distraction from thinking clearly about these sorts of questions. Perhaps efficiency is as well, but if so it's one I have much more trouble reasoning past... I neither love that law nor hate it, but it is taking up energy I could use for something else.
Replies from: Strange7↑ comment by Strange7 · 2012-06-05T01:52:22.707Z · LW(p) · GW(p)
Proposed law, or preexisting law?
As pointed out here, tribal traditions tend to have been adopted and maintained for some good reason or other, even if people can't properly explain what that reason is, and that goes double for the traditions that are inconvenient or silly-sounding.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-05T03:45:03.645Z · LW(p) · GW(p)
Pace Chesterton, I don't see that much difference, especially when the context changes significantly from decade to decade. If there's a pre-existing law preventing the activity, I will probably devote significantly more effort to looking for a good reason to prevent that activity than for a proposed law, but not an infinite amount of effort; at some point either I find such a reason or I don't endorse the law.
↑ comment by juliawise · 2012-01-02T19:59:17.411Z · LW(p) · GW(p)
Look at the youngest children in any adoption photolisting. The kids you usually see there are either part of a sibling group, or very disabled. (Example). There are children born with severe disabilities who are given up by their birth parents and are never adopted. (Example) The government pays foster parents to care for them. That's up to $2,000 per month for care, plus all medical expenses.
Meanwhile, other kids are dying for lack of cheap mosquito nets. This use of money does not seem right to me.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T21:19:42.977Z · LW(p) · GW(p)
At the national level and above, the argument about "use of money" just plain fails. If you're looking for expenses to cut so that the money can be redirected to glaring needs like mosquito nets, foster care can't realistically appear on the cut list next to nuclear submarines and spaceflight.
Replies from: juliawise↑ comment by [deleted] · 2012-01-01T20:33:37.661Z · LW(p) · GW(p)
Infanticide of one's own children should be legal (if done for some reason other than sadism) for up to ten months after birth. Reason: extremely young babies aren't yet people.
Why not permit the killing of babies not your own, for the same reason?
Replies from: Bakkot, None, None↑ comment by Bakkot · 2012-01-01T20:39:09.640Z · LW(p) · GW(p)
Replies from: None↑ comment by [deleted] · 2012-01-01T20:51:04.028Z · LW(p) · GW(p)
It causes me a certain level of distress when a baby is harmed or killed, even if it is of no relation to me. Many people (perhaps almost all people) experience a similar amount of distress. Is it your point of view that the aggregate amount of harm caused in this way is not large enough to justify the prohibition on killing babies?
Perhaps what you mean to argue with the house analogy is not that the parent is harmed, but that his property rights have been violated.
Replies from: Bakkot, None↑ comment by Bakkot · 2012-01-01T21:04:55.262Z · LW(p) · GW(p)
Replies from: None↑ comment by [deleted] · 2012-01-01T21:09:46.752Z · LW(p) · GW(p)
Are those property rights transferable? Would you permit a market in infants?
Replies from: None, Bakkot, gwern↑ comment by [deleted] · 2012-01-02T09:23:15.615Z · LW(p) · GW(p)
Sure, adoption markets basically already exist, why not make them legal?
Wealthier people are better candidates on average: they can provide for material needs much better and will, on average, have a more suitable psychological profile. (We can impose legal screening of adopters too, so that they'd need to meet the other current criteria before they could legally buy on the adoption market, if "anyone can buy" makes you uncomfortable.) A market also provides incentives for people with desirable traits to breed, far more than merely subsidising their having kids of their own.
↑ comment by gwern · 2012-01-01T23:50:47.013Z · LW(p) · GW(p)
One of the standard topics in economic approaches to the law is the massive market failure caused by not permitting markets in infants; see, for example, Elisabeth Landes and Richard Posner's "The Economics of the Baby Shortage". I thought their analysis pretty convincing.
↑ comment by [deleted] · 2012-01-02T09:20:46.686Z · LW(p) · GW(p)
It causes me a certain level of distress when a baby is harmed or killed, even if it is of no relation to me. Many people (perhaps almost all people) experience a similar amount of distress.
Don't worry, in the right culture and society this distress would be pretty minor.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T09:32:23.248Z · LW(p) · GW(p)
I disagree with that statement on at least two points.
1) How can you so easily predict others' level of distress if you don't feel much distress from that source in the first place?
2) Don't forget about scale insensitivity. Don't forget that some scale insensitivity can be useful on non-astronomical scales, as it gives bounds to utility functions and throws a light on ethical injunctions.
Replies from: None↑ comment by [deleted] · 2012-01-02T10:28:02.789Z · LW(p) · GW(p)
1) How can you so easily predict others' level of distress if you don't feel much distress from that source in the first place?
Looking at other humans. Perhaps even humans in actually existing different cultures.
2) Don't forget about scale insensitivity. Don't forget that some scale insensitivity can be useful on non-astronomical scales, as it gives bounds to utility functions and throws a light on ethical injunctions.
This is a good counterpoint. I just think this principle is too easy to apply selectively, and so too easy to game as a metric, for us to put much weight on it in a preliminary discussion.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T10:42:53.130Z · LW(p) · GW(p)
Perhaps even humans in actually existing different cultures.
Ah, but the culture you'd want and are arguing for here is way, way closer to our current culture than to any existing culture where distress to people from infanticide is "minor"!
Replies from: None↑ comment by [deleted] · 2012-01-02T10:49:03.469Z · LW(p) · GW(p)
How can you be so sure? Historically speaking, infanticide is the human norm.
It is just the last few centuries that some societies have gotten all upset over it.
In some respects modern society is closer in norms to societies that practised infanticide 100 years ago than to Western society of 100 years ago, and we consider this a good thing. Why assume that no future changes will go in this direction? And that, likewise, we'll eventually consider those changes good?
It is certainly weak evidence in favour of a practice being nasty that societies which practice it are generally nasty in other ways. But it is just that, weak evidence.
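For what it's worth, "weak evidence" here is just a claim that the likelihood ratio is close to 1. A toy Bayesian update makes that concrete (every probability below is an invented number, purely for illustration):

```python
# Toy Bayesian update: how much should learning that a culture practices
# infanticide shift our credence that it is "nasty" in other ways?
# All numbers are invented for illustration only.

prior_nasty = 0.5                 # P(culture is nasty in other ways)
p_infanticide_given_nasty = 0.6   # P(practices infanticide | nasty)
p_infanticide_given_nice = 0.4    # P(practices infanticide | not nasty)

# Bayes' theorem: P(nasty | practices infanticide)
numerator = p_infanticide_given_nasty * prior_nasty
denominator = numerator + p_infanticide_given_nice * (1 - prior_nasty)
posterior = numerator / denominator

# A likelihood ratio of 0.6/0.4 = 1.5 only moves us from 0.5 to 0.6:
# evidence, but weak evidence.
print(round(posterior, 2))  # prints 0.6
```

With likelihoods further apart (say 0.9 vs 0.1) the same prior would jump to 0.9, which is what strong evidence looks like; the argument above is that the actual likelihoods are nowhere near that lopsided.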
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T10:51:23.140Z · LW(p) · GW(p)
In some respects modern society is closer in norms to societies that practised infanticide 100 years ago than to Western society of 100 years ago
Doesn't look that way to me at all, and never did. For every example you list (polyamory, etc) I bet I can find you a counterexample of equivalent strength.
Replies from: None, None↑ comment by [deleted] · 2012-01-02T10:57:48.884Z · LW(p) · GW(p)
For every example you list (polyamory, etc)
I think you mean "for every example you are likely to list", I didn't list any.
I bet I can find you a counterexample of equivalent strength.
What exactly would that accomplish? I said more similar in some respects, didn't I? I didn't say on net or overall.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T11:02:51.754Z · LW(p) · GW(p)
I think you mean "for every example you are likely to list", I didn't list any.
Yup.
I didn't say on net or overall.
That's the rub. I repeat my claim: the culture you want is, on net or overall, closer to our society than to societies that are OK with infanticide. It's evidence against your extrapolated-volition utopia being OK with infanticide. (unless I have absolutely zero understanding of Bayes)
Replies from: None, CharlieSheen↑ comment by [deleted] · 2012-01-02T11:05:30.178Z · LW(p) · GW(p)
sigh
That's the rub. I repeat my claim: the culture you want is, on net or overall, closer to our society than to societies that are OK with infanticide. It's evidence against your extrapolated-volition utopia being OK with infanticide. (unless I have absolutely zero understanding of Bayes)
Look two comments up.
It is certainly weak evidence in favour of a practice being nasty that societies which practice it are generally nasty in other ways. But it is just that, weak evidence.
↑ comment by CharlieSheen · 2012-01-02T11:12:58.312Z · LW(p) · GW(p)
That's the rub. I repeat my claim: the culture you want is, on net or overall, closer to our society than to societies that are OK with infanticide. It's evidence against your extrapolated-volition utopia being OK with infanticide. (unless I have absolutely zero understanding of Bayes)
Tsk tsk tsk, not very multicultural of you.
↑ comment by [deleted] · 2012-01-02T10:58:17.050Z · LW(p) · GW(p)
Please, please, just for a second, try to look at your own society as the alien one for the purposes of analysis, to ascertain is rather than should when it comes to such questions. I find this has helped me more than anything else in thinking about social questions and avoiding political thinking.
↑ comment by [deleted] · 2012-01-02T09:24:28.152Z · LW(p) · GW(p)
The more interesting question is what to do when parents disagree about infanticide and the complications that come about from custody.
Also, adoption contracts would probably need to include a "don't kill the baby I've given up" clause, lest some people be unwilling to give up children for adoption.
↑ comment by [deleted] · 2012-01-02T09:17:38.384Z · LW(p) · GW(p)
Because it's illegal to kill other people's pets or destroy their property? Duh.
Actually selling your baby on the adoption market should probably be legal too.
Replies from: wedrifid, Jayson_Virissimo, Multiheaded↑ comment by Jayson_Virissimo · 2012-01-02T10:13:01.065Z · LW(p) · GW(p)
Because it's illegal to kill other people's pets or destroy their property? Duh.
So, premeditated killing of someone else's child should be criminal damage rather than murder?
Replies from: wedrifid, None↑ comment by wedrifid · 2012-01-02T10:26:50.188Z · LW(p) · GW(p)
So, premeditated killing of someone else's child should be criminal damage rather than murder?
What monetary value does the child have, for the purpose of calculating damages I wonder? We should do early testing to see how much status the parents were likely to gain via the impressiveness of their possession in the future. Facial symmetry, genetic indicators...
Replies from: Estarlio, Strange7, Multiheaded↑ comment by Estarlio · 2012-06-05T03:50:37.607Z · LW(p) · GW(p)
The emotional investment a parent makes in their child must be huge, and the damages similarly so. It seems perfectly reasonable for a parent to say, "There's nothing available that I value more than I valued my child, consequently no sum of money will suffice to cover my damages. Whatever you give me it's still going to work out as a loss."
Replies from: wedrifid↑ comment by wedrifid · 2012-06-05T04:09:08.066Z · LW(p) · GW(p)
The emotional investment a parent makes in their child must be huge, and the damages similarly so. It seems perfectly reasonable for a parent to say, "There's nothing available that I value more than I valued my child, consequently no sum of money will suffice to cover my damages. Whatever you give me it's still going to work out as a loss."
This is reasoning we may use now. But it does not apply in the spirit of the weirdtopia where we evaluate children only as property without moral value beyond that.
Replies from: Estarlio↑ comment by Estarlio · 2012-06-05T04:36:08.409Z · LW(p) · GW(p)
Where did we start talking about weirdtopia?
Sewing-Machine says: Why not permit the killing of babies not your own, for the same reason?
Konkvistador says: Because its illegal to kill other people's pets or destroy their property? Duh.
Jayson_Virissimo says: So, premeditated killing of someone else's child should be criminal damage rather than murder?
And then we're back to the bit I was responding to. But we all seem to be talking about what should be the case, where we want to end up. The reasoning we can apply at the moment seems the relevant thing to that. If weirdtopia doesn't look like a place our reasoning would work, if we wouldn't want to live there.... Well, so much the worse for weirdtopia.
Replies from: wedrifid↑ comment by wedrifid · 2012-06-05T04:42:18.327Z · LW(p) · GW(p)
Where did we start talking about weirdtopia?
A weirdtopia. The premises that lead to the reasoning and conclusions here are premises I could only consider reasoning from when taking the perspective of a weird alternate reality. I certainly don't endorse anything we're talking about here myself, but I do suggest that those premises are incompatible with the nice-sounding "Whatever you give me it's still going to work out as a loss" kind of moral expression you mention - at least to the extent that such expressions are embedded in the law.
↑ comment by Strange7 · 2012-06-05T02:12:18.111Z · LW(p) · GW(p)
Testing would be a lot of work and potential corruption for comparatively little gain in nailing down the sig figs. The EPA is already willing to put an approximate dollar value on the life of a random citizen shortened by pollution (for cost-benefit purposes when evaluating proposed cleanup plans), so I'd say just estimate the average or typical value and use that as the standard, preferably showing your work well enough to allow adjustments over time or judicial discretion in unusual cases.
Replies from: wedrifid↑ comment by wedrifid · 2012-06-05T02:33:16.614Z · LW(p) · GW(p)
Testing would be a lot of work and potential corruption for comparatively little gain in nailing down the sig figs.
This is a situation in which "Speak for yourself!" would apply. In the weirdtopia where killing other people's children is criminal damage and such damages are calculated being able to prove higher value of said property would and should influence the amount of recompense they receive. For the same reason that Shane Warne could insure his finger for more than I could insure my finger an owner of an impressive child would be able to have that child evaluated and treated as a more valuable piece of property than an inferior child. They would aggressively and almost certainly successfully fight any attempt to make their child evaluated as a mediocre child.
Replies from: Strange7↑ comment by Strange7 · 2012-06-05T03:27:58.396Z · LW(p) · GW(p)
That's what I meant by 'judicial discretion in unusual cases.'
Setting the default value a standard deviation or three above the actual average would probably be sensible. Cuts down on expensive investigations and appeals, since most bereaved parents would realize on some level that they won't actually gain by nitpicking, and erring on the side of punitive damages would help appease the victim and discourage recklessness.
↑ comment by Multiheaded · 2012-01-03T17:09:04.643Z · LW(p) · GW(p)
Downvoted for sarcasm. I was under the impression that (unsubtle forms of) sarcasm in non-humorous discussions are outlawed on LW, and that's very OK with me.
Replies from: wedrifid, TheOtherDave↑ comment by wedrifid · 2012-01-03T18:29:19.277Z · LW(p) · GW(p)
Downvoted for sarcasm. I was under the impression that (unsubtle forms of) sarcasm in non-humorous discussions are outlawed on LW, and that's very OK with me.
Downvoted for being a wet blanket and for an incorrect assumption of sarcasm. If it's OK to talk about the implications of legalizing infanticide, then it's OK to follow the weirdtopia through and have fun with it. I adamantly refuse to take on a sombre tone just because people are talking about killing babies. I've paid my due diligence to the seriousness of babykilling with my expression of clear opposition; with that out of the way I am (and should be) free to join Konk and Jayson's counterfactual, wherein the actual logical implications of killing other people's non-person infants are considered.
On a related note - of all the movies I was forced to endure and study in high school the only one I don't resent as a boring waste of my time is Gattaca!
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T18:32:30.665Z · LW(p) · GW(p)
I'm being a fucking idiot tonight.
Replies from: wedrifid↑ comment by wedrifid · 2012-01-03T18:58:04.205Z · LW(p) · GW(p)
I'm being a fucking idiot tonight.
If I downvote you for calling a valuable lesswrong contributor a fucking idiot is that a compliment or a criticism? ;)
Replies from: TheOtherDave, Multiheaded↑ comment by TheOtherDave · 2012-01-03T19:51:35.496Z · LW(p) · GW(p)
If I tell you you have a perverse wit will you hold it against me?
↑ comment by Multiheaded · 2012-01-03T19:19:29.054Z · LW(p) · GW(p)
I'd never agree with being called a fucking-idiot-in-general! :D It's just an observation that my mind feels numb and sluggish tonight, probably because of the weather.
↑ comment by TheOtherDave · 2012-01-03T18:12:45.148Z · LW(p) · GW(p)
Leaving aside the amusing notion of LW outlawing sarcasm, I'm curious about how you concluded that wedrifid's comment was (unsubtle) sarcasm.
(Just to be clear: I'm not contesting your freedom to downvote the comment for that reason or any other, including simply being irritated by people saying such things about children.)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T18:18:32.406Z · LW(p) · GW(p)
He started "investigating" a child's value to parents with things like the status they could gain from it, instead of obvious things like their instinctive emotional response to it, etc. That's manifestly not what most parents think and feel like.
Replies from: wedrifid, Nornagest, TheOtherDave↑ comment by wedrifid · 2012-01-03T19:05:36.844Z · LW(p) · GW(p)
He started "investigating" a child's value to parents with things like the status they could gain from it, instead of obvious things like their instinctive emotional response to it, etc. That's manifestly not what most parents think and feel like.
Emotional distress caused does seem like another important consideration when calculating damages received for baby/property destruction. It probably shouldn't be the only consideration. Just like if I went and cut someone's arm off it would be appropriate to consider the future financial and social loss to that person as well as his emotional attachment to his arm.
It doesn't seem very egalitarian but it may be a bigger crime to cut off the arm of a world class spin bowler (or pitcher) than the arm of a middle manager. It's not like the latter does anything that really needs his arm.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T19:22:58.486Z · LW(p) · GW(p)
True enough, but it simply doesn't feel to me that a child can be meaningfully called "property" at all. Hell, I'm not completely sure that a pet dog can be called property.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-03T19:50:09.634Z · LW(p) · GW(p)
Hypothetical question: if my child expresses the desire to go live with some other family, and that family is willing, and in my judgment that family will treat my child roughly as well as I will, is it OK for me to deny that expressed desire and keep my child with me?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T19:59:01.017Z · LW(p) · GW(p)
(quick edit)
Yes, it's OK, just the same as with a mentally impaired relative under your care, and for roughly the same reasons.
Since said relative couldn't be considered property, neither does this judgment signify that children are property.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-03T20:10:20.264Z · LW(p) · GW(p)
OK, then... I suspect you and I have very different understandings of what being property entails. If you're interested in unpacking your understanding, I'm interested in hearing it.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T20:12:50.126Z · LW(p) · GW(p)
Ok, maybe later.
↑ comment by Nornagest · 2012-01-03T18:33:16.959Z · LW(p) · GW(p)
While I don't fully disagree, I'm not sure that's a meaningful objection. One implication of the status-signaling frame is that our instinctive emotional responses (among other cognitive patterns) are calibrated at least partly in terms of maximizing status; it doesn't require any conscious attention to status at all, let alone an explicit campaign of manipulation.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T18:40:15.681Z · LW(p) · GW(p)
Well, I think that self-signaling especially - and likely even signaling to very close people like family members too - is one of the basic needs of humans, and, being as entangled with human worldview as it is, deserves to be counted under the blanket term "emotional response".
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-03T19:26:24.689Z · LW(p) · GW(p)
Even granting that, it's still true that if Nornagest is right and my emotional responses are calibrated in terms of expected status-maximization, then it makes sense to consider emotional responses in terms of (among other things) status-maximization for legal purposes.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T19:52:52.477Z · LW(p) · GW(p)
We clearly need to find out what kinds of emotional responses are calibrated by what adaptations, and in what proportion. Nominating status-seeking as the most important human drive here, out of the blue, just seems unjustified to me at the moment.
Replies from: Nornagest, TheOtherDave↑ comment by Nornagest · 2012-01-03T20:13:26.173Z · LW(p) · GW(p)
There's a tradition of examining that frame here that's probably inherited from Overcoming Bias; it's related to a model of human cognitive evolution as driven primarily by political selection pressures, which seems fairly plausible to me. I should probably mention, though, that I don't think it's a complete model; it's fairly hard to come up with an unambiguous counterexample to it, but it shares with a lot of evo-psych the problem of having much more explanatory than predictive power.
I think it's best viewed as one of several complementary models of behavior rather than as a totalizing model, hence the "frame" descriptor.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T20:21:44.325Z · LW(p) · GW(p)
I described it as a frame because I think it's best viewed as one of several complementary models of behavior rather than as a totalizing model.
I have a suspicion that we'll only be able to produce any totalizing model that's much good after we crack human intelligence in general. I mean, look at all this entangled mess.
Replies from: Nornagest↑ comment by Nornagest · 2012-01-03T20:27:11.002Z · LW(p) · GW(p)
Well, "that's much good" is the tough part. It's not at all hard to make a totalizing model, and only a little harder to make one that's hard to disprove in hindsight (there are dozens in the social sciences) but all the existing ones I know of tend to be pretty bad at prediction. The status-seeking model is one of the better ones -- people in general seem more prone to avoiding embarrassment than to maximizing expected money or sexual success, to name two competing models -- but it's far from perfect.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T20:28:09.870Z · LW(p) · GW(p)
Yup. My point exactly.
↑ comment by TheOtherDave · 2012-01-03T20:25:28.095Z · LW(p) · GW(p)
Well, couching things in terms of status-signaling is conventional around here. But, sure, there are probably better candidates. Do you have anything in particular in mind you think should have been nominated instead?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T20:38:13.857Z · LW(p) · GW(p)
Nothing in particular, no, just skepticism. A (brief, completely uneducated) outside view of the field particularly suggests that elegant-sounding theories of the mind are likely to fail badly at prediction sooner or later.
↑ comment by TheOtherDave · 2012-01-03T18:32:08.477Z · LW(p) · GW(p)
Agreed on both counts, and thanks for clarifying.
For my own part, in the hypothetical context Konkvistador and Jayson_Virissimo established, of infanticide being a property crime, it seems at least superficially reasonable to consider how our legal system would assess damages for infanticide and how that would differ from the real world where infanticide isn't a property crime.
And evaluating the potential gain that could in the future be obtained by the destroyed property is a pretty standard way of assessing such damages, much as damages found if someone accidentally chops my arm off generally take into account my likely future earnings had I kept both arms.
So I guess I'm saying that while I'm fairly sure wedrifid was being ironic (especially since I think he's come out elsewhere as pro-babies and anti-infanticide on grounds other than potential gain to their parents), I found his use of irony relatively subtle.
Again, that doesn't in any way preclude your objecting to his post.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-03T18:35:10.909Z · LW(p) · GW(p)
The funny thing is, I haven't felt even a tingle of outrage/whatever, I only objected to tone, on a formal principle, for a stupid reason which seems to have already vanished somewhere.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-03T18:37:44.855Z · LW(p) · GW(p)
Nor was I inferring outrage.
↑ comment by [deleted] · 2012-01-02T10:23:13.806Z · LW(p) · GW(p)
Maybe.
Maybe we could just keep it murder, I don't know. There's no law (heh) saying we have to be consistent about this. In many places across the world, killing a pregnant woman is tried as a double murder (I think this includes some US states).
↑ comment by Multiheaded · 2012-01-02T09:21:32.776Z · LW(p) · GW(p)
Actually selling your baby on the adoption market should probably be legal too.
I weakly agree, if only for the reason that it sounds better than foster care and could well curb infanticide. On the other hand, in countries that have a problem with slavery it could weaken any injunction against slave trade, by the same argument as the one I support against infanticide. Or it could harm the sacredness of the child-parent bond in general. Well, on the whole it seems just about worth it to me, and no part of it even feels creepy or alarmingly counterintuitive.
Replies from: Strange7, None↑ comment by Strange7 · 2012-06-05T02:25:08.769Z · LW(p) · GW(p)
The slave trade thing might be prevented by specifically forbidding the quick or anonymous sale of children. Have the current and prospective parents jump through some hoops, get interviewed by a social worker, etc. and the whole thing thoroughly documented. Find an equilibrium that keeps the nonmonetary transaction costs high enough that low-level slave traders won't think it's worth the trouble to 'go legit,' and the paper trail thick enough that corrupt aristocrats won't want to take the risk of public humiliation, without actually making it more difficult for the beleaguered biological parents than raising an unwanted child themselves.
Replies from: wedrifid↑ comment by wedrifid · 2012-06-05T02:35:14.145Z · LW(p) · GW(p)
The ultimate slavery counter: red tape!
Replies from: Strange7↑ comment by Strange7 · 2012-06-05T03:13:46.277Z · LW(p) · GW(p)
Working from the assumption that slave-traders are in it for the money? Yeah. Slavery stops happening when it becomes more cost-effective to pay the workers directly, than to pay guards to coerce them.
The main use of slave labor is agriculture, because it's easy to have a large group within a single overseer's line of sight, and output is easy to measure. Child labor has historically succeeded there because of the low skill requirement, and because an individual child's lower productivity was matched by lower housing and food costs. If a child costs more to acquire than an adult - specifically if that difference in up-front costs outweighs the net present value of that slim productivity-per-upkeep-cost advantage - anyone who keeps using children for unpaid ag labor will simply be driven out of the market by competitors willing to do the math.
The application people worry about is sex. Police and prosecuting attorneys (in the US, at least) are already willing to resort to extremely dubious tactics to score a pedophile conviction; this would give them a legitimate audit trail to follow. Someone seeking to purchase a child for such purposes would not dare attract so much official attention... unless they were suicidally stupid, which is the sort of problem that solves itself.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-05T05:52:12.564Z · LW(p) · GW(p)
Slavery stops happening when it becomes more cost-effective to pay the workers directly, than to pay guards to coerce them.
Hell no, it does not; only the label might change. If the only employers are would-be slavers with no financial, public or moral pressure to look to their workers' welfare, then wage slavery is little better than traditional slavery - in fact it's often worse because a capitalist employer, unlike a slaver, has zero investment in a slave, drawing from a huge pool of unskilled manpower with no acquisition cost. You don't need any guards if a person has no choice but work for you, work for another employer like you or starve!
Replies from: Strange7, None↑ comment by Strange7 · 2012-06-05T07:37:36.362Z · LW(p) · GW(p)
I said "slavery stops," not "quality of life improves." Getting employers to compete in a way that benefits workers is a different problem, and obtaining for the workers the freedom to choose to starve (rather than, say, being executed as an example to others) is only the first step.
Quality of life for workers is also a very different problem from quality of life for open-market-adopted children, which was the original topic.
↑ comment by [deleted] · 2012-06-05T07:08:58.342Z · LW(p) · GW(p)
Link broken.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-05T07:13:42.810Z · LW(p) · GW(p)
Better now?
Replies from: Strange7↑ comment by Strange7 · 2012-06-05T07:31:47.217Z · LW(p) · GW(p)
no, still broken.
Replies from: Multiheaded, army1987↑ comment by Multiheaded · 2012-06-05T07:37:00.190Z · LW(p) · GW(p)
Changed the URL.
↑ comment by A1987dM (army1987) · 2012-06-05T07:35:36.526Z · LW(p) · GW(p)
It works for me.
↑ comment by Multiheaded · 2012-01-01T15:40:59.715Z · LW(p) · GW(p)
(edit)
I have the feeling that I've got to state the following belief in plain text:
Regardless of whether "babies are people" (and yeah, I guess I wouldn't call them that on most relevant criteria), any parent who proves able to kill their child while not faced with an unbearable alternative cost (a hundred strangers for an altruistic utilitarian, eternal and justified damnation for a deeply brainwashed believer) is damn near guaranteed to have their brain wired in a manner unacceptable to modern society.
Such wiring correlates so strongly with harmful, unsympathetic psychopaths that, if faced with a binary choice between murdering any would-be childkillers on sight and ignoring them, we should not waver in exterminating them. Of course, a better solution is a blanket application of unbounded social stigma as a first-line deterrent, plus individual treatment of each case, whether by an attempt at readjustment, isolation, or execution.
Replies from: soreff, juliawise, TheOtherDave↑ comment by soreff · 2012-01-01T15:59:45.910Z · LW(p) · GW(p)
harmful, unsympathetic psychopaths
There is another, quite different, situation where it happens: Highly stressed mothers of newborns.
The answer to this couldn't be more clear: humans are very different from macaques. We're much worse. The anxiety caused by human inequality is unlike anything observed in the natural world. In order to emphasize this point, Robert Sapolsky put all kidding aside and was uncharacteristically grim when describing the effects of human poverty on the incidence of stress-related disease.
"When humans invented poverty," Sapolsky wrote, "they came up with a way of subjugating the low-ranking like nothing ever before seen in the primate world."
This is clearly seen in studies looking at human inequality and the rates of maternal infanticide. The World Health Organization Report on Violence and Health reported a strong association between global inequality and child abuse, with the largest incidence in communities with "high levels of unemployment and concentrated poverty." Another international study published by the American Journal of Psychiatry analyzed infanticide data from 17 countries and found an unmistakable "pattern of powerlessness, poverty, and alienation in the lives of the women studied."
The United States currently leads the developed world with the highest maternal infanticide rate (an average of 8 deaths for every 100,000 live births, more than twice the rate of Canada). In a systematic analysis of maternal infanticide in the U.S., DeAnn Gauthier and colleagues at the University of Louisiana at Lafayette concluded that this dubious honor falls on us because "extreme poverty amid extreme wealth is conducive to stress-related violence." Consequently, the highest levels of maternal infanticide were found not in the poorest states, but in those with the greatest disparity between wealth and poverty (such as Colorado, Oklahoma, and New York, with rates 3 to 5 times the national average). According to these researchers, inequality is literally killing our kids.
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-01-01T16:16:11.689Z · LW(p) · GW(p)
Interesting. Having suspected that something along these lines was out there, I did mention the possibility of readjustment. However,
1) Sorry for this subset of childkillers and non-vindictive toward them as we might feel, we'd still have to give them some significant punishment, in order not to weaken our overall deterrent.
2) This still would hardly push anyone (me included) from "indiscriminate extermination" to "ignore" in a binary choice scenario.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T18:58:25.867Z · LW(p) · GW(p)
Replies from: TheOtherDave, Multiheaded↑ comment by TheOtherDave · 2012-01-01T19:14:33.306Z · LW(p) · GW(p)
I suspect that "babykilling is OK in and of itself, but it's a visible marker for psychosis and we want to justify taking action against psychotics and therefore we criminalize babykilling anyway" isn't a particularly stable thought in human minds, and pretty quickly decomposes into "babykilling is not OK," "psychosis is not OK," "babykillers are psychotic," a 25% chance of "psychotics kill babies," and two photons.
Replies from: AspiringKnitter, None, Multiheaded↑ comment by AspiringKnitter · 2012-01-01T22:07:54.554Z · LW(p) · GW(p)
I know it's stupid to jump in here, but you don't mean psychotic or psychosis. You mean psychopathic (a.k.a. sociopathic). Please don't lump the mentally ill together with evil murderers. Actual psychotic people are hearing voices and miserable, not gleefully plotting to kill their own children. You're thinking of sociopaths. Psychotics don't kill babies any more than anyone else. It's sociopaths who should all be killed or otherwise removed from society.
Replies from: cousin_it, PhilosophyTutor, ahartell, TheOtherDave, Will_Newsome↑ comment by cousin_it · 2012-01-02T23:21:43.878Z · LW(p) · GW(p)
Some of the traits listed on the wikipedia page for psychopathy are traits that I want and have modified myself towards:
Psychopaths do not feel fear as deeply as normal people and do not manifest any of the normal physical responses to threatening stimuli. For instance, if a normal person were accosted in the street by a gun-wielding mugger, he/she might sweat, tremble, lose control of his/her bowels or vomit. Psychopaths feel no such sensations, and are often perplexed when they observe them in others.
Psychopaths do not suffer profound emotional trauma such as despair. This may be part of the reason why punishment has little effect on them: it leaves no emotional impression on them. There are anecdotes of psychopaths reacting nonchalantly to being sentenced to life in prison.
Some psychopaths also possess great charm and a great ability to manipulate others. They have fewer social inhibitions, are extroverted, dominant, and confident. They are not afraid of causing offense, being rejected, or being put down. When these things do happen, they tend to dismiss them and are not discouraged from trying again.
↑ comment by PhilosophyTutor · 2012-01-03T01:56:07.577Z · LW(p) · GW(p)
It's sociopaths who should all be killed or otherwise removed from society.
Lots of sociopaths as the term is clinically defined live perfectly productive lives, often in high-stimulation, high-risk jobs that neurotypical people don't want to do like small aircraft piloting, serving in the special forces of their local military and so on. They don't learn well from bad experiences and they need a lot of stimulation to get a high, so those sorts of roles are ideal for them.
They don't need to be killed or removed from society, they need to be channelled into jobs where they can have fun and where their psychological resilience is an asset.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-03T02:34:54.931Z · LW(p) · GW(p)
Huh, okay. Thanks.
↑ comment by ahartell · 2012-01-02T22:44:21.552Z · LW(p) · GW(p)
Aren't sociopaths mentally ill too?
Replies from: juliawise, None↑ comment by juliawise · 2012-01-02T23:16:38.112Z · LW(p) · GW(p)
Yes, but people with different types of illness vary in whether they are likely to kill other people, which is the question here. This metastudy found half of male criminals have antisocial personality disorder (including sociopaths and psychopaths) and less than 4% have psychotic disorders. In other words, criminals are unlikely to be people who have lost touch with reality and more likely to be people who just don't care about other people.
Replies from: ahartell↑ comment by [deleted] · 2012-01-02T23:00:59.754Z · LW(p) · GW(p)
If you say they are, it's in a totally different way. Taboo "mentally ill".
Replies from: ahartell↑ comment by ahartell · 2012-01-02T23:38:08.852Z · LW(p) · GW(p)
I was being a bit pedantic. When she says "don't lump the mentally ill together with evil murderers" I think she means "don't lump [psychotic] people in with evil murderers", which I don't disagree with. However, not all sociopaths are evil murderers. I would even say it's wrong to lump these mentally ill sociopaths together with evil murderers.
In other words, AspiringKnitter,
Please don't lump the mentally ill together with evil murderers.
Replies from: AspiringKnitter, wedrifid
↑ comment by AspiringKnitter · 2012-01-03T00:15:34.618Z · LW(p) · GW(p)
Okay. I've never heard of any non-evil sociopaths before, but I'll accept that they exist if you tell me they do.
What I meant was indeed that psychotic people aren't any more evil on average than normal people. The point is irrelevant to the thread, but I make it wherever it needs to be made because conflating the two isn't just sloppy, it harms real people in real life.
Replies from: None, ahartell↑ comment by [deleted] · 2012-01-03T00:31:40.699Z · LW(p) · GW(p)
I think many sociopaths become high-powered businesspeople.
The other thing that "harms people in real life" is saying stuff like "sociopaths should all be killed or otherwise removed from society". To say such things, you must override your moral beliefs, which is not a good habit to be in, and not a good image of yourself to cache.
Replies from: Bugmaster, AspiringKnitter, wedrifid↑ comment by Bugmaster · 2012-01-04T04:52:23.351Z · LW(p) · GW(p)
To say such things, you must override your moral beliefs, which is not a good habit to be in, and not a good image of yourself to cache.
This may be a nitpick, but it's not clear to me that "removing all sociopaths from society" will even be beneficial to the remaining society. It's entirely possible that our society requires a certain number of sociopaths in order to function.
I have no hard evidence one way or the other, but I'm pretty sure that, historically, plans that involved "remove all X from society" turned out very poorly, for any given X.
Replies from: None↑ comment by [deleted] · 2012-01-05T15:29:46.391Z · LW(p) · GW(p)
Yeah, good point. Not all sociopaths are murderers; just cut out the middleman and do whatever with the murderers.
Proxy tests (are you a sociopath, are you black, do you have a shaved head, etc) are a terrible idea.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-05T15:33:21.023Z · LW(p) · GW(p)
By "murderer," here, do you mean someone who has actually committed murder?
Replies from: None↑ comment by [deleted] · 2012-01-05T21:18:38.935Z · LW(p) · GW(p)
Yeah, mostly. Though it would be nice to catch murderers before they kill anyone. At this point, though, I don't think we are generally rational enough to figure out in advance who the murderers are without huge collateral disutility.
I'm going to stop discussing this because it is about to get dangerously mindkillery.
Replies from: Strange7↑ comment by Strange7 · 2012-06-05T05:05:38.901Z · LW(p) · GW(p)
Depends how far in advance you're looking. Aiming a loaded gun, or charging forward while screaming and brandishing something sharp and heavy, provide very solid evidence before any injury is done, and modern medicine can turn what seemed like successful murder back into 'attempted' by making it possible to recover after the injury.
↑ comment by AspiringKnitter · 2012-01-03T01:36:48.656Z · LW(p) · GW(p)
The other thing that "harms people in real life" is saying stuff like "sociopaths should all be killed or otherwise removed from society". To say such things, you must override your moral beliefs, which is not a good habit to be in, and not a good image of yourself to cache.
Good point, although actually, my moral beliefs are consequentialist, and therefore actually formulated as "prevent the greatest possible number of murders" rather than "kill the fewest possible people personally", so it's not actually accurate to say I have to override moral beliefs to advocate removing sociopaths from society. But I guess the best idea is to neutralize the threat they pose while still giving them a chance at redemption. You're right.
I think many sociopaths become high-powered businesspeople.
I thought most high-powered businesspeople were evil. XP
Replies from: None↑ comment by [deleted] · 2012-01-03T23:03:33.110Z · LW(p) · GW(p)
my moral beliefs are consequentialist, and therefore actually formulated as "prevent the greatest possible number of murders" rather than "kill the fewest possible people personally", so it's not actually accurate to say I have to override moral beliefs to advocate removing sociopaths from society.
Of course. I agree that one death is preferable to many, no matter who or what does the killing. I am talking about the effects on yourself of endorsing murder, and possibly the less noble real reason you chose that solution.
Maybe you have observed what I am talking about: people having to steel themselves against their moral intuitions when they say or do certain things. You can see it in their faces: a grim, slightly sadistic hatred; I call it the "murder face". I don't think people do this because they are strict utilitarians. The murder face is not the reaction you would expect from a utilitarian reluctantly deciding that someone has to be executed.
I don't think you said "sociopaths should all be killed or otherwise removed from society" for strictly utilitarian reasons either. I would expect a utilitarian to stress out and shit themselves for a few days (or as long as they had, up to years) trying to think of some other way to solve the problem before they would ever even think of murder.
The thing is, trades of one life for many are nearly always false dichotomies. There is some twisted way that humans are unjustifiably drawn to consider murder without even trying to consider alternatives. See the sequence on ethical injunctions.
Through the known mechanisms of self-image, cached thoughts, and so on, proposing murder as a solution just makes this problem worse in the future. You literally become less moral by saying that.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-04T04:39:54.668Z · LW(p) · GW(p)
I would expect a utilitarian to stress out and shit themselves for a few days (or as long as they had, up to years) trying to think of some other way to solve the problem before they would ever even think of murder.
But I don't have to solve the problem. Whatever I think of regarding sociopaths is pretty pointless, since I won't have the chance to act on it anyway. Even if, after considering all the other possibilities, I decided that really was definitely the best course of action (which I'm not certain of; note that I've always qualified it with "or otherwise removed from society", which could include all sorts of other possibilities), I doubt that I personally would be able to do it, and if I did I would go to jail, and I don't want that either. So for me to say it is as easy as the trolley problem (am I the only person for whom the trolley problem is easy?).
The thing is, trades of one life for many are nearly always false dichotomies. There is some twisted way that humans are unjustifiably drawn to consider murder without even trying to consider alternatives. See the sequence on ethical injunctions.
Thank you. If I'm ever in a position where killing someone is a course of action that's even on my radar as something to consider, I'll bear that in mind.
Thru the known mechanisms of self-image, cached thoughts and so on, proposing murder as a solution just makes this problem worse in the future. You literally become less moral by saying that.
Thank you for pointing that out. Just for the record, not killing people is one of my terminal values, and if I'm ever in a position to deal personally with the sociopath problem, I'll be considering the other possibilities first.
↑ comment by ahartell · 2012-01-03T00:27:35.726Z · LW(p) · GW(p)
Yeah, my understanding is that they exist. Just wondering, how would you expect to hear about a non-evil sociopath?
Yeah, I'm totally on board with you there (though I'm not really fond of the word evil). I remember hearing that psychotic people are much more likely to hurt themselves than average, but not more likely to hurt others. And yeah, it's bad to consider them to be "evil" when they're not or to contribute to a societal model of them that does the same.
↑ comment by wedrifid · 2012-01-03T00:52:16.717Z · LW(p) · GW(p)
I was being a bit pedantic. When she says "don't lump the mentally ill together with evil murderers" I think she means "don't lump [psychotic] people in with evil murderers", which I don't disagree with. However, not all sociopaths are evil murderers. I would even say it's wrong to lump these mentally ill sociopaths together with evil murderers.
Are we talking about psychotic people here or sociopaths (psychopaths)? The two are vastly different. Or are you saying that neither psychotic people nor sociopaths are necessarily evil?
Replies from: ahartell↑ comment by TheOtherDave · 2012-01-01T22:13:19.368Z · LW(p) · GW(p)
OK.
↑ comment by Will_Newsome · 2012-01-02T23:00:43.384Z · LW(p) · GW(p)
(It's odd how the words "schizophrenic" and "psychotic" bring up such different connotations even though schizophrenia is the poster-child of psychosis. (Saying this as a schizotypal person with "ultra high risk" of schizophrenia.))
↑ comment by [deleted] · 2012-01-01T19:54:47.531Z · LW(p) · GW(p)
Where did the two photons come from?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T20:02:39.560Z · LW(p) · GW(p)
The photons come from unjustified pattern-matching.
Replies from: None↑ comment by Multiheaded · 2012-01-01T19:40:46.452Z · LW(p) · GW(p)
Exhibit A: me.
↑ comment by Multiheaded · 2012-01-01T19:16:09.131Z · LW(p) · GW(p)
In the end, I just feel that it's incompatible with my terminal values, one way or the other.
↑ comment by juliawise · 2012-01-01T16:16:48.607Z · LW(p) · GW(p)
Infanticide has been considered a normal practice in a lot of cultures. The Greeks and Romans, for example, don't seem to have been run down by psychopaths.
I don't think we have a good way to know about the later harmful actions of people who kill their infants. Either we find them out and lock them up, in which case their life is no longer really representative of the population, or we don't know about what they've done.
Replies from: Multiheaded, Multiheaded, Strange7↑ comment by Multiheaded · 2012-01-01T16:51:52.415Z · LW(p) · GW(p)
I've managed to overlook the most important (and fairly obvious) thing, though!
If the idea of "childkilling=bad" is weakly or not at all ingrained in a culture, it's easy to override both one's innate and cultural barriers to kill your child, so most normally wired people would be capable of it => the majority of childkillers are normal people.
If it's ingrained as strongly as in the West today, there would be few people capable of overriding such a strong cultural barrier => the majority of remaining childkillers would be the ones who had no barriers in the first place, i.e. largely harmful, unsympathetic psychopaths. The others would have an abnormally strong will to override barriers and self-modify, which can easily make them just as dangerous.
Replies from: juliawise, soreff↑ comment by juliawise · 2012-01-01T17:33:44.417Z · LW(p) · GW(p)
Okay, got it. I agree that in a culture that condemns infanticide, people who do it anyway are likely to be quite different from the people who don't. But Bakkot's claim was that our culture should allow it, which should not be expected to increase the number of psychopaths.
I'm also not sure that unbounded social stigma is an effective way to deter people who essentially don't care about other people. We don't really know of good ways to change psychopathy.
(edited for clarity)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-01T17:52:57.081Z · LW(p) · GW(p)
But Bakkot's claim was that our culture should allow it, which should not be expected to increase the number of psychopaths.
First, any single relaxed taboo feels to me like a blow against the entire net of ethical inhibitions, both in a neurotypical person and in a culture (proportional to the taboo's strength and gravity, that is). Therefore, I think it could be a slippery slope into antisociality for some people who previously behaved acceptably. Second, we could be removing one of our filters for existing psychopaths while giving them a safe opportunity to let their disguise down. Easier for them to evade us, harder for us to hunt them down.
I'm also not sure that unbounded social stigma is an effective way to deter people who essentially don't care about other people.
Successful psychopaths do understand that society's opinion of them can affect their well-being; that is why they bother to conceal their abnormality in the first place.
Replies from: juliawise, TimS↑ comment by juliawise · 2012-01-01T18:41:35.653Z · LW(p) · GW(p)
If "hunting down" psychopaths is our goal, we'd do better to look for people who torture or kill animals. My understanding is that these behaviors are a common warning sign of antisocial personality disorder, and I'm sure it's more common than infanticide because it's less punished. Would you advocate punishing anyone diagnosed with antisocial personality right away, or would you want to wait until they actually committed a crime?
I'd put taboos in three categories. Some taboos (e.g. against women wearing trousers, profanity, homosexuality, or atheism) seem pointless, and we were right to relax them. Some taboos, like those against theft and murder, I agree we should hold in place, because relaxing them would produce so little value for the harm it would cause. Some, like those against extramarital sex and abortion, are more ambiguous. They probably allow some people to get away with unnecessary cruelty, but because of the personal freedom they create, I think they produce a net good.
I put legalized infanticide in the third category. I gather you put it in the second? In other words, do you believe the harm it would create from psychopaths killing babies and generally being harder to detect would be greater than the benefit to people who don't raise unwanted children?
Replies from: Multiheaded, Multiheaded↑ comment by Multiheaded · 2012-01-01T19:06:56.175Z · LW(p) · GW(p)
In other words, do you believe the harm it would create from psychopaths killing babies and generally being harder to detect would be greater than the benefit to people who don't raise unwanted children?
I believe that legalized infanticide would be harmful, at least, to our particular culture for many reasons, some of which I'm sure I haven't even thought of yet. I'm not even sure whether the strongest reason for not doing it is connected to psychopathic behaviour at all. Regardless, I'm certain about fighting it tooth and nail if need be, at at least a 0.85.
By the way, have you considered the general memetic chaos that would erupt in Western society if somehow infanticide was really, practically made legal?
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T19:18:34.546Z · LW(p) · GW(p)
Replies from: TheOtherDave, Multiheaded↑ comment by TheOtherDave · 2012-01-01T19:34:21.483Z · LW(p) · GW(p)
Huh. I don't follow the reasoning. Why do you expect social stigma attached to infanticide to correlate with less fun?
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T19:48:00.879Z · LW(p) · GW(p)
Replies from: Nornagest, NancyLebovitz, TheOtherDave↑ comment by Nornagest · 2012-01-06T19:16:09.772Z · LW(p) · GW(p)
More broadly, I think having fewer things prohibited correlates with more fun unless there's some reason the prohibition increases the amount of fun in the universe.
That's pretty much tautological -- you could as well express it as "forbidding things correlates with more fun unless there's some reason allowing something increases the amount of fun in the universe". What you really need for this argument to work is a way of showing that people attach intrinsic utility to increased latitude of choice, which in light of the paradox of choice looks questionable.
Replies from: Bakkot↑ comment by NancyLebovitz · 2012-01-06T17:57:53.595Z · LW(p) · GW(p)
Aside from any other possible issues, you're leaving out the possibility that one person may want to kill a baby that another person is very attached to.
Do you have an age or ability level at which you think being a person begins?
Replies from: Bakkot, Multiheaded↑ comment by Bakkot · 2012-01-07T05:42:50.649Z · LW(p) · GW(p)
Replies from: Caspian, wedrifid↑ comment by Caspian · 2012-01-07T14:09:19.533Z · LW(p) · GW(p)
I expect this proposal could be taken seriously: when an owner wants to have a pet put down other than for humanitarian reasons, others who have had a close relationship to the pet, and are willing and able to take responsibility for it, get the right to veto and take custody of the pet.
Ways in which Nancy's argument was not exactly like arguing that abortion should be illegal because other people might have gotten attached to the fetus:
- She didn't say: therefore it should be completely prohibited.
- There can be more interaction by non-mothers with a baby than a fetus.
I'm not sure how much I will participate on this topic, it seems like a bit of a mind killer. I'm impressed we've found a more volatile version of the notorious internet abortion debate.
Replies from: MixedNuts, Multiheaded, Multiheaded↑ comment by MixedNuts · 2012-01-07T22:56:09.345Z · LW(p) · GW(p)
The standard reply to "But I like your fetus, don't kill it!" is "I'd let you have it, but we don't have the tech for me to give it to you now. My only options are going through several months of pregnancy plus labor, or killing it now. So down the drain it goes.". This suggests that inasmuch are there are people attached to fetuses not inside themselves, we should work on eviction tech.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-07T23:02:01.211Z · LW(p) · GW(p)
Or, in any even slightly libertarian weirdtopia, it could be a matter of compensation for bearing the child.
Replies from: MixedNuts, AspiringKnitter↑ comment by MixedNuts · 2012-01-07T23:14:47.914Z · LW(p) · GW(p)
That's legal now (though we tend to offer status and supportive work like childcare, not money). Libertarianism mandates that refusing the transaction at any price and aborting also remains legal (unless embryos turn out to be people at typical abortion age, in which case they are born in debt).
↑ comment by AspiringKnitter · 2012-01-07T23:12:04.534Z · LW(p) · GW(p)
To which I can see people responding by getting pregnant, getting others attached, threatening abortion and collecting compensation just to make money. Especially if pro-lifers run around paying off as many would-be aborters as possible.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-08T09:11:53.786Z · LW(p) · GW(p)
Maybe. Maybe society would create new norms to fix that.
I'd like to mention that I'm emphatically not a libertarian (in fact I identify as socialist), and find many absurdities in its basic concept (see Yvain's "Why I Hate Your Freedom"); however, I'd always like to learn more about how it could plausibly work from its proponents, and am ready to shift towards it if I hear some unexpectedly strong arguments.
↑ comment by Multiheaded · 2012-01-07T21:06:21.484Z · LW(p) · GW(p)
I'm impressed we've found a more volatile version of the notorious internet abortion debate.
Odd to hear that about a community upon which one member unleashed an omnipotent monster from the future that could coerce folks who know the evidence for its existence to do its bidding. And where, upon an attempt to lock said monster away, about 6000 random people were sorta-maybe-kinda-killed by another member as retaliation for "censorship".
:D
↑ comment by Multiheaded · 2012-01-07T22:51:28.085Z · LW(p) · GW(p)
(take a stupid picture I made, based on this)
↑ comment by Multiheaded · 2012-01-06T21:40:40.415Z · LW(p) · GW(p)
Aside from any other possible issues, you're leaving out the possibility that one person may want to kill a baby that another person is very attached to.
Indeed. Look at a scenario like this. What if an adventurous young woman gets an unintended pregnancy and initially decides to have the child, and many of her friends and her family are looking forward to it... then either the baby is crippled during birth, or the mother simply changes her mind, unwilling to adapt her lifestyle to accommodate child-rearing, yet for some weird reason (selfish or not) refuses to give the baby up for adoption?
Suppose that she tells the doctor to euthanize the baby. Consider the repercussions in her immediate circle, e.g. what would be her mother's reaction upon learning that she's a grandmother no more (even if she's told that the baby died of natural causes... yet has grounds to suspect that it didn't)?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-06T21:55:31.047Z · LW(p) · GW(p)
Completely independent of any of the rest of this, I absolutely endorse the legality of lying to people about why my child died, as well as the ethics of telling them it's none of their damned business, with the possible exception of medical or legal examiners. I certainly endorse the legality of lying to my mother about it.
Further, I would be appalled by someone who felt entitled to demand such answers of a mother whose child had just died (again, outside of a medical or legal examination, maybe) and would endorse forcibly removing them from the presence of a mother whose child has just died.
I would not endorse smacking such a person upside the head, but I would nevertheless be tempted to.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-06T22:03:25.027Z · LW(p) · GW(p)
Crap, now that was ill-thought-out. Yeah, definitely agreed. I removed the last two sentences. The rest of my argument for babies occasionally having great value to non-parents still stands.
↑ comment by TheOtherDave · 2012-01-01T19:59:05.488Z · LW(p) · GW(p)
If I kill a person, the number of Fun-having-person-moments in the universe is reduced by the remaining lifetime that person would potentially have had. If I kill a baby, the number of Fun-having-person-moments in the universe is reduced by the entire lifetime of the person that baby would potentially have become.
Reasoning sensibly about counterfactuals is hard, but it isn't clear to me why the former involves less total Fun than the latter does. If anything, I would expect that removing an entire lifetime's worth of Fun-having reduces total Fun more than removing a fraction of a lifetime's worth.
Replies from: Bakkot, wedrifid↑ comment by Bakkot · 2012-01-01T20:09:43.779Z · LW(p) · GW(p)
Replies from: army1987, TheOtherDave, wedrifid, Multiheaded, Multiheaded↑ comment by A1987dM (army1987) · 2012-01-06T21:40:13.505Z · LW(p) · GW(p)
If I believed the only reason nobody has killed me yet is because it is illegal to kill people, I wouldn't be very happy.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-07T05:28:08.264Z · LW(p) · GW(p)
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-01-07T12:11:38.058Z · LW(p) · GW(p)
I mean that a world where there is someone who would want to kill me, and the only reason they don't is that they're afraid of ending up in jail, is not much of a world I'd like to live in.
Replies from: orthonormal, MixedNuts↑ comment by orthonormal · 2012-01-07T17:01:34.104Z · LW(p) · GW(p)
It's not that anyone hates you; they might kill you because they're afraid of you killing them first, if there were no legal deterrent against killing.
In particular, if you had any conflict with someone else in a world where killing was legal, it would quite possibly spiral out of control: you're worried they might kill you, so you're tempted to kill them first, but you know they're thinking the same way, so you're even more worried, etc.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-01-07T17:24:26.753Z · LW(p) · GW(p)
It's not that anyone hates you; they might kill you because they're afraid of you killing them first, if there were no legal deterrent against killing.
At least in my country, killing someone for self-defence is already legal. (Plus, I don't think I'm going to threaten to kill someone in the foreseeable future, anyway.)
Replies from: TheOtherDave, orthonormal↑ comment by TheOtherDave · 2012-01-07T17:26:36.388Z · LW(p) · GW(p)
I'm not sure where you live, but is killing someone who you think will try to kill you some day actually considered self-defense for legal purposes there? I'm pretty sure self-defense doesn't cover that in the US.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-01-07T18:47:18.277Z · LW(p) · GW(p)
No. I guess I misunderstood what orthonormal meant by “afraid of you killing them first”...
↑ comment by orthonormal · 2012-01-07T17:31:34.457Z · LW(p) · GW(p)
At least in my country, killing someone for self-defence is already legal.
Right, but "I accidentally ran over his dog, and I was worried that he might kill me later for it, so I immediately backed up and ran him over" probably won't count as self-defense in your country. But it's the sort of thing that traditional game theory would advise if killing was legal.
This really is a case where imposing an external incentive can stop people from mutually defecting at every turn.
(Plus, I don't think I'm going to threaten to kill someone in the foreseeable future, anyway.)
If killing were legal (in a modern state with available firearms, not an ancient tribe with strong reputation effects), threatening to kill someone would be the stupidest possible move. Everyone is a threat to kill you, and they'll probably attempt it the moment they become afraid that you might do the same.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-01-07T18:55:23.122Z · LW(p) · GW(p)
But it's the sort of thing that traditional game theory would advise if killing was legal.
I don't get it... He wouldn't gain anything by killing you (ETA: other than what your father/wife/whoever would gain by killing him after he kills you), so why would you be afraid he would do that? (Also, I'm not sure the assumptions of traditional game theory apply to humans.)
This really is a case where imposing an external incentive can stop people from mutually defecting at every turn.
If this was the case, I would expect places with less harsh penalties, or with lower probabilities of being convicted, to have a significantly higher homicide rate (all other things being equal). Does anyone have statistics about that? (Though all other things are seldom equal... Maybe the short/medium term effects of a change in legislation within a given country would be better data.)
Replies from: orthonormal↑ comment by orthonormal · 2012-01-07T19:07:47.584Z · LW(p) · GW(p)
If this was the case, I would expect places with less harsh penalties, or with lower probabilities of being convicted, to have a significantly higher homicide rate (all other things being equal). Does anyone have statistics about that?
I haven't read it yet, but I think this is basically the thesis of Steven Pinker's The Better Angels of Our Nature.
I don't get it... He wouldn't gain anything by killing you, so why would you be afraid he would do that? (Also, I'm not sure the assumptions of traditional game theory apply to humans.)
Have you seen The Dark Knight? This is exactly the situation with the two boats. (Not going into spoiler-y detail.) Causal decision theory demands that you kill the other person as quickly and safely (to you) as possible, just as it demands that you always defect on the one-shot (or known-iteration) Prisoner's Dilemma.
Anyway, I think you shouldn't end up murdering each other even in that case, and if everyone were timeless decision theorists (and this was mutual knowledge) they wouldn't. But among humans? Plenty of them would.
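The one-shot Prisoner's Dilemma logic invoked here can be sketched with a toy payoff matrix. (The numbers below are illustrative assumptions, not anything from the thread; they just satisfy the standard PD ordering.) Defection strictly dominates on its own, but a large enough external penalty for defecting — the "legal deterrent" of the exchange above — flips the best response to cooperation:

```python
# Toy one-shot Prisoner's Dilemma.
# Row player's payoff for each (own action, other's action) pair.
# Payoff values are illustrative assumptions.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # you cooperate, they defect
    ("D", "C"): 5,  # you defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def best_response(their_action, penalty=0):
    """Action maximizing payoff, minus an external penalty applied to defecting."""
    return max("CD", key=lambda a: PAYOFF[(a, their_action)] - (penalty if a == "D" else 0))

# Without an external incentive, defection dominates whatever the other does:
assert best_response("C") == "D"
assert best_response("D") == "D"

# A sufficiently harsh external penalty makes cooperation the best response:
assert best_response("C", penalty=4) == "C"
assert best_response("D", penalty=4) == "C"
```

This is only a sketch of the causal-decision-theory argument; it says nothing about timeless decision theorists, who (as noted above) could cooperate without the penalty.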
↑ comment by MixedNuts · 2012-01-07T12:28:23.308Z · LW(p) · GW(p)
As opposed to where? We can ban or allow murder. We can't yet do personality modifications that deep.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-01-07T13:22:26.050Z · LW(p) · GW(p)
As opposed to this world. I don't think that, right now, there's anyone who would want to kill me.
We can't yet do personality modifications that deep.
So, if Alice murdered Bob, she had always wanted to kill him since she was born (as opposed to her having changed her mind at some point)? Probably we can't deliberately do personality modifications that deep (or do we? The results of Milgram's experiment lead me to suspect it wouldn't be completely impossible for me to convince someone to want to kill me -- not that I can imagine a reason for me to do that).
↑ comment by TheOtherDave · 2012-01-01T20:20:46.066Z · LW(p) · GW(p)
(shrug) We're both neglecting lots of things; we couldn't have this conversation otherwise.
I agree with you that the risk of being killed reduces Fun, at least in some contexts. (It increases Fun in other contexts.) Then again, the risk of my baby being killed reduces Fun in some contexts as well. I don't see any principled reason to consider the first factor in my calculations and not the second (or vice-versa), other than the desire to justify a preselected conclusion.
I agree that it's not clear that adding a person to the universe increases the amount of Fun down the line. It's also not clear that subtracting a person from the universe reduces the amount of Fun. Reasoning sensibly about counterfactuals is hard.
Replies from: Multiheaded, Bakkot↑ comment by Multiheaded · 2012-01-01T20:58:29.621Z · LW(p) · GW(p)
Then again, the risk of my baby being killed reduces Fun in some contexts as well.
You've struck on something here (taking into account your update about the risk only coming from yourself):
1) Under the current system, parents are somewhat Protected From Themselves. What if a mother, while in a transient state of affect, consciously and subconsciously knew that she was allowed to kill her baby, so she did it, and then was hit with regret and remorse?
2) Under the current system, parents feel like society is pressuring them not to commit especially grave failures of parenting, which gives them a feeling of fairness.
Replies from: daenerys, TheOtherDave↑ comment by daenerys · 2012-01-01T21:26:17.367Z · LW(p) · GW(p)
If the only thing stopping a parent from killing their child is the illegalization of said act, then they shouldn't be parents anyway. If you can't control yourself with an infant, then the probability is pretty high that you are going to be some type of abusive parent. The child is likely going to be a net drain on society because of the low level of upbringing.
It is probably better for the baby (and society) for it to be killed while it is a blicketless infant, than to grow up under the "care" of such a parent.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-01T21:34:00.888Z · LW(p) · GW(p)
I can easily visualize that, in our world, some very quickly passing once-in-a-lifetime temptation to get rid of an infant is experienced by many even slightly unstable or emotionally volatile parents, then forgotten.
Would you really want to give that temptation a chance to realize itself in every case when the (appropriately huge - we're talking about largely normal people here) social stigma extinguishes the temptation today?
Oh, and in no way is it "only the illegalization"; it's the meme in general too.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-06T16:40:28.443Z · LW(p) · GW(p)
Maybe.
Suppose, for example, that what you're describing here as instability/emotional volatility -- or, more operationally, my likelihood of doing something unrecoverable-from which I generally abhor based on a very quickly passing once-in-a-lifetime temptation -- is hereditable (either genetically or behaviorally, it doesn't matter too much).
In that case, I suspect I would rather that infants born to emotionally volatile/unstable parents ten million years ago had not matured to breeding age, as I'd rather live in a species that's less volatile in that way. So it seems to follow that if the social stigma is a social mechanism for compensating for such poor impulse control in humans, allowing humans with poor impulse control to successfully raise their children, I should also prefer that that stigma not have been implemented ten million years ago.
Of course, I'm not nearly so dispassionate about it when I think about present-day infants and their parents, but it's not clear to me why I should endorse the more passionate view.
Incidentally, I also don't think your hypothetical has much to do with the real reasons for an infanticide social stigma. I support the meme, I just don't think this argument for it holds water.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-06T21:23:39.300Z · LW(p) · GW(p)
Sorry, but I don't like your reasoning.
- Emotionally volatile people shouldn't be automatically assumed to fail upon most such temptations, after all (when they fail in a big way, that's when we hear about it the most), and might not even be a net negative for society in other spheres (although yeah, they probably are... still, it's awfully cold just to unapologetically thin their numbers with eugenics. I know that a lot of things LWians (incl. me) would do or intend to do are awfully cold, but hell, this one concerns me directly!).
- The "volatility" of one's behavior is a sum of the individual's psychological make-up - which might or might not be largely hereditary - and the weakness or strength of one's tendency for self-control - which is definitely largely cultural/environmental.
Look at the Far Eastern and Scandinavian societies. Wouldn't an emotionally unstable person being raised in one of them be trained to control their emotions to a much greater degree than e.g. in Southern Europe?
Further, on the "hereditability" part: I'm really emotionally unstable (as you might have witnessed), but my parents are really stable and cool-headed most of the time; however, my aunt on my mother's side is a whole lot like me. I attribute most of my mental weirdness to birth trauma (residual encephalopathy, I don't know if it's pre- or post-natal), but I don't know whether part of it might be due to some recessive gene that manifested in my aunt and me, but not at all in my mother.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-06T21:49:04.901Z · LW(p) · GW(p)
I agree that we shouldn't assume that emotionally volatile people fail upon most such temptations.
I agree that my reasoning here is cold (indeed, I said as much myself, though I used the differently-loaded word "dispassionate").
I agree that if impulse control is generally nonhereditable (and, again, I don't just mean genetically), the argument I use above doesn't apply.
I agree that different cultures train their members to "control their emotions" to different degrees. (Or, rather, I don't think that's true in general, but we've specifically been talking about the likelihood of expressing transient rage in the form of violence, and I agree that cultures differ in terms of how acceptable that is.)
I understand that, independent of any of the above, you don't like my reasoning. It doesn't make me especially happy either, come to that.
I still, incidentally, don't believe that the stigma against infanticide is primarily intended to protect infants from transient murderous impulses in their parents.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-06T21:59:29.698Z · LW(p) · GW(p)
I still, incidentally, don't believe that the stigma against infanticide is primarily intended to protect infants from transient murderous impulses in their parents.
Neither do I; the reasons for its development do need a lot of looking into. I just listed a function that it can likely accomplish with some success once it's already firmly entrenched.
..."control their emotions" to different degrees. (Or, rather, I don't think that's true in general, but we've specifically been talking about the likelihood of expressing transient rage in the form of violence, and I agree that cultures differ in terms of how acceptable that is.)
Yeah. I used "control" in the meaning of "steer", not "rule over".
↑ comment by TheOtherDave · 2012-01-01T21:17:30.137Z · LW(p) · GW(p)
Before I respond to this, can you reassure me that you're actually interested in my honest response to it?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-01T21:23:45.339Z · LW(p) · GW(p)
Yes, and by asking this you already tipped me off that it's likely to be unpleasant to me, so please fire away.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T21:37:22.575Z · LW(p) · GW(p)
Does the regret and remorse in case 1 actually matter? If it does, what do you want to say about parents who would feel less regret or remorse given the death of their child than given his or her continued life?
Replies from: Multiheaded, Multiheaded↑ comment by Multiheaded · 2012-01-06T21:52:10.488Z · LW(p) · GW(p)
If their life is that terrible, there ought to be social services to take the child away from them and a good mechanism of adoption to place the child into. And I'm willing to pay a great deal for that, in various ways, before legalizing infanticide becomes a reasonable alternative to me.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-06T22:02:42.819Z · LW(p) · GW(p)
So I repeat my question: does the regret and remorse in case 1 actually matter? For example, what if a parent was regretful and remorseful about having their child forcibly put up for adoption; would that change your position?
I understand the argument that the infant's life is valuable, and am not challenging that here. It was your invoking the parent's regret and remorse as particularly relevant here that I was challenging.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-06T22:08:32.128Z · LW(p) · GW(p)
So I repeat my question: does the regret and remorse in case 1 actually matter?
Depends on what kind of parent and what kind of person they would've been if not for that incident. There's certainly evidence that their parenting could've been poor, but I believe that it could've been just fine for a significant minority of cases. I don't sympathize much with completely worthless parents, but what we have here is not a strong enough proof of worthlessness. And I feel really terrible for the "mostly-normal" parent here that I thought of (while somewhat modeling one on myself).
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-10T10:16:58.908Z · LW(p) · GW(p)
Huh? Would someone please explain how this is disagreeable at all? Look, I'm ready to change my mind if it's the wise thing to do, I just don't understand: in which direction, and why, do you want me to shift?
↑ comment by Multiheaded · 2012-01-01T21:41:47.561Z · LW(p) · GW(p)
Does the regret and remorse in case 1 actually matter?
Enormously. For one, it could plausibly drive most people who did that to suicide.
If it does, what do you want to say about parents who would feel more regret or remorse given the death of their child than given his or her continued life?
Is there a miscommunication here? "Parents who would feel more regret or remorse given the death of their child than given his or her continued life" - that sounds to be, like, most parents in general, and ALL the parents whom society approves of.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-06T16:29:22.414Z · LW(p) · GW(p)
Indeed you're right; I mis-wrote. Fixed.
↑ comment by Bakkot · 2012-01-01T20:29:31.737Z · LW(p) · GW(p)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T20:40:42.147Z · LW(p) · GW(p)
I've never held that other people should be allowed to kill your baby, for precisely that reason
(rereads thread) Why, so you haven't. I apologize; the fear of having my baby killed (well, by anyone other than me, anyway) is as you say irrelevant to your point. My error.
↑ comment by wedrifid · 2012-01-01T20:33:42.603Z · LW(p) · GW(p)
Probably true, but there's something you seem to be neglecting: Living in fear of being killed will significantly reduce the amount of fun you're having. Making it legal to kill non-person entities doesn't introduce this fear. Making it legal to kill person entities does.
This seems to be pointing out that killing could be even worse due to fear but in fact isn't. It's more of a non-argument in favour of the opposing position than an argument in favour of yours, at least is it is framed as "but something you're neglecting".
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T20:42:34.256Z · LW(p) · GW(p)
Replies from: wedrifid↑ comment by wedrifid · 2012-01-01T20:51:24.564Z · LW(p) · GW(p)
I had trouble parsing that, could you rephrase?
The phrase "but there's something you seem to be neglecting" does not make sense as a reply to the comment you quote.
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T21:01:07.705Z · LW(p) · GW(p)
Replies from: Nornagest↑ comment by Nornagest · 2012-01-06T23:50:14.622Z · LW(p) · GW(p)
Fear is frequently fun -- ask any carnival promoter, or fans of Silent Hill. (That's small-f fun; from a big-F standpoint, we'd be looking at fear as an aspect of sensual engagement or emotional involvement, but I think the argument still holds.) Without taking into account secondary effects like grief, it's not at all clear to me that an environment containing a suitably calibrated level of lethal interpersonal threats would be less fun or less (instantaneously) Fun than one that didn't, and this holds whether or not the subject is adult.
I do think those secondary effects would end up tipping the balance in favor of adults, though, once we do take them into account. There's also a fairly obvious preference-utilitarian solution to this problem.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T00:04:25.781Z · LW(p) · GW(p)
But the fear you get from Silent Hill is fear you can walk away from and know you're not going to be attacked by zombies and nor will your loved ones. You choose when to feel it. You choose whether to feel it at all, and how often. Making fear that is known to be unfounded available on demand to those who choose it is not even in the same ballpark as making everyone worry that they're going to be killed.
Replies from: Nornagest↑ comment by Nornagest · 2012-01-07T00:12:33.878Z · LW(p) · GW(p)
True enough, and I'm not going to rule out the existence of people calibrated to enjoy low or zero levels of simulated threat (I'm pretty sure they're common, actually). It's also pretty obvious that there are levels of fear which are unFun without qualification, hence the "suitably calibrated" that I edited into the grandparent. But -- and forgive me for the sketchy evopsych tone of what I'm about to say -- the response is there, and I find it unlikely that for some reason we've evolved to respond positively to simulated threats and negatively to real ones.
Being a participant in one of the safer societies ever to exist, I don't have a huge data set to draw on. But I have been exposed to a few genuinely life-threatening experiences without intending to (mostly while free climbing), and while they were terrifying at the time I think the final fun-theoretic balance came out positive. My best guess, and bear in mind that this is even more speculative, is that levels of risk typical to contemporary life would have been suboptimal in the EEA.
Replies from: Strange7, TheOtherDave, AspiringKnitter↑ comment by Strange7 · 2012-06-05T05:25:50.774Z · LW(p) · GW(p)
How would you feel about a society otherwise similar to our own which included some designated spaces with, essentially, a sign on the door saying "by entering this room, you waive all criminal and civil liability for violent acts committed against you by other people in this room" and had a subculture of people who hung out in such places, intermittently mutilating and murdering each other?
Replies from: Nornagest↑ comment by Nornagest · 2012-06-05T06:01:14.492Z · LW(p) · GW(p)
I think I'd be okay with it in principle, in the absence of some well-established psychology showing strong negative externalities and in the presence of some relatively equitable system for mitigating the obvious physical externalities (loss of employment due to disability, etc.), preferably without recourse to the broader society's resources. I probably wouldn't participate in the subculture, though -- my own level of fun calibration relative to threat isn't that high.
Replies from: Strange7↑ comment by Strange7 · 2012-06-27T15:44:56.318Z · LW(p) · GW(p)
Well, keep in mind, even inside such a room social norms would rapidly evolve against letting things get too exciting, it's just that there wouldn't be any recourse to a larger legal system to resolve the finer points.
Maybe a big guy sits down in the corner with a tattoo across his bare chest saying "I am the lawgiver, if anyone in the room I watch is injured or killed without appropriate permission I will break the aggressor's arms" and mostly follows through on that. When somebody kicks the lawgiver's ass without taking over the job, everybody else votes with their feet.
↑ comment by TheOtherDave · 2012-01-07T00:27:25.285Z · LW(p) · GW(p)
a few genuinely life-threatening experiences [..] I think the final fun-theoretic balance came out positive
Death represents pretty significant disutility; if the experience was significantly life-threatening, you're attributing some correspondingly significant utility to the experience of surviving. How confident are you?
Replies from: Nornagest↑ comment by Nornagest · 2012-01-07T00:36:51.107Z · LW(p) · GW(p)
Ah. I probably should have been clearer about that. Above I haven't been talking about expected utilities (which are likely negative, although I'd need a clearer picture of the risks than I have to do the math); in the last paragraph of the grandparent I was discussing the sum of fun-theoretic effects applying to me in the local Everett branch, and previously I'd been talking about what I assumed to be the utilitarianism of Bakkot's hypothetical (which seemed to make the most sense as an average-utilitarian framework with little or no attention given to future preferences).
My preferences do contain a large negative term for death (and I don't free climb anymore, incidentally). I'm not that reckless.
↑ comment by AspiringKnitter · 2012-01-07T01:26:06.571Z · LW(p) · GW(p)
Okay, yes. However, I'm almost certain that having killers running around unchecked will not produce the optimal level and type of fear in the greatest possible number of people.
I find it unlikely that for some reason we've evolved to respond positively to simulated threats and negatively to real ones.
Why? A simulated threat prompts an immediate response, but killers on the loose prompt a lot of worrying over a long period of time. While fighting off a murderer might spike your adrenaline, that's not what killers on the loose will do. Instead people will lock their doors. They'll fear for their safety. They'll be afraid to let strangers into their home. They'll worry about what happens if they have a fight with their friend-- because the friend can commit murder with impunity. They'll look over their shoulders. Parents will spend every second worrying about their children. The children will have little or no freedom, because the parents won't leave them alone and may just keep them inside all the time, which is NOT optimal. People will have a lot of cortisol, depressing immune systems and promoting obesity.
That's NOT THE SAME as a single burst of adrenaline, whether from falling while climbing or from watching a movie or even from fighting for your life. So I guess you're right that it's not about whether it's real or not (though if it's a game, then when it gets too intense, you can just turn it off, and you can't turn off real life), but about the type of threat. However, the simulated threat doesn't actually make you less likely to continue living, whereas a real threat does.
Replies from: Nornagest↑ comment by Nornagest · 2012-01-07T01:53:30.325Z · LW(p) · GW(p)
Well, of course I don't think that allowing murder without restriction is going to make everyone fun-theoretically better off, let alone maximally satisfy their preferences over the utilitarian criteria I actually believe in. My original claim was a lot narrower than that, and in any case I'm mostly playing devil's advocate at this point; although I really do think that fun-theoretic optimization is best approached without reflexively minimizing things like fear or pain on grounds of our preexisting heuristics. That said, I'm not sure this is always going to be true:
A simulated threat prompts an immediate response, but killers on the loose prompt a lot of worrying over a long period of time. While fighting off a murderer might spike your adrenaline, that's not what killers on the loose will do. Instead people will lock their doors [...] People will have a lot of cortisol, depressing immune systems and promoting obesity.
We know about a lot of societies with a lot of different accepted levels of violence. The most violent that I know of present up to about a 30% chance of premature death, so much higher than anything Western society presents that it's scarcely conceivable (even front-line soldiers don't have those death rates, although front-line service is more dangerous per unit time). But there's very much not a monotonic relationship between level of violence and cultural paranoia, or trust of strangers, or freedom given to children. Early medieval Iceland, for example, had murder rates orders of magnitude higher than what we see now (implicit in textual sources, and confirmed by skeletal evidence); but children worked and traveled independently there, and hospitality to strangers was enshrined in law and custom. The same seems to go for more contemporary societies if the murder rates I've seen are at all accurate, although I don't have as rich a picture for most of them. Our cultural fears of violence are very poorly correlated with actual expectations, as even a cursory glance over the most recent child molestation scare should show.
If studies of relative cortisol levels have ever been performed, I don't know about them; but the cultures themselves don't seem to show evidence of that kind of stress. I'd expect to see more paranoia following a recent uptick in violence, but I wouldn't expect to see it well correlated with the base rate.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T02:15:33.661Z · LW(p) · GW(p)
Early medieval Iceland, for example, had murder rates orders of magnitude higher than what we see now (implicit in textual sources, and confirmed by skeletal evidence); but children worked and traveled independently there, and hospitality to strangers was enshrined in law and custom. The same seems to go for more contemporary societies if the murder rates I've seen are at all accurate, although I don't have as rich a picture for most of them.
Okay. What kind of murder are we talking about? What made up most of the extra-- was it all sorts of things or was it duels? And was it accepted or was it frowned on? Were murderers prosecuted? Did victims' families avenge them?
I'd expect to see more paranoia following a recent uptick in violence, but I wouldn't expect to see it well correlated with the base rate.
Good point.
Replies from: Nornagest↑ comment by Nornagest · 2012-01-07T07:24:10.700Z · LW(p) · GW(p)
Okay. What kind of murder are we talking about? What made up most of the extra-- was it all sorts of things or was it duels? And was it accepted or was it frowned on? Were murderers prosecuted? Did victims' families avenge them?
I'm not historian enough to say for sure, unfortunately. Judicial duels were part of the culture there, but the textual sources indicate that informal feuds were common, as were robbery and various other forms of informal violence. You could bring suit upon a murderer or other criminal in order to compel them to pay blood money or suffer in kind, but there was much less central authority than we're used to, and nothing resembling a police force.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T08:21:32.563Z · LW(p) · GW(p)
Was it by any chance a culture of honor?
Replies from: Nornagest↑ comment by Nornagest · 2012-01-07T18:27:58.690Z · LW(p) · GW(p)
Yes. Don't get too hung up on the specific example, though; I chose it only because it's a time and place that I've actually studied. The pattern (or, really, lack of a pattern) I'm trying to point to is much more general, and includes many cultures that don't have a strong emphasis on honor.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T21:00:14.379Z · LW(p) · GW(p)
Okay.
↑ comment by Multiheaded · 2012-01-01T20:18:02.909Z · LW(p) · GW(p)
Much less significantly, a culture in which you are obliged to either raise your children or see them put through foster care is also a much less fun culture to live in.
Somewhat regardless of our private feelings on the matter, a tip: forget OKCupid -- do you not see how earnestly stating such beliefs in public gives your handle a reputation you might not mind in general, yet might greatly want to avoid at some future point in your LW blogging, such as when trying to sway someone in an area concerning ethical values and empathy?
And it's not clear that adding a person to the universe (as things stand today) will, on average, increase the amount of fun had down the line; this is why you're not obliged to be trying to have as many children as possible at all times.
Now that's pretty certain.
Replies from: Bakkot, TheOtherDave↑ comment by Bakkot · 2012-01-01T20:22:00.021Z · LW(p) · GW(p)
Replies from: Emile, wedrifid, Multiheaded↑ comment by Emile · 2012-01-02T00:54:32.290Z · LW(p) · GW(p)
I'd hope that LessWrong is a community in which having in the past been willing to support controversial opinions would increase your repute, not decrease it.
Giving respect to controversy for the sake of controversy is just inviting more trolling and flamewars.
I have respect for true ideas, whether they are outmoded or fashionable or before their time. I don't care whether an idea is original or creative or daring or shocking or boring, I want to know if it's sound.
The fact that you seem to expect increased respect because of controversial opinions makes me think that when you wrote about your support for infanticide, you were motivated more by the fact that many people disagreed with you than by the fact that it's actually a good idea that would make the world a better place.
You remind me of Hanson (well, Doherty actually) on Libertarian Purity Duels:
Libertarians are a contentious lot, in many cases delighting in staking ground and refusing to move on the farthest frontiers of applying the principles of noncoercion and nonaggression; resolutely finding the most outrageous and obnoxious position you could take that is theoretically compatible with libertarianism and challenging anyone to disagree. If they are not of the movement, then you can enjoy having shocked them with your purism and dedication to principle; if they are of the movement, you can gleefully read them out of it.
Replies from: Bakkot
↑ comment by Bakkot · 2012-01-02T02:13:43.970Z · LW(p) · GW(p)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-02T09:08:00.271Z · LW(p) · GW(p)
...whereas my positions on Newcomb's paradox... are not
two-box
Let's not go off on that tangent here, but two-boxing is hardly uncontroversial on LW: there are lots of one-boxers here, including Yudkowsky. I'm one too. Also, didn't you say you "want to win"?
Replies from: Bakkot↑ comment by wedrifid · 2012-01-01T20:26:59.744Z · LW(p) · GW(p)
I'd hope that LessWrong is a community in which having in the past been willing to support controversial opinions would increase your repute, not decrease it. If we always worry about our reputation when having discussions about possibly controversial topics, we're not going to have much discussion at all.
We don't mind. You aren't actually going to kill babies, and you aren't able to make it legal either (i.e. "mostly harmless"). Just don't count too much on your anonymity! Assume that everything you say on the internet will come back to haunt you in the future -- when trying to get a job, for example. Or when you are unjustly accused of murder in Italy.
EDIT: Pardon me, when I say "we" don't mind I am speaking for myself and guessing at an overall consensus. I suspect there are one or two who do mind - but that's ok and I consider it their problem.
Replies from: Bakkot, Multiheaded↑ comment by Multiheaded · 2012-01-01T20:44:07.923Z · LW(p) · GW(p)
you aren't able to make it legal either
That only has a certainty approaching 1 if we all went and forgot about CEV and related prospects.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T21:05:59.289Z · LW(p) · GW(p)
Really? What's your estimate of the probability that Bakkot's inclusion in a CEV-calculating algorithm's target mind-space will make it more likely for the resulting CEV to tolerate infanticide?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-01T21:55:39.797Z · LW(p) · GW(p)
Pretty negligible, but still orders of magnitude above Bakkot just altering society to tolerate infanticide on his own.
Replies from: wedrifid, TheOtherDave↑ comment by TheOtherDave · 2012-01-01T22:10:14.973Z · LW(p) · GW(p)
I think I'm not understanding you.
Call P1 the probability that Bakkot's inclusion in a CEV-calculating algorithm's target mind-space will make it more likely for the resulting CEV to tolerate infanticide. Call P2 the probability that Bakkot isn't capable of making infanticide legal, disregarding P1.
You seem to be saying P1 approximately equals 0 (which is what I understand "negligible" to mean), and P2 approximately equals 1, and that P2-P1 does not approximately equal 1.
I don't see how all three of those can be true at the same time.
Edit: if the downvotes are meant to indicate I'm wrong, I'd love a correction as well. OTOH, if they're just meant to indicate the desire for fewer comments like these, that's fine.
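The algebra in this exchange can be made concrete with a quick sketch (the specific values below are invented purely for illustration, not drawn from the thread):

```python
# Illustrative values only -- invented to make the P1/P2 algebra concrete.
P1 = 1e-9       # "negligible": CEV-inclusion shifts the outcome
P2 = 0.999999   # "approaching 1": Bakkot can't make infanticide legal

# If P1 is approximately 0 and P2 is approximately 1, then P2 - P1 is
# forced to be approximately 1 as well; the three claims can't all
# hold simultaneously.
difference = P2 - P1
print(difference)
```

With any values fitting "P1 negligible, P2 near 1", the difference stays near 1, which is the inconsistency being pointed out.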
Replies from: dlthomas↑ comment by dlthomas · 2012-01-01T22:24:17.498Z · LW(p) · GW(p)
Where do you get "P2 approximately equals 1"?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T22:42:05.666Z · LW(p) · GW(p)
Multiheaded said "That only has a certainty approaching 1 if we all went and forgot about CEV and related prospects."
I understand "that" to refer to "Bakkot isn't able to make infanticide legal".
I conclude that the probability that Bakkot isn't capable of making infanticide legal, if we forget about CEV and related prospects, is approximately 1.
P2 is the probability that Bakkot isn't capable of making infanticide legal, if we disregard the probability that Bakkot's inclusion in a CEV-calculating algorithm's target mind-space will make it more likely for the resulting CEV to tolerate infanticide.
I conclude that P2 is approximately 1.
↑ comment by Multiheaded · 2012-01-01T20:33:04.801Z · LW(p) · GW(p)
I'd hope that LessWrong is a community in which having in the past been willing to support controversial opinions would increase your repute, not decrease it.
Not always. For any random Lesswrongian with a contrarian position you're nearly sure to find a Lesswrongian with a meta-contrarian one.
Also, notice that your signaling now is so bad from a baseline human standpoint that people's sociopath/Wrong Wiring alarms are going off, or would go off if there were more such signaling. I think my alarm's just kinda sensitive* because I've had it triggered by, and calibrated on, myself many times.
*(Alas, this could also be evidence that along the line I subconsciously tweaked this bit of my software to get more excuses for playing inquisitor with strangers)
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T20:40:05.603Z · LW(p) · GW(p)
Replies from: orthonormal, TheOtherDave↑ comment by orthonormal · 2012-01-07T17:21:25.807Z · LW(p) · GW(p)
FWIW, I disagree with you but you don't set off my "sociopath alarm". I think you and Multiheaded may not be able to have a normal conversation with each other, but each of you seems to get along fine with the rest of LW.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-07T21:00:35.445Z · LW(p) · GW(p)
I think you and Multiheaded may not be able to have a normal conversation with each other
If it helps, I can pretty much envision what's needed for such a conversation, and understand full well that the reasons it's not actually happening are all in myself and not in Bakkot. But I don't have the motivation to modify myself that specific way. On the other hand, it might come along naturally if I just improve in all areas of communication.
Heck, I might be speaking in Runglish. Bed tiem.
↑ comment by TheOtherDave · 2012-01-06T16:26:22.865Z · LW(p) · GW(p)
I'm curious: did you?
Replies from: Bakkot↑ comment by Bakkot · 2012-01-07T05:27:05.652Z · LW(p) · GW(p)
Replies from: daenerys↑ comment by daenerys · 2012-01-07T18:09:22.024Z · LW(p) · GW(p)
If it helps, my opinion of you has been raised by this thread, rather than lowered. I think very few LWians actually think less of you for this discussion, but that could just be the typical mind fallacy on my part.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-07T21:22:52.077Z · LW(p) · GW(p)
I think very few LWians actually think less of you for this discussion
That's lumping a whole lot of things together. I'd gladly hire Bakkot if I was running pretty much any kind of IT business. I'd enjoy some kinds of debate with him. I'd be interested in playing an online game with him. I probably wouldn't share a beer. I definitely would participate in a smear campaign if he was running for public office.
↑ comment by TheOtherDave · 2012-01-01T20:25:44.565Z · LW(p) · GW(p)
Do you mean that it's pretty certain that I'm not obliged to be trying to have as many children as possible at all times?
Or that it's pretty certain that the fact that it's not clear that adding a person to the universe (as things stand today) will, on average, increase the amount of fun had down the line is why I'm not obliged to be trying to have as many children as possible at all times?
Or both?
Also: how important is it to you to manage your handle's reputation in such a way as to maximize your ability to sway someone on LW in areas concerning ethical values and empathy?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-01T20:40:41.117Z · LW(p) · GW(p)
Or both?
Hmm. Ehhh? ...Feels like both.
Also: how important is it to you to manage your handle's reputation in such a way as to maximize your ability to sway someone on LW in areas concerning ethical values and empathy?
Unimportant, because I'm poor at persuading the type of people who care about their status on LW anyway, and am only at all likely to make an impact on the type of person who, like me, cares little/sporadically about their signaling here.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T20:42:36.695Z · LW(p) · GW(p)
OK, thanks for clarifying.
↑ comment by Multiheaded · 2012-01-06T21:09:57.119Z · LW(p) · GW(p)
Much less significantly, a culture in which you are obliged to either raise your children or see them put through foster care is also a much less fun culture to live in.
Quite aside from everything else, this line is needlessly grating to anyone who even slightly adheres to Western culture's traditional values. You could've phrased that differently... somehow. There's a big difference between denouncing what a largely contrarian audience takes as the standards imposed upon them by society at large and denouncing what they perceive to be their own values. This might be hypocritical, but I guess that many LW readers feel just like that.
↑ comment by wedrifid · 2012-01-01T20:31:22.639Z · LW(p) · GW(p)
If I kill a person, the number of Fun-having-person-moments in the universe is reduced by the remaining lifetime that person would potentially have had. If I kill a baby, the number of Fun-having-person-moments in the universe is reduced by the entire lifetime of the person that baby would potentially have become.
Go start breeding now. Or, say, manufacture defective condoms. (Or identify your real reason for not killing babies.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T20:35:51.966Z · LW(p) · GW(p)
Please re-read the comment thread. If you still think we're talking about my reasons for doing or not doing anything in particular, let me know, and I'll try to figure out how to prevent such misunderstandings in the future.
↑ comment by Multiheaded · 2012-01-01T19:24:33.427Z · LW(p) · GW(p)
Oh blast it, I'll just be honest.
Right now, I simply can't help but feel that if everyone who'd find it preferable to our world was (in real life) hit by a truck tomorrow, my utility function would increase.
Replies from: daenerys, Solvent↑ comment by daenerys · 2012-01-01T21:16:16.069Z · LW(p) · GW(p)
if everyone who'd find it preferable to our world was (in real life) hit by a truck tomorrow, my utility function would increase.
Downvoted.
You just said that you want me dead in real life.
I don't see how this is at all acceptable. Having a different viewpoint than you (note: I have never killed any babies, nor do I have any desire to) does not make saying these things to me, and others with my view, ok.
Replies from: TheOtherDave, Multiheaded↑ comment by TheOtherDave · 2012-01-01T21:23:11.603Z · LW(p) · GW(p)
If it should happen that tomorrow I find myself in the state of believing I would be happier were you dead, what do you think I ought to do about that?
I mean, I think we can agree that I ought not take steps to end your life, nor should I threaten to do so. (Multiheaded did neither of these things.)
But would it really be unacceptable for me to observe out loud that that was the state I was in?
Why?
↑ comment by juliawise · 2012-01-04T23:53:05.194Z · LW(p) · GW(p)
But would it really be unacceptable for me to observe out loud that that was the state I was in?
That depends on what it contributes to the discussion. "I'm too tired to talk about this now" or "I find it distressing that you think a world with less stigma against infanticide would be fun" help us understand where the other is coming from, even if they don't help us understand the topic better.
"I wish you were dead" detracts from the discussion.
↑ comment by duckduckMOO · 2012-01-04T22:46:36.817Z · LW(p) · GW(p)
Multiheaded said his/her (it's her, right? >_>) utility would increase, not happiness. If this is true then, ignoring opportunity costs, dead is what daenerys and other baby-killing advocates ought to be, subjectively-objectively for Multiheaded.
edit: but it's almost definitely not true. Utility was probably being conflated with something, or Multiheaded was biased by emotional state (was REAL MAD, in less technical terms).
↑ comment by daenerys · 2012-01-01T21:38:58.072Z · LW(p) · GW(p)
Can somebody else please give answering this a crack? Because I think I am too upset that this question is even disputed to be able to provide a clear answer. Best shot:
To me it seems obvious that there is a category of Things You Shouldn't Say To People. "I wish you were dead" and its variants definitely fall under that category. The utility you get from saying it is less than the disutility I get from hearing it. Also, it leads to a poisonous society that no one wants to participate in.
Edit: I am amused that my post admitting to having an emotional reaction affect my reasoning abilities got downvoted.
Replies from: TheOtherDave, TheOtherDave, Multiheaded↑ comment by TheOtherDave · 2012-01-02T04:28:06.538Z · LW(p) · GW(p)
For what it's worth, I don't believe you deserved the downvote. I also don't believe most of the other comments in this thread deserved to be downvoted, especially since it makes it far less likely that anyone else will give answering my question a crack, since it's mostly invisible now.
That said, I do understand the "it's OK for it to be true but you can't say it" mainstream social convention, which is what you seem to be invoking.
It just doesn't seem to fit very well with the stated goals of this site. For my own part, if someone wants me dead, I want to know they want me dead. We can't engage with or improve a reality we're not allowed to even admit to. (Which is also why I dispute the "poisonous society" claim. A society where it's understood that people might want me dead and there's no way for me to know because of course they won't ever say it seems far more poisonous to me.)
Replies from: daenerys↑ comment by daenerys · 2012-01-02T16:42:29.162Z · LW(p) · GW(p)
Slightly better next day answer:
I never declared Crocker's Rules on this site. If you would like to, you can, and people can tell you when they want you dead.
However, blanket statements such as "I wish everyone who holds Position X were dead" are never ok, because you can't know that absolutely everyone who holds Position X has declared Crocker's Rules. Even if everyone who participated in the discussion under Position X has declared Crocker's Rules, there might be lurkers who haven't.
I suppose an exception to that might be "I wish everyone who has declared Crocker's Rules was dead", but I can't see why anyone would make that statement.
Replies from: TheOtherDave, TheOtherDave↑ comment by TheOtherDave · 2012-01-02T16:54:07.146Z · LW(p) · GW(p)
I'm still curious, however, about your answer to my original question. If it should happen that tomorrow I find myself in the state of believing I would be happier were you dead, what do you think I ought to do about that?
Or, if the answer is different: If it should happen that tomorrow you find yourself in the state of believing you would be happier were I dead, what do you think you ought to do about that? (Given that I too have not declared Crocker's Rules.)
I mean, I understand that you don't think we should actually tell each other about it, but I'm wondering if that's all there is to say on the matter... just keep the feeling secret and go on about our business normally?
↑ comment by TheOtherDave · 2012-01-02T16:47:50.672Z · LW(p) · GW(p)
That's fair.
For my own part, that's not the threshold I consider Crocker's Rules to endorse crossing, but I suppose reasonable people can disagree on where that threshold is; over time the actual threshold will come to resemble some aggregated function of our opinions on the matter, and announcements like yours are part of that process.
↑ comment by TheOtherDave · 2012-01-01T21:45:16.235Z · LW(p) · GW(p)
Sorry to have upset you. Thanks for answering my question.
↑ comment by Multiheaded · 2012-01-01T21:49:37.425Z · LW(p) · GW(p)
Also it leads to a poisonous society that no one wants to participate in.
Believe me, I really feel that sentiment much more strongly with regard to infanticide than you feel it with regard to passive-aggressive rudeness.
↑ comment by Multiheaded · 2012-01-01T21:21:31.025Z · LW(p) · GW(p)
You just said that you want me dead in real life.
Well, you, ceteris paribus, would want people - including, in particular, emotionally volatile people like me - free to kill their children in real life. I'd hate that more than I'd regret your death, indeed!
(Although at no point and in no way am I going to be insane enough to really kill you, just as you're not insane enough to personally kill babies.)
↑ comment by Solvent · 2012-01-02T03:22:00.169Z · LW(p) · GW(p)
if everyone who'd find it preferable to our world was (in real life) hit by a truck tomorrow, my utility function would increase.
I think you should take that back, personally. I can understand you saying it out of frustration, but saying that you want people dead is generally a bad thing to do.
↑ comment by Multiheaded · 2012-01-01T19:14:45.361Z · LW(p) · GW(p)
Oh, and you're creating significant emotional turmoil in me right now. I'm stepping away and going to sleep, although I don't suspect that this turmoil is any sign of me being less rational than you in regards to our respective values right now.
Replies from: juliawise↑ comment by TimS · 2012-01-01T18:06:00.448Z · LW(p) · GW(p)
First, any single relaxed taboo is a blow against the entire net of ethical inhibitions
This is not an uncontested statement.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-01T18:16:31.627Z · LW(p) · GW(p)
Thanks for catching me, adjusted.
↑ comment by soreff · 2012-01-01T17:06:01.579Z · LW(p) · GW(p)
The other ones would have an abnormally strong will to override barriers and self-modify, which can easily make them just as dangerous.
You are overlooking the extreme situations some people are forced into. Looking at the act as being primarily a function of a person's internal state can be a poor approximation. As nearly as I can tell, if an arbitrarily selected person in the West were put in a situation as dire as these infanticidal mothers had been forced into, they would quite probably do the same thing.
Note that the geographical variation in infanticide rates is more plausibly consistent with external factors driving the rates than internal factors. The populations of the USA and Canada are not hugely different, yet there is a 2X difference in the rates between them (as I quoted from the article that I cited before). I strongly doubt that the proportion of psychopaths and extreme self-modifiers differs so strongly between the two nations - but the US has been shredding its social safety nets for years.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-01T17:21:50.974Z · LW(p) · GW(p)
This is easy enough to check. Do most poor, fairly desperate people whose situation is sufficiently like that of our hypothetical normal child-killer, in fact, kill their children?
(No, I can't quite define "sufficiently alike" right off the bat. Wouldn't mind working it out together.)
↑ comment by Multiheaded · 2012-01-01T16:32:43.275Z · LW(p) · GW(p)
The Greeks and Romans, for example, don't seem to have been run down by psychopaths.
With genocide of any foreigners and mass torture for entertainment also having been considered perfectly acceptable, the Roman culture in the flesh would certainly feel alien enough to us that a utilitarian, altruistic time traveler could likely be predicted to attempt to sway it, with virtually any means justifying the end for them.*
I know I would, and I know that I'm not an unusual decision maker for the LW community.
*(cue obvious SF story idea with the time traveler ending up as Jesus)
Replies from: juliawise↑ comment by juliawise · 2012-01-01T16:47:43.067Z · LW(p) · GW(p)
But these seem to have been larger cultural phenomena, not the unchecked actions of a few psychopaths. Psychopathy affects around 1% of the population, and I doubt so few people could have swayed the entire culture if the rest of them had no interest in killing people.
Replies from: Strange7, Multiheaded↑ comment by Strange7 · 2012-06-05T04:28:09.059Z · LW(p) · GW(p)
One percent of the modern population. How much historical data is there?
Replies from: juliawise↑ comment by juliawise · 2012-06-05T19:26:30.551Z · LW(p) · GW(p)
You're right that we don't have data on the incidence of psychopathy in ancient Rome, and our data on its current incidence is pretty sketchy. (Unlike most mental illnesses, psychopathy is more a problem for other people than for the person who has it, so psychopaths have no reason to seek treatment. Not that we really have any treatment if they did.)
But there seem to be both genetic and social components (e.g. being abused as a child), so those same genetic predispositions probably got triggered in some people throughout history. Possibly at different rates than here and now.
↑ comment by Multiheaded · 2012-01-01T16:53:47.145Z · LW(p) · GW(p)
See my reply's second comment.
↑ comment by TheOtherDave · 2012-01-01T16:10:56.759Z · LW(p) · GW(p)
I suspect a lot of the people who would agree with this sentiment would change their minds in the face of a sufficiently compelling argument that there exists some scenario under which they would be able to kill their child.
Replies from: juliawise↑ comment by juliawise · 2012-01-01T16:21:19.817Z · LW(p) · GW(p)
I've worked with parents of very disabled children, and it's not an easy life. For mothers especially, it becomes your career. I can imagine a lot of parents might consider infanticide if they knew that was going to be their life.
Replies from: daenerys↑ comment by daenerys · 2012-01-01T20:35:55.954Z · LW(p) · GW(p)
Ditto, as someone who works in disability care and child care (including infant care), I support the baby-killing scenario.
I worked for a family that had a severely mentally and physically disabled 6-year-old. She was at infant-level cognition, practically blind, and had very little control over her body. There was almost nothing going on mentally, but she was very volatile about sounds/music/surroundings. You could tell if she was happy or sad by whether she was laughing or crying, and she cried a LOT.
Trying to get her to STOP crying was extremely difficult, because there was no communication, and she never wanted the SAME things. However it was also very important to get her calm QUICKLY because if she cried too long she would have a "meltdown", be near inconsolable, throw up, and then you'd have to vent her stomach.
Her parents were the best at reading her. They trained people by pretty much putting you in a room with her, until you developed an ineffable intuitive ability to keep her happy. When I moved to a different city, it took them about 3-4 months to find a replacement for me who wouldn't quit by the second day. I was driving back to my old city once a week to work for them during that time.
Her existence had a terrible effect on her family. They had to hire around the clock care. As in, amazingly patient care-givers that were hard to find, to cover about 100 hours a week. I would get stressed covering 2 shifts a week, and I don't know how her parents were managing to cope.
This child was a drain on society and on everyone around her. Because of her parents' religious values, they wouldn't kill her even if it were legal. But their lives would have been dramatically improved if it were otherwise.
Also, I agree that infants have personhood less than or equal to that of many animals. The way I handle the discrepancy is by being a vegetarian. But since most people aren't vegetarians, they don't really have a strong supporting reason to be against legalized infanticide.
Replies from: Vaniver↑ comment by Vaniver · 2012-01-02T00:06:36.975Z · LW(p) · GW(p)
So, my position is that the necessary standard to justify ending a 10-month-old's life is only a bit lower than that for ending an 18-year-old's life, and only a bit higher than the necessary standard to justify ending a fetus's life. I'm patient. But what that statement often obscures is that I'm willing to let people meet that standard. I would support ending the life of the individual you described at ages of 6 years, 60 years, 6 months, or 6 months after conception.
But the acknowledgement that not every life should be continued is very different from a "return policy" sort of infanticide which Bakkot is justifying by saying "well, they're not people yet." Sometimes it's best to kill people, too, and so personhood isn't the true issue.
↑ comment by orthonormal · 2012-01-04T01:21:56.539Z · LW(p) · GW(p)
Ah, I was wondering how the welcome thread got to more than 500 comments so quickly!
↑ comment by [deleted] · 2012-01-03T19:53:56.477Z · LW(p) · GW(p)
In other posts in this thread I've discussed infanticide, and proposed ways to reduce parental grief in cultures that would adopt it (I didn't say it should be adopted, btw). But only now did I remember that the practice of infanticide where others perform the killing (something I proposed downthread as an implementation that would reduce psychological stress) reminded me of the practice of killing "mingi" (cursed) children in Ethiopia. Many of the individuals exposed to outside culture would prefer to adopt it or at least find ways to not kill the children while still severing them from the parents.
While obviously CNN, as always, has a progressive-Eurocentric-mind-projection-fallacy spin in its reporting, and the tribes in question may just be adopting the preferences of higher-status tribes and groups rather than abandoning the practice because not practising it seems so much better, I do think this is weak evidence that people prefer to live in societies that don't practice infanticide. Also, reading some of the accounts has caused me (rightfully or not) to increase my estimate of the psychological suffering of parents. But considering that this wasn't a choice in most cases, it isn't that large either. I shouldn't be surprised; humans are built to live in a world where life is cheap, after all.
I have no doubt that the practice of mingi historically did indeed help the tribe; taken as a whole, traditions do tend to be adaptive in the environment in which they were established. But now that their (social) environment has changed, the practice seems to be falling out of favour.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-04T15:25:00.869Z · LW(p) · GW(p)
I do think this is weak evidence that people prefer to live in societies that don't practice infanticide.
Thanks for updating.
↑ comment by occlude · 2012-01-01T21:02:39.075Z · LW(p) · GW(p)
Please let me know if I've missed a discussion of this point; it seems important, but I haven't seen it answered.
What is the particular and demonstrable quality of personhood that defines this okay to kill/not okay to kill threshold? In short, what is blicket?
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T21:10:12.055Z · LW(p) · GW(p)
Replies from: occlude, TheOtherDave↑ comment by occlude · 2012-01-01T21:55:29.684Z · LW(p) · GW(p)
I won't argue that newborns are people, because I have the same problem defining person that you seem to have. But until I can come up with a cogent reduction distilling person to some quality or combination of qualities that actually exist -- some state of a region of the universe -- it seems prudent to err on the side of caution.
Replies from: Bakkot↑ comment by TheOtherDave · 2012-01-01T21:25:04.834Z · LW(p) · GW(p)
Well, one relatively simple question that might help clarify some things: do I remain a person when I'm asleep?
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T21:36:09.923Z · LW(p) · GW(p)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T21:43:52.716Z · LW(p) · GW(p)
Cool. Would I still be a person while in a coma that I will naturally come out of in five years but not before? (I recognize that no observer could know that this was the case, I'm just asking whether in fact I would be, if it were. Put another way: after I woke up, would we conclude that I'd been a person all along?)
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T21:53:41.144Z · LW(p) · GW(p)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-01T21:56:28.770Z · LW(p) · GW(p)
OK, cool... that clarifies matters. Thanks.
↑ comment by Vaniver · 2012-01-01T10:16:18.555Z · LW(p) · GW(p)
Infanticide of one's own children should be legal (if done for some reason other than sadism) for up to ten months after birth. Reason: extremely young babies aren't yet people.
What's your discount rate?
(That is, if I offered you $100 now, or $X a year from now, what is the lowest value of X that would make you choose the latter option?)
Replies from: Bakkot↑ comment by Bakkot · 2012-01-01T18:36:21.741Z · LW(p) · GW(p)
Replies from: Vaniver↑ comment by Vaniver · 2012-01-01T22:12:07.696Z · LW(p) · GW(p)
I would love to loan you money at 20% interest. Send me a private message if you're interested.
but they're not yet;
When playing chess, how many moves ahead do you look?
you're not doing harm to a person by infanticide any more than you are by using contraception.
A man produces about 47 billion sperm a year; a woman releases about 13 eggs a year; a couple that tries to become pregnant over the course of a year has about a 75% chance of a live birth if the woman is 30. So each feasible sperm-egg combination over the course of a year has roughly a one-in-a-trillion chance of making it to a live birth. *
As soon as conception happens, then you've got a zygote which is very likely to make it to live birth. And once it makes it to live birth, it's very likely to make it to adulthood. So there seems to be a very bright line at conception. (Contraceptives prevent conception; condoms by preventing sperm from entering, the pill by preventing ovulation, and so on.)
(I should note that I think there are sound reasons to treat a risk that will end one out of a trillion people chosen at random as less of a concern than a risk that is certain to end a certain person, and that this line of reasoning depends heavily on this premise, but it would take too long to go into those reasons here. I can in another comment if you're interested.)
*Noting that 'potential resulting individual DNAs' are individually much less likely than just sperm-egg combinations.
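The arithmetic above can be sanity-checked in a few lines of Python. The per-year figures are the ones quoted in the comment, and treating every sperm-egg pairing as equally likely is a simplifying assumption:

```python
# Back-of-the-envelope check of the "about a trillionth" figure.
sperm_per_year = 47e9   # sperm produced by a man in a year (figure from the comment)
eggs_per_year = 13      # eggs released by a woman in a year
p_live_birth = 0.75     # chance of a live birth for a couple trying for a year (woman aged 30)

combinations = sperm_per_year * eggs_per_year    # feasible sperm-egg pairings in a year
p_per_combination = p_live_birth / combinations  # assuming each pairing is equally likely

print(f"{p_per_combination:.1e}")  # 1.2e-12, i.e. about a trillionth
```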
Replies from: MixedNuts, Bakkot↑ comment by MixedNuts · 2012-01-02T11:37:25.959Z · LW(p) · GW(p)
As soon as conception happens, then you've got a zygote which is very likely to make it to live birth.
From the NIH:
It is estimated that up to half of all fertilized eggs die and are lost (aborted) spontaneously, usually before the woman knows she is pregnant. Among those women who know they are pregnant, the miscarriage rate is about 15-20%. Most miscarriages occur during the first 7 weeks of pregnancy. The rate of miscarriage drops after the baby's heart beat is detected.
So your bright line should be heartbeat, or at least zygote implantation. This does not significantly affect your conclusions.
Replies from: Vaniver↑ comment by Bakkot · 2012-01-01T22:21:42.981Z · LW(p) · GW(p)
Replies from: Vaniver↑ comment by Vaniver · 2012-01-01T23:31:53.390Z · LW(p) · GW(p)
One or two, but for me deciding which move to make is mostly instinct rather than lookahead. Also, I'm not entirely sure how this is relevant.
What role should the future play in decision-making?
For me, it seems that if you're confident that having more people in the world is a net positive, then it follows necessarily that the moral thing to do is to try to have as many children as possible.
It is not clear to me that prohibiting murder derives from that position or mandates birth.
If you're not sure of this, I don't understand how you can conclude it's a moral wrong to destroy something which is not yet a person but merely has the potential to become one.
By quantification of "merely." If we determine that a particular coma patient has a 90% chance of reawakening and becoming a person again, then it seems almost as bad to end them as it would be to end them once they were awake. If we determine that a particular coma patient has a 5% chance of reawakening and becoming a person again, then it seems not nearly as bad to end them. If we determine that a particular coma patient has a 1e-6 chance of reawakening and becoming a person again, then it seems that ending them has little moral cost.
If infants are nearly guaranteed to become people, then failing to protect them because we are impatient does not strike me as wisdom.
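The probability-weighted view above can be sketched as a toy calculation. The linear scaling of moral cost with probability is my own illustrative assumption, not something the comment commits to:

```python
# Toy model: the moral cost of ending a coma patient scales with their
# chance of reawakening and becoming a person again (illustrative assumption).
def expected_moral_cost(p_reawaken, cost_of_ending_a_person=1.0):
    return p_reawaken * cost_of_ending_a_person

for p in (0.9, 0.05, 1e-6):
    print(p, expected_moral_cost(p))
```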
Replies from: Bakkot
comment by Bruno_Coelho · 2011-12-26T23:00:21.277Z · LW(p) · GW(p)
Hi everybody,
I’m male, 24, a philosophy student, and I live in the Amazon region of Brazil. I came across Less Wrong through the zombies sequence, because early on one of my intellectual interests was analytic philosophy. I saw that reductionism and rationality have the power to answer various questions, reframing them as something factually tractable. My goals here are to contribute to the community in a useful way, learn as much as possible, become stronger, and save the world by reducing the risk of human extinction. I'm looking for some advice on these topics: Bayesian epistemology, moral uncertainty, and the complexity of our wishes. If some of the participants in the forum can help me, I will be very grateful.
Replies from: orthonormal↑ comment by orthonormal · 2011-12-27T00:56:06.824Z · LW(p) · GW(p)
Do you have specific questions? You could ask them here, or in the comments of the relevant posts (the age of the thread doesn't matter much, since more people read the Recent Comments sidebar than read any particular post's comments).
Also, on the topic of morality, have you come across lukeprog's mini-sequence?
Replies from: Bruno_Coelho↑ comment by Bruno_Coelho · 2011-12-27T02:50:30.452Z · LW(p) · GW(p)
Yes, I read part of the sequence and a recent post of lukeprog's on his blog. He thinks that much of the language of morality is broken, and that we have to replace it with another, more precise language. In normative terms, decision theory is the best candidate, I suppose, but on the site we have various versions.
comment by dekelron · 2011-12-26T17:25:48.940Z · LW(p) · GW(p)
Hi all,
I'm 25, from Israel. I worked in programming for 4 years, and have recently decided to move on to more interesting stuff (either math, biology, or neurology; I don't know yet).
I'm new to LW, but have read OB from time to time over the past 5 years. Several months ago I ran into LW, (re)read a lot of the site, and decided to stick around when I realized how awesome it is.
Nice to meet you all!
Ron
Replies from: MichaelVassar, FAWS↑ comment by MichaelVassar · 2011-12-27T09:58:34.067Z · LW(p) · GW(p)
Israel seems like a natural place for LW. Any thoughts on why the memes haven't gotten more traction there yet?
Replies from: erratio↑ comment by erratio · 2011-12-28T01:37:40.178Z · LW(p) · GW(p)
Very naive guess: people in Israel live in constant high proximity to the two biggest mindkillers, religion and politics/nationalism, both of which have serious and immediate real-world consequences for them.
Replies from: Ezekiel↑ comment by Ezekiel · 2011-12-30T00:09:47.460Z · LW(p) · GW(p)
I'm Israeli, and although my contact with general society in the country is low, I think that's probably a factor. Meme propagation also just takes time.
Replies from: dekelron↑ comment by dekelron · 2011-12-30T09:50:53.524Z · LW(p) · GW(p)
Actually, I doubt it's anything that complicated. In my opinion, the site isn't well known because there are few people to publicize it; it's a loop.
Anyhow, ARE there more LWers from Israel? I would really like it if there was a meetup here.
Replies from: dbaupp↑ comment by dbaupp · 2011-12-31T12:57:01.307Z · LW(p) · GW(p)
Anyhow, ARE there more LWers from Israel?
According to this survey, there are at least 2 people from Israel (from Haifa and Kfar Saba).
↑ comment by FAWS · 2011-12-27T00:27:49.275Z · LW(p) · GW(p)
Now that you have some karma you should be able to post in the discussion section. Please make sure your post doesn't look like a spam ad, though.
Replies from: orthonormal↑ comment by orthonormal · 2011-12-27T00:44:54.094Z · LW(p) · GW(p)
To follow up on what FAWS said, "What are good apps for rationalists?" is a much better title than "Useful Android Apps for the Rational Mind", since the latter sounds like you're trying to sell something to us.
comment by troll · 2012-04-17T20:34:44.716Z · LW(p) · GW(p)
minimalist, 17, white, male, autodidact, atheist, libertarian, california, hacker, studying computer science, reading sequences, intellectual upbringing, 1 year bayesian rationalist, motivation deficient, focusing on skills, was creating something similar to bayesian rationality before conversion, have read hpmor (not intro to lw), interested in contributing to ai research in the future
Replies from: Richard_Kennaway, Oscar_Cunningham, thomblake, jimrandomh, MarkusRamikin, Emile, shokwave, Bugmaster↑ comment by Richard_Kennaway · 2012-04-18T11:20:35.882Z · LW(p) · GW(p)
The Identikit LessWrongian!
↑ comment by Oscar_Cunningham · 2012-04-17T22:31:50.721Z · LW(p) · GW(p)
"Minimalist" is implied by the sparsity of the rest of the comment, and so is ironically redundant.
Replies from: troll↑ comment by thomblake · 2012-04-18T23:56:39.091Z · LW(p) · GW(p)
I'm sure you're aware at this point, but with that description you blend into the wallpaper.
Thank you for creating a comment to link "stereotypical Less Wrong reader". If only you were a couple of years older.
Since you're 17, have you looked into the week-long summer camp?
Replies from: troll↑ comment by jimrandomh · 2013-04-24T17:09:42.102Z · LW(p) · GW(p)
Consider restarting with a different account name. Trolling (that is, trying to provoke people) is not welcome here, and when your username is "troll", people will not (and should not) give you the benefit of the doubt.
Replies from: troll↑ comment by troll · 2013-04-24T21:59:56.110Z · LW(p) · GW(p)
Should not? Why? Obviously I'm not provoking anyone.
Replies from: Vaniver↑ comment by Vaniver · 2013-04-24T22:35:44.585Z · LW(p) · GW(p)
Illusion of transparency seems relevant; even if you know why you picked that username, others can only guess, and their guess should be expected to match their experience, not your private knowledge.
Replies from: troll↑ comment by troll · 2013-04-24T22:40:54.752Z · LW(p) · GW(p)
I expect people to know what a troll is based on cultural knowledge. I expect them to not care due to this being LW.
Replies from: shminux, ArisKatsaris↑ comment by Shmi (shminux) · 2013-04-24T23:08:19.517Z · LW(p) · GW(p)
Consider your second expectation falsified and update on it, as a "bayesian rationalist" would.
Replies from: troll, troll↑ comment by ArisKatsaris · 2013-04-25T08:01:28.929Z · LW(p) · GW(p)
I expect them to not care due to this being LW.
The choice of a name can provide some evidence about whether it's a good-faith account or not; and the name "troll" is providing evidence against. If you told people why you chose that name that might serve to counteract the effect, but I think you've not yet done so... Needing to justify your nick may seem unfair to you, but consider it from the point of view of someone who doesn't know you.
Replies from: army1987, troll↑ comment by A1987dM (army1987) · 2013-04-25T16:52:58.591Z · LW(p) · GW(p)
↑ comment by troll · 2013-04-25T20:50:38.212Z · LW(p) · GW(p)
My standard way of dealing with internet names is to just ignore them completely because they don't provide much evidence/usefulness (unless I want to reference the person) and I want to read the comment anyway. I guess I thought LWers would either not notice my name at all or see it, be a little more suspicious, and read anyway. (not immediately downvote or tell me my name sucks and I should change it)
AFAICT, you're looking at posts anyway, so good/bad natured names shouldn't matter, only good/bad natured writing.
↑ comment by MarkusRamikin · 2012-04-17T21:08:52.890Z · LW(p) · GW(p)
That handle bodes well.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-04-17T22:00:36.184Z · LW(p) · GW(p)
On an elitist gaming forum I used to frequent (RPG Codex), we called such things "post-ironic" (meaning "post-modern as fuck online performance art").
Basically the joke is that everyone gets the joke, and that allows its author to act as if it was no joke, and self-consciously reference that fact - which is the joke.
↑ comment by shokwave · 2012-04-17T21:00:17.317Z · LW(p) · GW(p)
Contrarian?
Replies from: troll↑ comment by troll · 2012-04-17T21:31:29.086Z · LW(p) · GW(p)
No.
Replies from: DSimon↑ comment by DSimon · 2012-04-19T00:13:42.929Z · LW(p) · GW(p)
Anti-contrarian?
Replies from: troll, Multiheaded↑ comment by Multiheaded · 2012-04-19T04:21:23.739Z · LW(p) · GW(p)
Data point: I'm anti-contrarian (well, somewhat) in emotional sentiment, but not in any rationally held principle, and I'm trying not to mistreat contrarians, especially if I'm curious about their ideas. This might be unpleasant to admit, though, as it's basically prejudice.
↑ comment by Bugmaster · 2013-04-24T23:19:35.002Z · LW(p) · GW(p)
You weren't kidding when you said "minimalist". Nicely done.
Replies from: troll↑ comment by troll · 2013-04-24T23:30:22.985Z · LW(p) · GW(p)
I guess a lot of people are interested enough in an account with the handle "troll" to check my first post, but not enough to not consider the name when reviewing posts.
Replies from: Bugmaster↑ comment by Bugmaster · 2013-04-24T23:37:49.672Z · LW(p) · GW(p)
Realistically, when someone replies to one of my posts on some long thread, I don't take the time to click through their handle and find their own intro post. I don't think that doing so is a good use of my time, and I believe that I am typical in this regard. However, I do take the time to read their handle, and if it seems to say "I am not arguing in good faith", I take notice.
This gives me an idea for a new Less Wrong feature, though: allow users to enter a short description of themselves, and display it when the mouse hovers over their handle for a certain amount of time. I know how I'd implement it with jQuery, but I'm not sure how easy it would be to plug into the LW general architecture.
Replies from: Sniffnoy, troll↑ comment by Sniffnoy · 2013-04-24T23:51:47.748Z · LW(p) · GW(p)
I think it would be simpler to just allow people to add a short description of themselves to the user page. (And then maybe later the hovering thing can be added if people want that.)
Replies from: Bugmaster↑ comment by Bugmaster · 2013-04-25T00:11:25.087Z · LW(p) · GW(p)
Agreed; if we had that feature, then we could write the Greasemonkey (or whatever) extension as well, since it would just scrape their user page for the description.
Replies from: gwern↑ comment by gwern · 2013-04-25T01:45:06.464Z · LW(p) · GW(p)
Don't we have that as part of the linked wiki userpages?
Replies from: Sniffnoy↑ comment by Sniffnoy · 2013-04-27T00:26:21.890Z · LW(p) · GW(p)
...huh. OK, how on earth do you set up that "profile" thing you have? I can't find it anywhere in the preferences. I think we need to promote this a bit more.
Replies from: gwern↑ comment by gwern · 2013-04-27T00:27:25.101Z · LW(p) · GW(p)
As far as I know, you just register the exact same account name on the LW wiki, and create your userpage, and it's transcluded over automatically.
Replies from: Sniffnoy
comment by Malevola · 2012-01-24T23:35:33.438Z · LW(p) · GW(p)
Less Wrong,
After lurking for about a week, I decided to register today. I have read some of the Sequences and a good many posts and comments. I am a lifelong agnostic who recently began to identify as atheist. I am interested in rationality for many reasons; primarily, I'd like to learn more about rationality to help me get over my fear of death, a fear that I feel is very irrational yet am unable to shake.
I am 39, female, and a mother. I have lots of college under my belt but no degree. I guess I never really cared about that. I am also a schizophrenic, and that makes rationality quite challenging for me. (Not that it's not challenging for many people.)
I am looking forward to reading more of the Sequences and hope to be able to comment or post in the near future. I am glad I found this site. Thanks for your time.
comment by jswan · 2011-12-26T22:10:07.073Z · LW(p) · GW(p)
I've been lurking here on and off since the beginnings at OB, IIRC, though more off than on. Expressed in the language of the recent survey: I'm a 43-year-old married white male with an advanced humanities degree working on the technical side of for-profit IT in the rural USA. I was raised in a non-theist environment and was interested in rationality tools from an early age. I had a spontaneous non-theistic mystical experience when I was 17 that led me to investigate (but ultimately reject) a variety of non-materialist claims. This led to a life-long interest in the workings of the brain, intuition, rationality, bias, and so on.
I enjoy LW primarily because of the interest in conscious self-improvement and brain hacking. I think that the biggest error I see in general among self-described rationalists is the tendency to undervalue experience. My thinking is probably informed most strongly by individual athletics, many of the popular writers in the rationalist tradition, and a wide variety of literature. These days, I'm nursing obsessions with Python programming, remote backcountry cycling, and the writing of Rebecca Goldstein.
Replies from: orthonormal↑ comment by orthonormal · 2011-12-27T00:48:19.424Z · LW(p) · GW(p)
I think that the biggest error I see in general among self-described rationalists is the tendency to undervalue experience.
There are a couple of things you could mean by this. Can you give an example?
Replies from: jswan↑ comment by jswan · 2011-12-27T03:02:57.987Z · LW(p) · GW(p)
There are indeed a couple of different ways I do mean it, but my best specific examples come from athletics. About eight or nine years ago I started getting seriously interested in long distance trail running. Like most enthusiastic autodidacts I started reading lots of material about shoes, clothing, hydration, nutrition, electrolytes, training, and so on. As I'm sure you've seen, a lot of people on the Internet can get paralyzed by analysis in the face of vast easily available information. In particular, they have a lot of trouble sorting out conflicting information gained from other knowledgeable people.
Frequently, further research will help you arrive at less-wrong conclusions. However, in some endeavors there really is a great deal of individual variation, and you just have to engage in lengthy, often-frustrating self-experimentation to figure out what techniques or training methods work best for you. This base of experience can't really be replaced by secondary research. Where research skill comes in, though, is in figuring out where to focus that secondary research (and this in itself is a skill that is honed by experience). As a friend of mine likes to put it: the best practitioners of [insert skill here] in the world perform almost all components of their skill the same way. They all have weird idiosyncrasies too. The place to focus your research is in the areas they have in common.
Anyway, this is a longer response than I had intended, and undoubtedly this is not new to you; it's just variation on standard cognitive bias. However, I think that deferral of experience and self-experimentation in favor of secondary research (aka, analysis paralysis) is a common bias blind-spot among rationality enthusiasts.
Replies from: orthonormal↑ comment by orthonormal · 2011-12-27T16:15:30.530Z · LW(p) · GW(p)
I agree it's a common failure mode, and that the areas in which I've done cheap self-experimentation and kept notes showed remarkably quick improvement. There are some LW posts expounding the meme of actually trying things, but it's less prominent than it ought to be.
comment by Pesto · 2012-01-06T15:10:19.284Z · LW(p) · GW(p)
I'm a 22-year-old mathematics graduate student, moving to Boston next year.
I was recommended HPMoR by another Boston math grad student, followed the author's notes to read most of the Sequences, and then started following Less Wrong, although I didn't create an account until recently.
I can't say how I came to actually be a rationalist, though---most of the sequences seemed true or even obvious in hindsight when I first read them, and I've always had a habit of remembering "x tells me y is true" instead of "y is true" when x tells me y is true.
I'm signed up for cryonics. (Current probability estimates 90% that it preserves enough information to be reversible, 95% that I'll die with enough notice to be preserved, 50% that humanity'll advance far enough to reverse it, and 70% that CI'll survive that long.)
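Multiplying the four estimates together, assuming for the sake of a rough sketch that they are independent, gives the implied overall chance of revival:

```python
# Joint probability of cryonics working out, assuming the four estimates
# from the comment are independent (a simplifying assumption).
p_preserved   = 0.90  # preserves enough information to be reversible
p_notice      = 0.95  # dies with enough notice to be preserved
p_humanity    = 0.50  # humanity advances far enough to reverse it
p_ci_survives = 0.70  # CI survives that long

p_revival = p_preserved * p_notice * p_humanity * p_ci_survives
print(f"{p_revival:.3f}")  # 0.299, i.e. roughly a 30% overall chance
```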
I'm vegetarian for carbon efficiency and because the animals that produce most of our meat have negative utility from awful conditions. I don't think sentience is the right standard; is there a good past lesswrong discussion about that?
Replies from: orthonormal, Alicorn, beoShaffer↑ comment by orthonormal · 2012-01-06T16:25:09.947Z · LW(p) · GW(p)
I've always had a habit of remembering "x tells me y is true" instead of "y is true" when x tells me y is true.
Impressive if true- the best way to test this might be playing a game like The Resistance...
I'm vegetarian for carbon efficiency and because the animals that produce most of our meat have negative utility from awful conditions. I don't think sentience is the right standard; is there a good past lesswrong discussion about that?
The last one I remember started off with a really confrontational post, and ended up being an angry discussion; I don't think I'll find and link it. I think you could write a better one, and I'd comment on it- I think your points are good reasons to cut back on meat and to strongly prefer small farms over industrial-scale meat (at least for pork, since pigs are the most sentient of our livestock), and I do both of these, but I don't find it worthwhile to go completely vegetarian.
↑ comment by beoShaffer · 2012-01-06T16:16:15.538Z · LW(p) · GW(p)
Welcome to Less Wrong.
is there a good past lesswrong discussion about that?
There is a conversation that touches on it further down in this intro post. There was also a discussion article a while back; I'll see if I can find it.
-edit I was thinking of this but there are actually a ton of results if you just search for vegetarian. This also looks like it might be of interest.
comment by jwmares · 2011-12-27T04:27:12.755Z · LW(p) · GW(p)
I heard about LW from a startup co-founder. I'm 22, in Pittsburgh, graduating college in 4 months and on my 2nd startup. Raised hard-core Catholic, and still trying to pull together arguments from various sources as to the existence of God. The posts on LW have certainly helped, and I'd say I'm leaning towards atheism - though it's been a short journey of only 6 months or so since I've started to question my religion.
I'm very interested in the Singularity movement and how that will shape human philosophy and morality. I've also done some body hacking and started tracking my time, an interest which I think a lot of the LW community shares. Looking forward to becoming more active in the community!
Replies from: orthonormal↑ comment by orthonormal · 2011-12-28T01:23:47.952Z · LW(p) · GW(p)
Welcome!
The best unsolicited advice I have to give is this: your philosophical leanings are immensely sensitive to psychology, and in particular to the sort of self you want to project to the people around you. So if you want to decide one way or another on a philosophical question that's tormenting you, the biggest key is to surround yourself (socially, in real life) with people who will be pleased if you decide that way. If you want to do your best to figure out what's true, though, the best way is to surround yourself with people who will respect you whatever you decide on that matter, or else to get away from everyone you know for a week or two while you think about it.
Good luck!
Replies from: jwmares↑ comment by jwmares · 2011-12-28T06:35:26.545Z · LW(p) · GW(p)
Thanks ortho. I've definitely found that to be the case. I've also struggled to meet moral atheist girls, though a lot of that is also sampling bias (having only been looking for a few months). Interested to see how everything plays out!
comment by rv77ax · 2011-12-27T04:04:07.105Z · LW(p) · GW(p)
Hello LW readers,
Long-time lurker here. Just created this account so I can, probably, participate more in LW discussions.
I'm male, 27 years old, from Indonesia. I work as a freelance software developer. I love music and watching movies. Any movies. Movies are the only way I can detach from reality and have a dream without sleep.
I come from a Muslim family; both of my parents are Muslim. Long story short, after finishing college with a computer science degree, I tried to extend my knowledge of Islam. I read a lot of books about Islamic history, Islamic teaching, Quran commentary, books that explain the hadith and Quran, etc.; every book my parents had. Soon, with the help of the Internet, I renounced my faith and became an atheist. I see rationalism, and philosophy in general, as the way to see the world without passing any judgments. Because, in the end, there is no absolute truth, only facts and opinions.
I know LW from /r/truereddit, and have been reading some of the articles and discussions here; very informative and thoughtful. Probably the only way I can help here is by translating some of the articles, especially the Sequences, into Bahasa Indonesia.
Replies from: cousin_it, orthonormal↑ comment by cousin_it · 2011-12-27T13:13:42.427Z · LW(p) · GW(p)
Because, in the end, there is no absolute truth, only facts and opinions.
Eliezer's essay The Simple Truth is a nice argument for the opposite. The technical name for his view is correspondence theory. A short summary is "truth is the correspondence between map and territory" or "the sentence 'snow is white' is true if and only if snow is white".
Replies from: rv77ax, thomblake↑ comment by rv77ax · 2011-12-28T06:53:36.357Z · LW(p) · GW(p)
Actually, The Simple Truth is one of my favorite essays, and it's not the opposite of my statement. Autrey is the one who works with facts (reality) and Mark is the one who works with opinion (belief). Who jumped off the cliff at the end?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2011-12-31T06:44:04.218Z · LW(p) · GW(p)
I interpreted your comment about no absolute truth to mean something like the objects in the universe having no inherent properties (or at least fewer inherent properties than most might think). Was that what you meant?
Replies from: rv77ax↑ comment by rv77ax · 2012-01-03T18:06:35.486Z · LW(p) · GW(p)
I am not sure I fully understand the Mind Projection Fallacy, but I answered it with: Yes.
The point is that the word "truth" we use in English today is not truth in the sense that everything is true and everyone accepts it as true; only part of it is true, which I call facts, and the rest is just opinions.
↑ comment by thomblake · 2011-12-27T16:42:37.416Z · LW(p) · GW(p)
The technical name for his view is correspondence theory.
If you really want to be technical, I think it would be hard to say whether this view is supposed to be a correspondence or deflationary theory of truth, and some (including the linked article) would regard them as currently at odds.
Personally, I think the distinction is not very important (which is also hinted at in the linked article) and it makes sense to use the language of both. The Simple Truth in particular casts it as deflationary; the shepherd doesn't even know what 'truth' is, and thinks questions about it are silly - he just knows that the pebbles work.
ETA: To be slightly more helpful to readers, here's a relevant section of the SEP article that intends to illustrate the difference:
Replies from: TheOtherDave
A correspondence-type formulation like
(5) “Snow is white” is true iff it corresponds to the fact that snow is white,
is to be deflated to
(6) “Snow is white” is true iff snow is white,
↑ comment by TheOtherDave · 2011-12-27T17:23:43.624Z · LW(p) · GW(p)
One can, of course, get arbitrarily wrapped around the axle of reference here. "The man with a quarter in his shoe is about to die," said by George, who has a quarter in his shoe, shortly before his own death, is true... but most intuitive notions of truth leave a bad taste in my mouth if it turns out that George, when he said it, had not known about the quarter in his shoe and was asserting his intention to kill Sam, whom George mistakenly believed to have a quarter in his shoe. Which is unsurprising, since many intuitive notions of truth are primarily about evaluating the credibility and reliability of the speaker; when I divorce the speaker's credibility from the actual properties of the environment, my intuitions start to break down.
↑ comment by orthonormal · 2011-12-28T01:27:08.745Z · LW(p) · GW(p)
Because, in the end, there is no absolute truth, only facts and opinions.
There are several different things you could mean by this. Do you agree that, outside of human cognition, some things happen rather than others? And also, isn't it practically useful if our expectations are in line with the sorts of things that actually happen?
Replies from: rv77ax↑ comment by rv77ax · 2011-12-28T06:36:58.928Z · LW(p) · GW(p)
There are several different things you could mean by this.
Yes. The big contexts are science and ethics. In science, we work with facts, and from them we develop a hypothesis (an opinion). Someone can accept a hypothesis, and it is taken as true, until it is proven otherwise. In ethics, everything is just opinions.
Do you agree that, outside of human cognition, some things happen rather than others?
Yes. If I can simplify it: only one thing happens outside of our cognition, and it's linear with time.
isn't it practically useful if our expectations are in line with the sorts of things that actually happen?
No. I think that would become confirmation bias.
Replies from: orthonormal↑ comment by orthonormal · 2012-01-01T18:19:04.682Z · LW(p) · GW(p)
isn't it practically useful if our expectations are in line with the sorts of things that actually happen?
No. I think that would become confirmation bias.
So you do accept scientific evidence, then- simple (approximate) models that explain well-verified patterns should be taken as practically true, until their limits are found. Right?
(Otherwise, on what grounds do you cite research about confirmation bias?)
Replies from: TimS, rv77ax↑ comment by rv77ax · 2012-01-03T18:43:51.416Z · LW(p) · GW(p)
So you do accept scientific evidence, then- simple (approximate) models that explain well-verified patterns should be taken as practically true, until their limits are found. Right?
Yes and no; it depends on the context. In reality, some patterns can be taken as practically true and some cannot.
As an example: if I drop something from the top of a building, it always goes down to the ground; this pattern is reproducible with the same result by everyone who tests it. But if I drink hot water when I'm sick and get healthy the next morning, that would be biased, because it's not always reproducible with the same result.
I think it's only a matter of how someone defines the threshold for "well-verified" and "limit" until it becomes true for himself.
Replies from: orthonormal↑ comment by orthonormal · 2012-01-03T21:42:38.656Z · LW(p) · GW(p)
So you're talking about a quantitative difference rather than a qualitative one- we should be far more skeptical about our generalizations than we're inclined to be. A good point in this community, but phrasing it as "no truth" probably communicates the wrong concept.
comment by KwHayes · 2011-12-27T01:17:10.740Z · LW(p) · GW(p)
Hello! I'm male, 20-something, educator, living in Alberta, Canada. I came across LessWrong via some comments left on a Skepchick article.
My choice to become an educator is founded upon my passion for rational inquiry. I work in the younger grades, where teaching is less about presenting and organizing knowledge and more about the fundamental, formative development of the human brain. Because of this, I am interested in exploring the mental faculties that produce "curiosity behaviors" and the relationship between these behaviors and motivation.
I'm a constructivist at heart; I help guide my students to become masterful thinkers and doers by modifying environmental variables around them. Essentially, I trick them into achieving curriculum-mandated success by 'exploiting' their mental processes. In order to do this effectively, I need to understand as best I can the processes that guide human thoughts and behaviors. This is something I have been interested in since I was young - I am fortunate to have found a career that allows me to explore these interests and use my understanding to better my students'.
I've considered myself to be a rationalist since I was 16 or so, and it's hard to trace my motivations to anything declarative. I have always been a disassembler; as a child, I would take things apart and explore them, but I would rarely put anything back together. Instead, I would use my energy to create new things for myself. This probably alludes to something meaningful about my own brain, but I am so far unable to fully illuminate it.
My goal here is to explore the thoughts and ideas of others and construct enduring understandings for myself. It would be great if these understandings can be applied to education, but satisfying and reinforcing my own curiosity will suffice :). My background is weakly academic; I do not have formal experience with many of the theoretical frameworks that I've seen used here, but I feel that my knowledge and experience will allow me to add some value. I'm a debater, a discusser, and a collaborator, so I think I will fit in pretty well. I'm also excited at the prospect of meeting individuals with whom my interests overlap - so far, the chances seem pretty good!
In short: I am an educator, interested in the way that environment and media interact with the human faculties of inquiry and curiosity. My goal is to understand how these faculties influence motivation, and eventually learning. I am also concerned with the ways that we define all of the above words, and especially what teachers sometimes thoughtlessly call "intelligence." I hope to one day develop a more clear framework of learning as it relates to cognitive processes.
Replies from: NancyLebovitz, cousin_it, Curiouskid↑ comment by NancyLebovitz · 2011-12-27T01:30:44.137Z · LW(p) · GW(p)
Welcome! I hope you'll post about some of the specific methods you've used with your students.
↑ comment by cousin_it · 2011-12-27T13:07:41.534Z · LW(p) · GW(p)
I am interested in exploring the mental faculties that produce "curiosity behaviors" and the relationship between these behaviors and motivation.
This interests me because my small experience with teaching kids suggests that curiosity is indeed the bottleneck resource. Please post about your experiments and conclusions.
↑ comment by Curiouskid · 2012-01-02T00:15:24.836Z · LW(p) · GW(p)
You should talk to daenerys; she's also an educator of the young.
comment by OrdinaryOwl · 2012-04-15T02:26:29.986Z · LW(p) · GW(p)
Hello Less Wrong!
I am a twenty-year-old female currently pursuing a degree in programming in Washington State, after deciding that calculus and statistics were infinitely more interesting to me than accounting and economics. I found LW via HPMOR, and tore through the majority of the Sequences in a month. (Now I'm re-reading them much more slowly for better comprehension and hopefully retention.)
I wish to improve my rationality skills, because reading the Sequences showed me that there are a lot of time-wasting arguments out there, and I want to spend my time doing productive, interesting, and fun things instead. Also, I've always enjoyed philosophy, so finding a site that uses scholarship and actual logic to tackle critical issues was amazing.
Other defining things about me: I like cooking, folding origami, playing video games, and reading science fiction, fantasy, and history books. I struggle with procrastination and akrasia. I look forward to self-improvement!
comment by WhiskyJack · 2012-04-06T15:11:42.807Z · LW(p) · GW(p)
Howdy,
tl;dr This seems like a place that I can use to shore up some of my cognitive shortcomings, eliminate some bias and expand my worldview. Maybe I can help someone else along the way.
I have been reading the material here for the last several days and have decided that this is a community that I would like to be a part of and hopefully contribute to. My greatest interests are improving my map of the territory (how great is that analogy?), using my constantly improving map to be a better husband and father, and exploring transhumanist ideas and conceits.
I came to be a rationalist when I started reading somewhat milquetoast skeptical literature. Having been raised religious and having served in the Marine Corps I have found that I have a tendency to allow arguments from authority too much credence. If I am not careful I can serve as quite the dutiful drone.
It became important over the last few months that I be able to do as much of my own philosophical and scientific legwork as possible. If an author or speaker that I enjoy espouses ideas I am inclined to agree with it is vital (in my estimation) that I either be able to verify the information presented myself or locate reliable independent verification. This is the type of thinking that I feel I owe my wife and son. LessWrong seems like it aligns well with that ideal. Bias and gullibility kill.
The religious arguments were fun at first, but have become boring. The issue is resolved to my satisfaction. I tend to approach things scientifically instead of philosophically. I struggle to grok philosophy. I think that means I need to redouble my efforts there. My maths could use work, but aren't as sorry as some folks'. I get algebra and have survived a few classes in statistics. Keyword: survived.
I am slowly chewing my way through the sequences and learning a good bit. I'm not the fastest thinker, so I will have to read some of them a few times to get the ideas involved. Some of the quantum ideas seem wildly exotic, but that just means I am going to have to really brush up on my physics... of which I have none. I'm not about to make an argument from incredulity there. I don't know enough to HAVE an opinion yet.
I used to read Common Sense Atheism and I find myself now thinking, "Ah, this is what Luke was going on about." There is some pretty cool stuff here and I look forward to contributing what I can.
Replies from: TimS, Vaniver, Larks↑ comment by TimS · 2012-04-06T15:51:31.210Z · LW(p) · GW(p)
Welcome to LessWrong. One of the most interesting parts of LessWrong for me is noticing the cognitive biases in our thought processes. For example, noticing that one dislikes another solely because the other is a member of a different group. (Psychology calls this the in-group bias.)
Noticing those sorts of mistakes doesn't necessarily require all that much mathematical ability. The hope in this community is that clear thinking helps you achieve your stated goals (rather than some inaccurate approximation created by unclear thinking from the imperfect brain). In short, don't sweat the math; there's lots of practical stuff that can be achieved without it. If you are particularly interested in improving your self-awareness, might I recommend Alicorn's Luminosity sequence?
Replies from: WhiskyJack↑ comment by WhiskyJack · 2012-04-06T17:46:39.240Z · LW(p) · GW(p)
Thanks! I consider myself more self-aware than most, largely because I have done work similar to what is proposed in the Luminosity sequence myself. Of course, interesting arguments could be had about how subjective the experience is, and what 'self' I am even trying to be aware of (would that just be semantic?), but the result was a positive net gain in my quality of life. I'm curious to try the work with different techniques, though.
It will be interesting to see if the concept I hold of myself as pretty self-aware survives around here. All part of the process, I suppose.
As far as the math... If I don't try I definitely won't learn it. It will be a struggle, though.
↑ comment by Vaniver · 2012-04-18T00:28:17.534Z · LW(p) · GW(p)
Welcome!
My background is in physics and mathematical optimization techniques, so I'm quite interested in what perspective people without those skills have on the sorts of thinking and strategies we talk about on Less Wrong. Knowing what [inferential gaps] we missed is really useful to writers and educators. Don't be afraid to ask questions.
Or, if it comes to it, to let sleeping theories lie. Lots of posters here don't finish all of the sequences, or avoid the more esoteric decision theory posts.
↑ comment by Larks · 2012-04-18T00:11:38.731Z · LW(p) · GW(p)
Did you burn any bridges while in the Marines?
Replies from: GuySrinivasan, Vaniver↑ comment by SarahSrinivasan (GuySrinivasan) · 2012-04-18T00:37:19.310Z · LW(p) · GW(p)
You said you were religious before serving... were you by chance a Mason? :D
comment by Kevedes · 2012-07-25T17:48:39.749Z · LW(p) · GW(p)
null
Replies from: None↑ comment by [deleted] · 2012-07-25T18:53:19.108Z · LW(p) · GW(p)
I know after reading this post, one of the first things I thought was that I wanted to read the article you mentioned. So I went and found the article and have linked it below in case any one else wanted to read it as well.
Thanks for referencing it!
Replies from: thomblake
comment by Elithrion · 2012-04-02T21:17:45.052Z · LW(p) · GW(p)
Hello there!
I think I first saw LessWrong about three years ago, as it frequently came up in discussions on KW, the forum formerly linked to the Dresden Codak comic. This makes mine one of the longer lurking periods, but I've never really felt the urge to comment on the posts being discussed here, preferring to talk about them elsewhere when I felt the need. All this changed when Alicorn told me that when I was asked to make a post relevant to LessWrong that meant I actually had to post it on LessWrong (a revelation which I should have probably anticipated). So it has come to this.
The simplest place to start describing myself is by saying that I'm the type of person that skims through the 200 most recent comments to see which ones are well liked before writing anything.* In real life terms, I've finished up my bachelor's degree in December, after making various errors. Unfortunately, with it finished, I have discovered that I lack motivation to pursue a standard career, since just about the only things I find myself caring about are stories, knowing the future (in the general, not the personal, respect), and understanding things, particularly things related to people. (This is probably not normal for a human, but I can't say I mind it.) Fortunately, these things are fairly similar to the things LW is interested in, so it shouldn't be a problem!
These atypical weights in my utility function do, however, leave me with opinions that I think are largely a lot "darker" than the typical poster (and I don't just write that for sexy bad-boy appeal). For example:
- I think utilitarianism is a terrible system to base anything on, and is basically what you adopt if you want to say "I think being nice is good" and want to make it sound like a well-reasoned ethical system. I'd like it better if you just said "I think being nice is good".
- I think democracy and equality under the law merely look like good ideas because we don't yet have the computational power to implement actually good ideas of which these are at best extremely simplified approximations.
- I think that seemingly obvious statements such as "we are all agreed that [it] is wrong to kill people (meaning, fully conscious and intelligent beings)", from a highly rated comment by Alejandro1 down the page, are not very obvious and require serious justification. I think there are cases in our world where it is completely acceptable to kill people (although admittedly he probably meant his comment to apply only to a very specific subset of killing people), and there are many possible worlds where such cases would be far more frequent.
Well, the first two of those don't even have much to do with my personal preferences. And yet, I'm not a scary person, I promise! While maybe my utility function makes it easier for me to accept these conclusions, the overwhelming majority of my beliefs actually arose from oodles of thinking about the topics, and they are just things that I think are true, regardless of whether I want them to be true or not. That said, when the enraged zealots come for us, I'm pretty sure I'm going to be one of the first to burn at the stake! I also wish that using smiley faces was more acceptable here, since I would not mind adding an equals sign-three one to the end of that sentence to convey the intended mood a little better.
Well, this has already gone on too long, but I hope you were not too bored. I might as well mention that at the moment, I'm trying to write a realistic post-apocalyptic novel (where the recovery has set in enough that they're ahead of the previous all-time high), and applying for a Center for Modern Rationality helper position, since I think these things are interesting, and I'd like to explore them before moving on to uninteresting survival strategies if necessary.
Bye for now, and I hope we have illuminating conversations together!
*If you're curious what I found, here are the general conclusions (although some of these are fairly low confidence):
- introductions that include the person's real name are a little bit better liked, but not significantly
- there is no particular correlation between length and upvotes
- most introductions reach a rating of 5 over time, even if they're relatively content-free
- including something that praises LW or HPMoR or the community has a small positive correlation with upvotes
- introductions which trigger responses of any sort are generally upvoted more (not surprising since they're more visible and overall upvotes per view seem almost universally positive)
- introductions that describe something fairly unique get noticeably more upvotes
- general good writing style helps (big surprise there)
- posts that primarily promote something unrelated to introductions are rated lower
- mentioning having a PhD or other real-world qualifications seems to be fairly karma-neutral
- other minor things I have even less confidence in
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-02T22:38:55.657Z · LW(p) · GW(p)
I think democracy and equality under the law merely look like good ideas because we don't yet have the computational power to implement actually good ideas of which these are at best extremely simplified approximations.
Yeah, probably. Mainly because it seems likely to me that almost any system in place has better, more optimal alternatives which we don't have the computational power to implement. It is a useful statement in some ways, if only to distinguish ideological, "this-is-sacred", versus instrumental, "this is the best we can do so far" types of beliefs. However, a more useful statement would compare democracy and equality to all the other options that require the same computational power or less.
"we are all agreed that [it] is wrong to kill people (meaning, fully conscious and intelligent beings)"
I unpack this statement to mean that, all other circumstances being equal, it's preferable to accomplish your goals in a way that involves not killing conscious beings. This isn't obvious, really, but it's intuitive to humans, who are generally conscious beings who don't want to be dead and who can empathize with other conscious beings and assume they also don't want to be dead. It's not obvious, I guess, that someone else's consciousness, which I can never experience directly, is comparable in value to my own consciousness, which I experience continually... I find myself unable to break it down any further, though, so I think I must take this as an axiom of my ethical system. Humans have specific brain sub-systems in charge of empathy, which likely evolved for reasons of social cohesion and its survival advantages, and I'm not sure you can break morality any further down than that... but saying those words doesn't cancel the empathy modules either. Empathy would make it hard for me to justify choosing to kill a conscious being right in front of me, and some desire for symmetry or fairness or universality makes my brain want this to be the case everywhere, for all conscious beings, not just those ones immediately in front of me whose life is in my hands. I don't want someone else a thousand miles away to start killing people either, because [insert axiom] their consciousness is equal in value to mine, thus in a different possible world I could be them, and I really don't want to get killed. Thus it's wrong.
Make any sense?
since just about the only things I find myself caring about are stories, knowing the future (in the general, not the personal, respect), and understanding things, particularly things related to people.
I've started caring about these things much less since setting out on the process of establishing a standard career. It might be caused by years of working too much while studying full time, and the resulting burnout, or just from having to cram a lot of career-relevant stuff into my head and thus having less room left over for bigger ideas. It might also just be from getting older–during the past few years, I've studied a lot and worked a lot, but I also aged up from adolescence to young adulthood, with the accompanying changes in brain development. I would say be warned, though–forcing yourself to focus on something specific might cause you to lose some of your general curiosity.
I might as well mention that at the moment, I'm trying to write a realistic post-apocalyptic novel (where the recovery has set in enough that they're ahead of the previous all-time high
Sounds fascinating. I'm not sure I've read any post-apocalyptic novels where the current level of development was higher than that before the apocalypse, which is what I'm interpreting. I've completed what I guess could be called a post-apocalyptic novel, though some realistic-ness was compromised in the name of a more exciting and compelling narrative. Best of luck!
Replies from: Elithrion↑ comment by Elithrion · 2012-04-03T03:25:09.140Z · LW(p) · GW(p)
However, a more useful statement would compare democracy and equality to all the other options that require the same computational power or less.
This is definitely true. That said, I actually do have at least two systems that I prefer to democracy that are implementable at current processing power levels (they might have somewhat higher needs than democracy, but nothing huge). Equality probably actually does require a lot of processing power to shift completely. However, it is conceivable that we could benefit from creating additional classes of citizens with widely different rights (currently we have children and the mentally ill in this category), although I have not thought about that too much, so I'm not sure if we actually would or not.
I unpack this statement to mean that, all other circumstances being equal, it's preferable to accomplish your goals in a way that involves not killing conscious beings.
Sorry, it was probably bad of me to quote without context. What he actually meant (in my interpretation) was that it is clear that it should be illegal to kill adult human beings, which was part of his argument that it should be illegal to kill infants (search it if you want the full context), so it is with this claim that I took exception. Certainly, I would agree that if all else is equal (a premise that is almost never true, unfortunately), it would be better not to kill people than to kill people. In particular, I think the reason that some view it as possibly okay for parents to kill infants is that the status of infants is close to that of property or pets of their parents. It is here that the analogy breaks down, because our current society does not have adults as pets or property of other adults. However, I think such a situation would be perfectly acceptable - for example, it should be legal for me (in full possession of my faculties, without coercion, etcetera) to sign over to someone else the right to kill me if he or she so chooses. After such a contract is made, I believe it should be completely legal for them to kill me if they wish it. Additionally, we already implicitly provide such rights to any state we enter with some conditions attached (I use a social contract approach here, which is not to indicate I endorse social contracts) - they can kill us if we violently and dangerously resist the police, in some places if we break the law in certain ways, and further the state transfers the right to kill us to private citizens if we attack them and sometimes in other instances. As such, there are indeed many cases when killing people is deemed acceptable and proper, and I think most of these instances are not outrageous.
I'm not sure I've read any post-apocalyptic novels where the current level of development was higher than that before the apocalypse, which is what I'm interpreting. I've completed what I guess could be called a post-apocalyptic novel, though some realistic-ness was compromised in the name of a more exciting and compelling narrative.
Yep, you're interpreting that correctly. Mostly, the apocalypse is an extremely well-justified excuse for a big shake-up of society without massive technological progress. To be honest, I like fantasy better than science fiction in general, since it explores societies more than it does technology, and I think that is much more appealing in a novel. So, I'm trying to sort of get the best of both worlds - a character driven story exploring interesting societal patterns, and a setting that is somewhat familiar to anyone who knows the modern world, as well as makes them think about where we might head. Although I'm not sure to what extent this thoughtful motivation sprung up after I had a story idea I really liked, which is what really triggered novel writing inspiration. We'll see how it goes anyway, and thanks for the interest!
↑ comment by Zaine · 2012-04-02T21:57:38.626Z · LW(p) · GW(p)
Hello! I'd welcome you, but I can't honestly represent anything or anyone besides, well, me (I'm a complete neophyte). Really, my interest was quite piqued by your thoughts on Mr. Bentham's philosophy, as they happen to be the exact opposite of the conclusion I came to - namely that utilitarianism is essentially for people who think, "Things could be so much better if I ran things." The main logical process that led to this conclusion was: People aren't being logical < If they were logical, they would consider the probability of the net good of an act, and only act if the probability was very high, or just above normal but still low risk < What about contentious issues, based upon value systems? Who would make the call on those? < ___.
On that last step I've never really made any progress, as it seems no matter how objective (I consider this word to include the consideration of emotions) and rational you are, on the contentious issues that have no... *
- ... Sorry, I just had a thought. I remember reading somewhere that for things that have no right or wrong, after the collective evidence has been weighed for accuracy, legitimacy, credibility, etcetera, the option(s) with the greatest probability of truth should (as a rationalist) be treated as true for the time being; if some new evidence tips the scale in the other direction, so follows the belief. This... means no religion could be rationally considered true - as of now, at least. Thus any governmental system based upon utilitarianism would only tolerate religion insofar as it affects the emotional welfare of its citizens. And if either 3,000 innocents or 3 brilliant, Nobel-prize-winning, humanity-revolutionizing genius scientists absolutely had to die - all other possible and impossible avenues having been taken and failed - then it would come down to the probabilities of each possibility's net good (utility) when deciding which to pick.
I suppose I made a little bit of progress there, so thank you for the kick - but you can see, I hope, how I think utilitarianism is embraced by people who think on the opposite pole of "I think being nice is good". I don't think it's embraced only by those who think they should be running things anymore, though. That changed since the beginning of the post, and since this was a bit about your thought processes in coming to your conclusion, I've kept mine.
Cheers!
*The following bracketed fragment completes the thought I was going write, before I cut off the sentence and started from "... Sorry"; it was written ex post facto: [right or wrong, it comes down to the individual value system of the decider(s).]
Replies from: Elithrion↑ comment by Elithrion · 2012-04-02T23:19:45.489Z · LW(p) · GW(p)
I think your thought process brings up a few different aspects of evaluating ethical philosophies, and disentangling them would be very helpful.
First, I certainly agree that there are probably people out there that reach utilitarianism through a process of motivated cognition - they want to be in control, and the reason they use (perhaps even to themselves) to make that sound better is that it would be for the good of everyone. However, I also think that there are many other people out there who grew up believing that good is what we should strive for and that the way to do that is to aim for the greatest benefit of the greatest number of people. These types of people might then reach for utilitarianism not to justify actions they wanted to perform already, but rather as what they perceive as the closest complete ethical system to their previous objectives.
While the former group of people merely use utilitarianism as an excuse (even if they believe they believe in it), it is actually the latter group whose reasoning I am generally more concerned about. Whereas the dictator types will do what they planned to do anyway, the forces of good types are vulnerable to taking utilitarianism too seriously, and doing such things as, for example, thinking that maybe it's okay to sacrifice one human life if it will save one million ants (without considering ecosystem impacts), which I do not think is a thought that would have ever arisen from their core belief system. Which is not to say that all utilitarians would agree with that trade-off, but I have seen some who seem like they would, and that is just one minor example of the many problems I have with the idea.
The other point I wanted to bring up is that utilitarianism is really a system for general thinking, even if one likes it, not for immediate real-world implementation. Indeed, it is unimplementable, in the mechanism design sense. So the only way (that I can think of) that you could put it into practice in the real world is to have a strong AI (or equivalent) build detailed models of everyone (possibly involving brain scans) and implement a solution based on those (as otherwise, any implementation would suffer from participants refusing to tell the truth about their utility functions). So, the question of "how would contentious decisions be made?" is fairly unanswerable, except through accepting some deviation from utilitarianism.
I hope that helps crystallize your thoughts a little bit.
Replies from: TheOtherDave, Zaine↑ comment by TheOtherDave · 2012-04-02T23:58:46.966Z · LW(p) · GW(p)
That said, where there exists a measurable difference between an implementable approximation of utilitarianism and an implementable approximation of some other moral principle X, then it makes sense to consider oneself a utilitarian or an Xian even if one is, as you say, accepting deviations from utilitarianism or X in order to achieve implementability.
↑ comment by Zaine · 2012-04-03T02:10:16.977Z · LW(p) · GW(p)
Thank you! I'd never really thought of that other (the latter) approach to utilitarianism; that explains a lot.
Nitpick: The use of 'crystallize' in regard to 'thoughts', I think, would only be recommendable when describing a particularly desirable thought process. I understood crystallize to mean elucidate, in this context, but cause for confusion is there.
↑ comment by TheOtherDave · 2012-04-02T21:35:20.582Z · LW(p) · GW(p)
Welcome! FWIW, your thoughts about democracy, equality under the law, and the utility of killing people are not uncommon around here. Possibly your thoughts about utilitarianism are as well, although it depends rather a lot on what you consider a better system to base anything on, and on just what you mean by "utilitarianism".
Replies from: Elithrion↑ comment by Elithrion · 2012-04-02T22:42:25.224Z · LW(p) · GW(p)
Well, at the very least, I am fairly confident that my particular conclusions about what alternative systems I prefer are not common. As evidence of deviation from the mean, I find myself more in favour of legal infanticide (or even filicide depending on your preferred age ranges for each word) than the most pro-infanticide positions expressed in that big debate down below, which in my case is merely a quick consequence of other, possibly more unusual, positions.
Maybe I'll do a summary in the actual discussion area when I feel up to it, or if people are genuinely curious as to what my positions are.
comment by windmil · 2011-12-27T05:11:40.635Z · LW(p) · GW(p)
Hello all.
I've been lurking around here and devouring the sequences for about two years now. I haven't said much because I rarely feel like I have much that's useful, or I don't feel knowledgeable about the subject. But I thought I might start commenting a bit more.
I'm 19, in Florida and studying engineering. I really want to do something that will bring the world forward in some way, and right now that has me pointed at trying to put my personal effort towards nanotechnology. For now though I'm just trying to win classes and learn as much as I can.
Not too much more than 'hi', but there it is.
Replies from: orthonormal, khafra↑ comment by orthonormal · 2011-12-28T01:34:56.138Z · LW(p) · GW(p)
Welcome!
I really want to do something that will bring the world forward in some way, and right now that has me pointed at trying to put my personal effort towards nanotechnology.
Although I disagree that this is the best direction for marginal technological development (in particular, I don't know if we're smart enough to not do nanotech horribly wrong), I expect you'll learn some extremely important things in the process of studying...
Replies from: windmil↑ comment by windmil · 2011-12-28T03:53:18.303Z · LW(p) · GW(p)
It might not be. Of course I don't feel like I'm on track to help suddenly make atomically precise, self replicating nanomachines. But it would be nice to get closer to some mechanically precise manufacturing, or just certain better materials for some applications. Also I could make some money.
I am an early engineering undergrad, so right now I'm mostly taking intro to anything at all classes and not doing any real work. I wouldn't be surprised if I changed directions at all.
↑ comment by khafra · 2012-01-03T20:48:19.056Z · LW(p) · GW(p)
Good to meet you. AFAIK, since molybdenumblue and one other whose name I can't recall left, _ozymandias, you, and I are the only people here willing to admit to being Floridians. I'm a bit south of you, in Tampa Bay.
edit: Heh, due to my terrifyingly slow computer, I noticed and added _ozymandias in a spacelike interval to your reply. Internet special relativity.
Replies from: windmil
comment by blob · 2012-04-25T14:57:10.474Z · LW(p) · GW(p)
Hello!
I'm a mathematician and working as a programmer in Berlin, Germany. I read HPMOR after following a recommendation in a talk on Cognitive Psychology For Hackers and proceeded to read most of the sequences.
Reading LW has had several practical consequences for me: Spaced repetition and effective altruism were new to me. Things have also improved around social skills, exercise and nutrition.
I'm also part of a small Berlin LW meetup: spuckblase and I have met twice - and now we've been contacted by two other Berlin-based lurkers, which prompted the creation of a wiki entry and a mailing list. We're now planning the first meetup that will actually get a meetup post and be announced in advance.
Replies from: gwern↑ comment by gwern · 2012-04-25T17:05:14.574Z · LW(p) · GW(p)
Spaced repetition is awesome for memorizing things I value.
Such as?
Replies from: blob
↑ comment by blob · 2012-04-25T20:46:17.601Z · LW(p) · GW(p)
I have decks for:
English vocabulary. I've learned many new words and sometimes get an explanation for a word I had only inferred the meaning of from the context - and guessed wrongly.
Family facts, mostly birthdays. It's a minor thing really, but I used to not know how old everyone is. And more than once I felt bad when someone asked about the age of a parent and I had to say 'no idea'.
Random facts I've looked up several times before or that I don't want to have to ever admit not knowing. Like the age of the solar system, the first few digits of Euler's number, approximately when Newton lived. This kind of thing.
What I wish I had a deck for: Math. I really enjoyed doing math and am sad that I'll forget most of the definitions and theorems now that I don't use them regularly anymore. I've tried converting my lecture notes into flashcards, but it's a lot of work that I'm not motivated enough to do.
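blob's flashcard project is a good excuse to show how small the core of a spaced-repetition scheduler actually is. Below is a minimal sketch of the classic SM-2 update rule (the algorithm that tools like Anki descend from); real implementations add interval fuzzing and many options, so treat this as illustrative only:

```python
# Minimal SM-2-style update: given the current interval (days), ease factor,
# and repetition count, plus a 0-5 self-rated recall quality, return the
# next (interval, ease, repetitions). Illustrative sketch, not Anki's code.
def sm2_update(interval, ease, repetitions, quality):
    if quality < 3:
        # Failed recall: see the card again tomorrow and restart the streak.
        return 1, ease, 0
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    # Ease factor rises on easy recalls, falls on hard ones, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval, ease, repetitions + 1
```

A card recalled successfully a few times quickly moves from daily review to gaps of weeks, which is what makes maintaining large fact decks cheap.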
comment by Hermione · 2012-02-23T13:54:42.303Z · LW(p) · GW(p)
Hi there. I'm Hermione (yes, really). I went to my first LW meetup recently and I'm now working on the Rationality Curriculum, so it feels like time to introduce myself and start getting involved in discussions.
There are a lot of things I'd be interested in talking about. I only found LW a couple of months ago so I'm trying to level up in rationality and work out how to teach others to do so at the same time. I'll probably be posting about this and asking for advice. Has anyone written about their experiences of reading the sequences for the first time? Should I try and absorb things really quickly, or is it better to take it slowly, and if so, what comes first? That kind of thing.
I've also been inspired by Alicorn's Luminosity sequence and have been piloting a beeper experiment, Mihaly Csikszentmihalyi style. In order to understand myself and my moods better, I've been recording what I'm doing and how I feel at random times (3x/day). I'd like to improve the indicators I've been using. I struggle to get the right balance between quantitative (more analysable) and qualitative (more accurate). Any suggestions?
Finally, I'd really like to meet some more rationalists in person, so please PM me if you're in Brussels!
Replies from: gwern, Kevin, thomblake
↑ comment by gwern · 2012-02-23T17:24:59.560Z · LW(p) · GW(p)
I'd like to improve the indicators I've been using. I struggle to get the right balance between quantitative (more analysable) and qualitative (more accurate). Any suggestions?
I am slowly setting up a self-experiment with lithium focusing on mood, so I'm interested in the same question. Seth Roberts suggested I rate my mood on just a 0-100 scale as opposed to the 1-5 I was using; I suggested using the Brief POMS as an apparently standard mood rating tool (and used in previous lithium studies) but I haven't heard back.
Replies from: Hermione
↑ comment by Hermione · 2012-02-27T21:32:55.374Z · LW(p) · GW(p)
Thanks. My problem seems to be along the lines of "well, I'm happy about x but simultaneously anxious about y and kind of stressed because I only just met my deadline for blah..., so what does that aggregate to?"
I'm not sure how increasing the scale would help with that, but I followed the link to the POMS stuff on your website; I reckon something similar could be a good solution, though probably with different moods.
Replies from: gwern
↑ comment by gwern · 2012-02-28T01:29:27.435Z · LW(p) · GW(p)
Well, if each axis of happiness / anxiety / stress is equally important, then the happiness gets canceled out. And you'd wind up with a score indicating as much on the POMS.
This seems sensible to me. If the happiness wasn't being canceled out by the other two, would you really be feeling 'kind of stressed'? Wouldn't you be feeling a kind of relief or smugness - 'ha, beat the deadline again!' - or feeling of accomplishment - 'go me!' - or something positive like that?
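The cancellation gwern describes can be made concrete with a toy signed sum (the axis names, equal weights, and sign convention here are illustrative assumptions, not anyone's actual protocol):

```python
# Toy aggregate mood score: positive axes add, all other axes subtract.
# Equal weighting and these particular axis names are assumptions for
# illustration only.
def net_mood(ratings, positive=("happiness",)):
    return sum(v if axis in positive else -v for axis, v in ratings.items())

# Happy about x (4) but anxious (2) and stressed (2): the happiness
# is exactly canceled out, leaving a neutral aggregate.
print(net_mood({"happiness": 4, "anxiety": 2, "stress": 2}))  # prints 0
```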
↑ comment by Kevin · 2012-02-23T14:11:13.731Z · LW(p) · GW(p)
Hello there! With regards to better understanding your moods and indicators, I'd suggest a bit of noting meditation, or at least adding some of the different kinds of things to note to your vocabulary of moods and indicators.
http://kennethfolkdharma.wetpaint.com/ Just see the lists from "First Gear".
Replies from: Hermione
↑ comment by Hermione · 2012-02-27T21:48:40.084Z · LW(p) · GW(p)
At the moment I'm looking for something that can be done with half a brain when busy, since the beeper study interrupts me a lot. Meditation in any form seems to require quite a big investment before it yields results. Thanks for the link, though.
Replies from: Kevin
↑ comment by thomblake · 2012-02-23T14:47:38.817Z · LW(p) · GW(p)
Welcome! Note that there are some references to "Hermione" on this site and they are probably about that other person.
Should I try and absorb things really quickly, or is it better to take it slowly, and if so, what comes first?
As a general comment, remember The Art must have a purpose other than itself. Don't assume you're more rational because you know some bias names or feel more rational. Make sure it's making a difference in your life, and if possible do that via systematic empirical study.
Replies from: Hermione
↑ comment by Hermione · 2012-02-27T21:10:52.048Z · LW(p) · GW(p)
Hmm, thanks, that makes sense. But do you have any suggestions for indicators that would measure if I'm improving?
Replies from: thomblake
↑ comment by thomblake · 2012-02-28T14:22:57.462Z · LW(p) · GW(p)
Sorry, I'm just here ironically to recite empty platitudes about empiricism.
But seriously, figuring out how to know that is one of the big projects here.
Replies from: Hermione
↑ comment by Hermione · 2012-02-29T13:21:15.131Z · LW(p) · GW(p)
hah. Has anyone made any progress?
I was wondering if one could test group rationality by starting a conversation about something the group finds it hard to agree on. There are a few such topics here on LW, and I'm sure there would be more if you added politics into the mix. The test would be to see whether the group could reach unanimity. I was thinking this might be a fun thing to try at the Brussels meetups if they get going.
Replies from: beoShaffer
↑ comment by beoShaffer · 2012-02-29T16:45:13.476Z · LW(p) · GW(p)
Unfortunately, the set of articles with the tag "verification" doesn't correspond perfectly to the articles that would be relevant here, but it's close, and generally too broad rather than too narrow. http://lesswrong.com/lw/2s/3_levels_of_rationality_verification/ and the rest of its series are probably the most important.
comment by camie0626 · 2012-01-05T11:05:13.505Z · LW(p) · GW(p)
Hi people :) I'm 16, from France and the Philippines, going to a Christian boarding school. Um, I met a guy on Omegle... he gave me a link to this website after a conversation about Christianity. Long story short, I'm confused. Maybe someone would like to help me get my head straight?
Replies from: Baughn, Anubhav
↑ comment by Baughn · 2012-01-05T12:17:19.725Z · LW(p) · GW(p)
Sure~!
Though for a starter, what in particular are you confused about?
You might want to start by skimming Making Beliefs Pay Rent and Belief in Belief, which, lacking evidence to the contrary, I believe are most likely to be helpful.
↑ comment by Anubhav · 2012-01-07T09:54:08.963Z · LW(p) · GW(p)
The guy who sent you here... That would be me.
Baughn's links are a nice place to start. For the 'Ever wonder why we're here?' question, you should probably see Mysterious Answers to Mysterious Questions. It doesn't answer that, but I think it's vital if you're ever to find a satisfying answer.
And if you think, even a bit, that it'd all be pointless if God or Jesus had never existed... You should read Explaining vs Explaining Away and Joy in the Merely Real. Everything that's beautiful about the world is beautiful no matter what!
Of course, you're not going to buy all this straightaway, and that's fine.... Just leave yourself a line of retreat for now. (And that's another article you should read, especially if all of this is beginning to feel overwhelming.) But don't just rationalise all of this away-- it's an easy trap to fall into (and some of your friends have already fallen into it, from what you were telling me), and it's kind of pointless if your doubts end up just 'confirming' everything you'd already believed.
comment by [deleted] · 2011-12-30T20:11:32.168Z · LW(p) · GW(p)
Hello LW community, my name is Karl, but please call me MHD for short; here's a lot of sentences beginning with "I..." :
I am a 19 year old, slightly gifted individual, male of gender and psyche, bi, hard to define my preferred relationship structure; honestly my gonads and sexual preference are mostly irrelevant here.
I came here by way of HPMoR and was pressed to do some serious reading by my good friend, known around here as Armok_GoB.
I have, at the time of writing, read the sequences MaT and MAtMQ along with some non-structured link-walking, and I'm looking to read Reductionism next. My attitude is so far positive, but I read it with a healthy dose of sceptical afterthought and note-taking to verify that it really does make sense. You see, my native language is not English, and I have read a study suggesting that one is more gullible when communicating in a non-native language.
My mind is built for logical thinking and I have a knack for mathematics, physics and language. I know approximately 12 Turing-complete programming languages (C-likes, LISPs, the ML family, SmallTalk-esque, assembly) reasonably well. I am looking into tensors, Bayesian probability, formal logic, type theory, quantum physics, relativity, human psychology, Lojban and some other stuff.
Armok tells me that I am very susceptible to basilisk material; I one-box (eff me! bad error to switch those around, sorry), and I tend to fall for the Planning Fallacy and the Transparency-thingey. I am probably genetically predisposed to mild mental illness and I know from personal experience how bad a Death Spiral can really get.
I am a devoted materialist, I hate not understanding things (or at least knowing how to learn how it works), but I tend not to go into too much depth with everything; I know a bit of many topics.
I am not a fan of cryonics because I know that freezing, regardless of method, is a very good way to destroy tissue; and I would like to see some more evidence towards what actually constitutes memory and other brain-related stuff, so as to make sure the freezing method doesn't wreck it, before I buy into it.
I do some creative writing and I like sci-fi. I lack time to read as well as a book budget, another point which Armok has called me out on.
I think that's about what's relevant. Happy New Year LW!
Replies from: Vladimir_Nesov, wedrifid, Vaniver, jsteinhardt, windmil
↑ comment by Vladimir_Nesov · 2011-12-30T22:55:55.251Z · LW(p) · GW(p)
I am not a fan of cryonics because I know that freezing, regardless of method, is a very good way to destroy tissue
Cryonics uses vitrification, which protects from the tissue-destroying crystal formation.
http://www.alcor.org/Library/html/vitrification.html
↑ comment by wedrifid · 2011-12-31T00:11:39.722Z · LW(p) · GW(p)
I am not a fan of cryonics because I know that freezing, regardless of method, is a very good way to destroy tissue; and I would like to see some more evidence towards what actually constitutes memory and other brain-related stuff, so as to make sure the freezing method doesn't wreck it, before I buy into it.
Oh oh. That argument was just removed. Now what are you going to do? You can make up a new one to support your existing conclusion or you could make up a new conclusion based on what you know.
Welcome to lesswrong.
Replies from: Kaj_Sotala, None
↑ comment by Kaj_Sotala · 2011-12-31T00:42:32.249Z · LW(p) · GW(p)
This seems needlessly confrontational, especially as a comment to a newcomer.
Replies from: wedrifid
↑ comment by wedrifid · 2011-12-31T01:45:40.491Z · LW(p) · GW(p)
This seems needlessly confrontational, especially as a comment to a newcomer.
That would seem to be in the eye of the beholder. I saw it as an opportunity to demonstrate mastery of the most basic principle of lesswrong and instantly raise his standing in the tribe and reputation for sanity.
I reject your accusation!
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-12-31T06:34:21.813Z · LW(p) · GW(p)
My apologies for the misinterpretation, then.
↑ comment by [deleted] · 2011-12-31T01:46:32.633Z · LW(p) · GW(p)
All right, I'll play ball. If my devoting my career to AI research fails to make FAI, sure, I'll buy into cryonics.
Right now I am 19 years old, poor as dirt, living with my parents, healthy lifestyle, careful to the point of paranoia; show me a cryonics establishment in Denmark and I will reserve a space when I have the funding. (The "show me" is rhetorical; I intend to find out myself.)
I am generally optimistic with regards to FAI, and I am no strong Bayesian at all. You have a point, yeah, plain as day.
And thank you, Kaj_Sotala, for speaking up about this frankly not at all "fun" or "inviting" and, yes, quite "needlessly confrontational," yet still true counterargument.
wedrifid; there is a time to be direct and insulting in a playful kind of way. You need to learn when that time is.
ETA: After a brief lookup of the term "vitrification" I find the term "toxicity" featured, along with "optimism about the future." I am not sure what to think here; compelling arguments can be made for each side.
Replies from: None, wedrifid
↑ comment by [deleted] · 2012-01-02T14:39:19.025Z · LW(p) · GW(p)
ETA: After a brief lookup of the term "Vitrification" i find the term "Toxicity" to feature, along with "Optimistic of the future." I am not sure what to think here, compelling arguments can be made for each.
The toxicity isn't a problem if it's going to be a brain upload, but it is a valid concern for any attempt at resurrecting the wetware.
↑ comment by wedrifid · 2011-12-31T01:49:13.654Z · LW(p) · GW(p)
frankly quite "needlessly confrontational," yet still true counterargument.
I didn't make a counterargument of any kind.
Replies from: None
↑ comment by [deleted] · 2011-12-31T01:59:12.858Z · LW(p) · GW(p)
Sorry; you pointed out a counterargument made by Vladimir_Nesov, in a confrontational manner.
Also, thank you for reminding me that I have to sharpen my posting abilities.
Vladimir_Nesov made a very true counterargument; you endorsed it to test my ability to change my standpoint. Nothing wrong with that, and lo and behold, I actually have. Congratulations, you and Vladimir_Nesov both get an upvote from the new guy.
Replies from: wedrifid
↑ comment by wedrifid · 2011-12-31T02:33:07.532Z · LW(p) · GW(p)
Congratulations, you and Vladimir_Nesov both get an upvote from the new guy.
Thank you! Responding positively and thinking clearly despite being primed to consider an interaction a confrontation is potentially an even more valuable trait to signal than the ability to update freely. A valuable newcomer indeed!
↑ comment by jsteinhardt · 2011-12-31T00:32:30.491Z · LW(p) · GW(p)
My attitude is so far positive, but I read it with a healthy dose of sceptic afterthought and note-taking to verify that it really does make sense. You see, my native language is not English, and I have read a study that one is more gullible when communicating in a non-native language.
Kudos for that. Sceptical afterthought is always good if you have the time to devote to it.
↑ comment by windmil · 2012-01-02T01:41:00.817Z · LW(p) · GW(p)
.ui lo du'u do cu se cinri la lojban. cu pluka mi
It's nice to see someone else interested in lojban.
Replies from: None
↑ comment by [deleted] · 2012-01-02T01:57:40.251Z · LW(p) · GW(p)
.uiru'e .i'i
Replies from: Michelle_Z
↑ comment by Michelle_Z · 2012-01-03T04:56:19.404Z · LW(p) · GW(p)
I've recently started trying to learn a bit about lojban.
.ui
comment by Adriano_Mannino · 2012-07-04T01:23:15.160Z · LW(p) · GW(p)
Hi all, I'm a lurker of about two years and have been wanting to contribute here and there - so here I am. I specialize in ethics and have further interests in epistemology and the philosophy of mind.
Replies from: wedrifid
↑ comment by wedrifid · 2012-07-04T17:06:00.649Z · LW(p) · GW(p)
Hi all, I'm a lurker of about two years and have been wanting to contribute here and there - so here I am. I specialize in ethics and have further interests in epistemology and the philosophy of mind.
I look forward to hearing what you have to say about each of these fields!
comment by Rejoyce · 2012-03-18T21:18:07.206Z · LW(p) · GW(p)
Salutations and whatnot! My name is Joyce, and I'm a high school sophomore. Probably on the younger side of the age spectrum here, but I don't mind starting young. The idea of rationality isn't new to me; I've always been more inclined to the "truth", even when it sometimes hurts. In my mind knowing more about the truth = better person, so that's my motivation for being here. I have better grades than average, but for the past couple of years the thing I hated most about myself was the fact that I usually "coast" through a class, get my A, and then promptly forget everything I've done in it. My goal was "get an A", not "learn something new". I'd like to learn new things now, and actually retain them, instead of just coasting by. Knowledge is power. I want to be the best, like no one ever was.
Um. When I was younger, perhaps ten, while I was tinkering with Photoshop, my older cousin approached me and tried to introduce me to the idea of fallacies. He's... nine years older than me, so he was barely an adult. I forgot most of the conversation, but from what I DO remember, blaming a stomachache on the last thing you ate was falling prey to SOME fallacy, because it takes a day to digest food and thus you should think about what you ate 24 hours before, not two. (By the way, I think this is wrong, since your body reacts to bad food quicker than that; can anyone tell me what fallacy this is? If it exists?) He also said if I wanted to win a lot of arguments I should learn about more fallacies. I was kind of doubtful and sort of didn't really care about the whole thing, but it must have been significant if the hard drive that is my brain hasn't completely forgotten about it already.
What brought me here was Harry Potter and the Methods of Rationality, and what brought me to HPMoR was the writer Aspen in the Sunlight who wrote the Harry Potter fanfiction series A Year Like None Other (I didn't capitalize that correctly), and what brought me there was dragcave.net and from there I'm not quite sure. It was nearly three years ago, after all.
Ah, what else should I say? I'm an INTP. Psychology is the loveliest subject ever; oh, it's just the most fun subject. I'm sort of taking AP Psych next year. And by that I mean buying the textbook off eBay or something and self-studying it along with my friend from another school who actually has the course, because my school doesn't offer the class. Sigh. Milgram's experiment was interesting and a little shocking; it's almost become my conversation starter ("Did you know that two-thirds of people would administer 450 volts of electricity to a person because a guy in a white lab coat told them to?"). I'm not sure what I want to be when I grow up, though I'm very well versed in computer technology. If not that, then law.
At some point in my life I want to teach for a few years, just to experiment and find out what the best teaching method actually is. Traditional methods are so boring, and since a significant number of my peers don't respond well to the current learning environment, there's obviously some updating to do. Electronics are going to be so cheap in the future that I could probably make my potential students shell out some 30 dollars for a decent tablet, install some heavily modded operating system (Android/Apple if advanced enough by that time, Linux if not), lock it so my students can't tinker, and integrate that heavily into the curriculum. Sync my own tablet with all of theirs, kill some poor school's wi-fi. Maybe actually make a points system. Now that I've typed that out it's losing its appeal, but gosh, it'd at least be interesting.
Replies from: Vladimir_Nesov, arundelo, TimS
↑ comment by Vladimir_Nesov · 2012-03-18T23:51:27.436Z · LW(p) · GW(p)
He also said if I wanted to win a lot of arguments I should learn about more fallacies.
This is actually one danger of learning about fallacies: you become better at defeating arguments, and this holds irrespective of their truth, so if you have a tendency to privilege arguments for the positions you already hold, it becomes harder for you to change your mind. See the post Knowing About Biases Can Hurt People.
Replies from: Rejoyce
↑ comment by Rejoyce · 2012-03-19T00:09:41.451Z · LW(p) · GW(p)
Thanks for the post, I'll definitely look at it after I'm done replying to this one.
When you say "privilege arguments for the positions you already hold", do you mean "only allow arguments that allow you a better chance of winning"?
This sounds like the wrong thing to say, but... I'll say it anyway; I want to see your reaction: what if you don't develop a tendency to fight the easier battle? What you say makes sense: losing less = learning less, at least until the point where you start to win/lose at a 50/50 rate. What if you pick arguments for the sake of arguing, or you promise yourself that you will only argue for the truth? Or, as is the case for me, what if you have the tendency to fight for both sides (heck, this post)? I actually agree with you on all points, but for some reason I want to know how you would answer the opposing side, if I were on it.
Replies from: None
↑ comment by [deleted] · 2012-03-19T20:47:17.770Z · LW(p) · GW(p)
Ideally, you should aim to defeat the strongest version of your opponent's argument that you can think of: it's a much better test of whether your position is actually correct, and it helps prevent rationalization. On LessWrong we usually call this the Least Convenient Possible World, or LCPW for short. (I've also seen it called "steel man": instead of constructing a weaker "straw man" version of your opponent's argument, you fix it and make a stronger one.) You may be interested in the wiki entry on LCPW and the post that coined the term.
I'm not sure about the merits of arguing for positions you don't actually believe. It can certainly be helpful in a context where your discussion partners are also tossing around ideas and collaborating by playing Devil's Advocate, since it can help you find the weaknesses in your position, but repeatedly practicing rationalization might not be healthy in the long run.
↑ comment by TimS · 2012-03-18T23:19:07.217Z · LW(p) · GW(p)
Welcome to LessWrong.
Thanks for mentioning that other fanfic, I hadn't seen it and it looks great.
I'm glad you find the moral theory stuff interesting - I do as well. I want to let you know that law is a terrible career for that sort of thing.
Replies from: Rejoyce
↑ comment by Rejoyce · 2012-03-18T23:47:16.611Z · LW(p) · GW(p)
I've heard. Failing a case makes you feel worthless, and sometimes winning one makes you feel soulless. Maybe I should go into the milder forms of law. Patents, perhaps?
Replies from: TimS
↑ comment by TimS · 2012-03-19T01:52:19.126Z · LW(p) · GW(p)
Some of that, but not much - the stakes aren't usually that high. My intended point was that the practice of law is about repeating what you are good at, over and over and over again. Like if you are a divorce lawyer. You can try to argue every case about the theory and purpose of alimony and child support - or you can just reference the schedule of presumptive amounts from the statute or the regulations.
The first one is interesting, like thinking about the implications of Milgram's experiment. The second one is the way it actually works.
Lots of people think they want to be lawyers because they want to translate their idealism into real-world consequences. I'm not saying that's impossible (heck, I'm trying to do it now), but it isn't the natural progression of a career in law. In short, I'm given to understand that "The Firm" is a moderately accurate picture of what the practice of law is often like.
comment by SpaceFrank · 2012-01-25T19:32:04.701Z · LW(p) · GW(p)
Hello, Less Wrong.
Like some others, I eventually found this site after being directed by fellow nerds to HPMOR. I've been working haphazardly through the Sequences (getting neck-deep in cognitive science and philosophy before even getting past the preliminaries for quantum physics, and loving every bit of it).
I can't point to a clear "aha!" moment when I decided to pursue the LW definition of rationality. I always remember being highly intelligent and interested in Science, but it's hard for me to model how my brain actually processed information that long ago. Before high school (at the earliest), I was probably just as irrational as everyone else, only with bigger guns.
Sometime during college (B.S. in mechanical engineering), I can recall beginning an active effort to consider as many sides of an issue as possible. This was motivated less from a quest for scientific truth and more from a tendency to get into political discussions. Having been raised by parents who were fairly traditional American conservatives, I quickly found myself becoming some kind of libertarian. This seems to be a common occurrence, both in the welcome comments I've read here and elsewhere. I can't say at this point how much of this change was the result of rational deliberation and how much was from mere social pressure, but on later review it still seems like a good idea regardless.
The first time I can recall actually thinking "I need to improve the way I think" was fairly recent, in graduate school. The primary motivation was still political. I wanted to make sure my beliefs were reasonable, and the first step seemed to be making sure they were self-consistent. Unfortunately, I still didn't know the first thing about cognitive biases (aside from running head-on into confirmation bias on a regular basis without knowing the name). Concluding that the problem was intractable, I withdrew from all friendly political discussion except one in which my position seemed particularly well-supported and therefore easy to argue rationally. I never cared much for arguing in the first place, so if I'm going to do it I'd prefer to at least have the data on my side.
I've since lost even more interest in trying to figure out politics, and decided while reading this site that it would be more immediately important anyway to try figuring out myself. I've yet to identify that noble cause to fight for (although I have been interested in manned space exploration enough to get two engineering degrees), but I think a more rational me will be more effective at whatever that cause turns out to be.
Still reading and updating...
Replies from: DSimon
↑ comment by DSimon · 2012-01-25T19:59:47.014Z · LW(p) · GW(p)
Welcome to LW!
I like the "just with bigger guns" metaphor a lot; the trouble with intelligence is its ability to produce smart-seeming arguments for nearly any silly idea.
Replies from: SpaceFrank
↑ comment by SpaceFrank · 2012-01-25T21:41:34.477Z · LW(p) · GW(p)
Exactly. I also suspect that logical overconfidence, i.e. knowing a little bit about bias and thinking it no longer affects you, is magnified with higher intelligence.
I can't help but remember that saying about great power and great responsibility.
Replies from: thomblake
↑ comment by thomblake · 2012-01-25T22:08:05.506Z · LW(p) · GW(p)
Yes - see Knowing about biases can hurt people.
Replies from: SpaceFrank
↑ comment by SpaceFrank · 2012-01-30T18:45:53.845Z · LW(p) · GW(p)
Thanks! I hadn't read that article yet, but I became familiar with the concept when reading one of Eliezer Yudkowsky's papers on existential risk assessment. (Either this one or this one) I did have a kind of "Oh Shit" moment when the context of the article hit me.
comment by drnickbone · 2012-01-20T19:21:46.711Z · LW(p) · GW(p)
Hi, I'm Nick Bone ... Just joined the site.
I'm based in the UK and interested in a wide variety of topics in science and associated philosophy: in particular, the basics of rationality (deductive and inductive logic, Bayes' theorem, decision theory) and the foundations of mathematics (logic and set theory), plus some of the old staples (classical arguments for/against the existence of God: first cause, design, evil and so on).
My background is in mathematics and computer science (PhD in maths) and I'm currently working in an area of applied game theory. Generally I found the site by Googling, and the quality of discussion seems rather higher than on other discussion boards. Hope I can contribute.
By the way, I started off by putting together some thoughts on the "Doomsday Argument" and Strong Self-Selection Assumption which I hadn't seen discussed before. Since I'm brand new, and have no karma points, I'm not sure where to post them. Any suggestions?
Replies from: None, thomblake
↑ comment by [deleted] · 2012-01-20T21:52:13.156Z · LW(p) · GW(p)
By the way, I started off by putting together some thoughts on the "Doomsday Argument" and Strong Self-Selection Assumption which I hadn't seen discussed before. Since I'm brand new, and have no karma points, I'm not sure where to post them. Any suggestions?
Sounds like perfect material for the discussion section. :-)
↑ comment by thomblake · 2012-01-20T19:43:20.682Z · LW(p) · GW(p)
You published a duplicate of this comment. You should click the button on the bottom-right of your other comment to retract and then delete it.
Replies from: drnickbone
↑ comment by drnickbone · 2012-01-20T21:24:17.312Z · LW(p) · GW(p)
Sorry, I can only see one instance of the comment. (Did someone already delete the other?)
Replies from: katydee
↑ comment by katydee · 2012-01-20T21:26:52.827Z · LW(p) · GW(p)
Here is a link to the duplicate comment: http://lesswrong.com/lw/90l/welcome_to_less_wrong_2012/5ptg
Replies from: drnickbone
↑ comment by drnickbone · 2012-01-20T21:28:20.727Z · LW(p) · GW(p)
Thanks... Got it and retracted it (I hope...)
Replies from: katydee
↑ comment by katydee · 2012-01-20T21:33:57.518Z · LW(p) · GW(p)
Yep, you got it-- I now see the comment as retracted. For future reference, what I did to find that comment was click on your name to view your comment history-- this can often be more efficient than sorting through the comment section of individual threads, especially long ones like this.
comment by Joshua Hobbes (Locke) · 2011-12-27T16:12:38.586Z · LW(p) · GW(p)
So, am I a second-class citizen because I found this place via MoR?
Anyways, I've been Homeschooled for the majority of my education thus far, mostly due to my Creationist parents' concerns about government-run schools. Fortunately they didn't think to censor the internet, and here I am. My PSATs showed me in the 98th percentile, so I expect I'll be able to get into a decent university. Plan A has always been Engineering, but after going through a few of the more inspirational sequences I think I may readjust my plans and try to do some good for this planet. How does one get into the Singularity business?
Replies from: thomblake, NancyLebovitz, Vaniver, TimS
↑ comment by thomblake · 2011-12-27T16:33:08.948Z · LW(p) · GW(p)
So, am I a second-class citizen because I found this place via MoR?
I'm pretty sure that accounts for most of our new readership over the last year or so.
ETA: To actually answer the question, no.
How does one get into the Singularity business?
I'm pretty sure the preferred method here currently is #1 below, but here are some options:
- Make lots of money doing something else and then give it to SIAI.
- The lukeprog method: Be insanely awesome at scholarship and get tens of thousands of LW karma in a few months and be generally brilliant and become a visiting fellow and wow everyone at SIAI.
- Go start your own Singularity. With blackjack. And hookers.
Also, I'm generally of the opinion that having been suddenly inspired by something you read recently should be evidence against that thing being what you should do with your life (assuming your prior is based on your feelings about it). You should check out some of the material by Anna Salamon on how to take that kind of decision seriously (I don't have a useful link handy).
↑ comment by NancyLebovitz · 2011-12-27T16:31:04.541Z · LW(p) · GW(p)
How does one get into the Singularity business?
With great difficulty. And it's not clear whether there will be any repeat trade.
↑ comment by TimS · 2011-12-27T16:28:31.536Z · LW(p) · GW(p)
Welcome to LessWrong.
So, am I a second-class citizen because I found this place via MoR?
I hope not. That's how I ended up here.
How does one get into the Singularity business?
Well, that depends a bit on what you think the Singularity is/will be.
comment by kerspoon · 2011-12-27T10:02:50.911Z · LW(p) · GW(p)
Hello,
I'm a 26 year old guy from the UK. I've finished writing my Ph.D. thesis in "Quantification of risk in large scale wind power integration" and I'm now working as a phone-app framework developer. I spent the last year travelling around the world, during which I spent a lot of my time writing practical philosophy. After coming back I found this site and read the core sequences. I loved them; they echoed a lot of my previous thoughts and then took them much further. I felt like they would be easier to understand as one article, so I have been re-writing bits of them for my own benefit. I am in two minds about whether to post them here, but I would appreciate the feedback to see if I have understood what was written.
Replies from: fburnaby, orthonormal↑ comment by orthonormal · 2011-12-28T01:38:08.400Z · LW(p) · GW(p)
Welcome!
I felt like they would be easier to understand if they were one article so I have been re-writing bits of them for my own benefit.
Lukeprog did a similar thing a while ago, which doubled for the rest of us as a good overview. I'd be interested in reading yours, too!
comment by HalMorris · 2012-12-14T05:08:07.632Z · LW(p) · GW(p)
Thanks to Emile for suggesting I come here and write something. I hope to get to the New York meetup on Sunday; I'm not ready for "rituals" and futuristic music just yet.
I just ran across LW by trying Google terms along the lines of memetics "belief systems", etc., which led me to some books from the late '90s like "Virus of the Mind", and in the last 2-3 years some just "OK" books on religions as virus-like meme systems. This kind of search -- looking for what people may have said about some odd combination of thoughts that I suspect might be fruitful -- has brought me interesting results in the past. E.g. by googling "ontological comedian", I discovered Ricky Gervais, who has brightened my life (his movie "The Invention of Lying" ought to be of interest to LW-ers). I'm interested in practical social epistemology -- trying to come up with creative responses to what looks like major chunks of the population (those pesky folks who elect presidents) being less and less moored in reality and going off into diverse fantasy lands -- or, to put it another way, a massive breakdown in common sense about what sources are reliable.
I asked someone how she makes such decisions and she answered that she trusts people who are saying things consistent with what she already knows. Unfortunately, much of what she already knows isn't true.
I wonder why people have such a tin ear for bullshit. Someone kept sending me the latest "proof" that global warming is a big hoax, and as far as I'm concerned their own arguments are the best case against them. I.e., if this is the best they can do, they must not have a case. This sort of reasoning isn't part of classic epistemology, but I can hardly think of anything more important than getting a quick read on a source as to its trustworthiness -- esp. whether those contributing to it are truth seekers or propagandists. I think Alvin Goldman's Social Epistemology (which is far from the "social construction of reality" folks) can help with some of my concerns. I'd like to see an "economics of ideas" concerned with what makes ideas fly, whether they're true or not -- pretty close to memetics and, from a different perspective, "media ecology" -- analogous to the set of topological T3 spaces, and then find embedded within that [Social] Epistemology, analogous to the more constrained T4 spaces.
I'm not so much interested in Philosophy 401 syllabi, but more interested in finding ways to teach truth seeking and bullshit avoidance in elementary schools. Also how to push back against the propagandists and liars with some viral techniques of our own - browsers that facilitate fact checking, maybe make it fun in some way; walling off purely factual data and building consensus that on one side of the wall the data really is factual; and building tools for synthesizing answers to particular questions based on that data.
I hope to learn something from the "black arts" threads on LW.
Replies from: Qiaochu_Yuan, Nominull, wedrifid↑ comment by Qiaochu_Yuan · 2012-12-14T09:42:29.861Z · LW(p) · GW(p)
I wonder why people have such a tin ear for bullshit.
The obvious evolutionary argument that comes to mind is that not believing in bullshit, particularly the bullshit believed by powerful people in your tribe, could get you killed in the ancestral environment. Domains of human knowledge in which bullshit is not tolerated are those where that knowledge is constantly being tested against reality - computer programming is a good example, since you can't bullshit a compiler - and in other domains terrible things can happen.
Global warming in particular seems to me to be a case where most people hold beliefs one way or the other primarily to signal affiliation with either the pro- or anti-global warming tribes. That belief certainly doesn't get tested against reality in any meaningful way in many people's lives.
Replies from: HalMorris↑ comment by HalMorris · 2012-12-14T20:02:07.937Z · LW(p) · GW(p)
The obvious evolutionary argument that comes to mind is that not believing in bullshit, particularly the bullshit believed by powerful people in your tribe, could get you killed in the ancestral environment. Domains of human knowledge in which bullshit is not tolerated are those where that knowledge is constantly being tested against reality - computer programming is a good example, since you can't bullshit a compiler - and in other domains terrible things can happen.
Not so obvious. From all I've read, hunter-gatherer societies were and are largely governed by consensus although no doubt there are sometimes extremely dominant personalities. What you're describing is more like early civilization (e.g. Aztec), and what we used to see in Tarzan movies.
I have quite a different theory about the evolutionary advantage of tending towards orthodoxy, but that seems like a different issue anyway.
Global warming in particular seems to me to be a case where most people hold beliefs one way or the other primarily to signal affiliation with either the pro- or anti-global warming tribes. That belief certainly doesn't get tested against reality in any meaningful way in many people's lives.
My construction: The "AGW is a hoax" meme is exhibit A in movement conservatism's massive (most of you probably have no idea how massive and thorough) and mostly spurious argument that the MSM (Mostly sane Media), Academia, and every left-of-Milton-Friedman institution are joined in one big lie factory aimed at bringing about one-world socialist government. That, I believe, is why GOP congressmen are so nearly unanimous, or at best tiptoe around it even if they know the thing is a crock. Toe the line or be called a RINO and then "primaried".
↑ comment by Nominull · 2012-12-14T05:49:05.812Z · LW(p) · GW(p)
Please don't learn anything from the black arts threads. That's why they're called "black arts", because you're not supposed to learn them.
Replies from: almkglor, Nornagest, JoshuaZ↑ comment by almkglor · 2012-12-14T09:31:50.742Z · LW(p) · GW(p)
Although it might be good to be aware that you shouldn't remove a weapon from your mental arsenal just because it's labeled "dark arts". Sure, you should be one heck of a lot more reluctant to use them, but if you need to shut up and do the impossible really really badly, do so - just be aware that the consequences tend to be worse if you use them.
After all, the label "dark art" is itself an application of a Dark Art to persuade, deceive, or otherwise manipulate you against using those techniques. But of course this was not done lightly.
↑ comment by Nornagest · 2012-12-14T10:50:50.419Z · LW(p) · GW(p)
That's why they're called "black arts", because you're not supposed to learn them.
Is that why? I wonder, sometimes.
Given our merry band's contrarian bent, it occurs to me that calling something a "dark art" would be a pretty good way of encouraging its study while simultaneously discouraging its unreflective use. You'd then need to come up with some semi-convincing reasons why it is in fact too Dark for school, though, or you'd look silly.
On the other hand it doesn't seem to be an Eliezer coinage, which would have made this line of thinking a bit more likely. "Dark Side epistemology" is, but has a narrow enough meaning that I'm not inclined to suspect shenanigans.
↑ comment by JoshuaZ · 2012-12-14T06:00:31.499Z · LW(p) · GW(p)
Well, one could certainly learn from the dark arts threads what not to do and what to be aware of to watch out for.
Replies from: HalMorris↑ comment by HalMorris · 2012-12-14T16:19:06.158Z · LW(p) · GW(p)
Well, yeah, my point exactly. To reiterate from elsewhere:
[I'm interested in] spreading dark-art antibody memes, but you can't do that without taking a sample of the dark arts most prevalent at the moment, much as they must round up viruses every year to develop the yearly flu shot. So I wouldn't be looking for "the best" dark arts but rather the ones one is likely to encounter. E.g. a good source would be Newt Gingrich's "Language: A Key Mechanism of Control" memo (http://www.informationclearinghouse.info/article4443.htm) EXCERPT:
"In the video 'We are a Majority,' Language is listed as a key mechanism of control used by a majority party, along with Agenda, Rules, Attitude and Learning. As the tapes have been used in training sessions across the country and mailed to candidates we have heard a plaintive plea: 'I wish I could speak like Newt.' That takes years of practice ..."
This introduces the famous word list: a list of smiley-face words to use when describing your own positions, and nasty-face words to use when putting words in the mouths of your opponents (or should I say "enemies"?). Or there is the Paul Weyrich farewell letter, which did much to propagate the meme "political correctness is cultural Marxism", or the Weyrich-inspired "The Integration of Theory and Practice: A Program for the New Traditionalist Movement" (http://therealtruthproject.blogspot.com/2011/02/integration-of-theory-and-practice.html), a document Lenin might have been proud of.
I'm all about blunting the effectiveness of certain tactics that reduce the possibility of our thinking clearly (and by "our", I mean not that of LW, or the Second Foundation, but of the whole mass of people whose votes determine who we get to have as President, etc.). ASIDE: One place where Thomas Jefferson was one of the least small-gov't-ish founding fathers was education, and he was also all about disempowering religion memes.
NOTE: I don't mean to get into politics per se -- just practices that tend to turn it into a struggle between hidden conspiracies -- but I think it's hopelessly abstract to try to discuss that without the aid of current examples.
↑ comment by wedrifid · 2012-12-14T11:57:39.361Z · LW(p) · GW(p)
I hope to learn something from the "black arts" threads on LW.
You may be looking in the wrong place. I don't recall encountering any particularly impressive "Dark Arts" insights on this blog. You may be interested in, say, Robert Greene's The 48 Laws Of Power.
Replies from: HalMorris↑ comment by HalMorris · 2012-12-14T15:02:04.037Z · LW(p) · GW(p)
That sounds a bit like a "how to" book of black arts - if so, not what I had in mind, except for the purpose of developing and spreading dark-art antibody memes, but you can't do that without taking a sample of the dark arts most prevalent at the moment, much as they must round up viruses every year to develop the yearly flu shot. So I wouldn't be looking for "the best" dark arts but rather the ones one is likely to encounter. E.g. a good source would be Newt Gingrich's "Language: A Key Mechanism of Control" memo (http://www.informationclearinghouse.info/article4443.htm) EXCERPT:
"In the video 'We are a Majority,' Language is listed as a key mechanism of control used by a majority party, along with Agenda, Rules, Attitude and Learning. As the tapes have been used in training sessions across the country and mailed to candidates we have heard a plaintive plea: 'I wish I could speak like Newt.' That takes years of practice ..."
This introduces the famous word list: a list of smiley-face words to use when describing your own positions, and nasty-face words to use when putting words in the mouths of your opponents (or should I say "enemies"?). Or there is the Paul Weyrich farewell letter, which did much to propagate the meme "political correctness is cultural Marxism", or the Weyrich-inspired "The Integration of Theory and Practice: A Program for the New Traditionalist Movement" (http://therealtruthproject.blogspot.com/2011/02/integration-of-theory-and-practice.html), a document Lenin might have been proud of.
I'm all about blunting the effectiveness of certain tactics that reduce the possibility of our thinking clearly (and by "our", I mean not that of LW, or the Second Foundation, but of the whole mass of people whose votes determine who we get to have as President, etc.). ASIDE: One place where Thomas Jefferson was one of the least small-gov't-ish founding fathers was education, and he was also all about disempowering religion memes.
comment by Lykos · 2012-05-30T20:00:47.983Z · LW(p) · GW(p)
Hello, everyone. I'm Lykos, and it's a pleasure to finally be posting here. I'm a high school junior and I pretty much discovered the concept of rationality through HP:MoR. I'm not sure where I discovered THAT. I'm an aspiring author, and am always eager to learn more, and rationality, I've found, has helped me with my ideas, both for stories and in general. I've currently read the Map and Territory sequence, and am going through Mysterious Answers to Mysterious Questions. I doubt I'll be posting much- I'll probably be spending most of my time basking in the intelligence of the rest of you.
Either way, it is a pleasure to join the community. Thank you.
comment by coffeespoons · 2012-04-05T12:26:11.977Z · LW(p) · GW(p)
Hiya,
I've been occasionally reading for a while, and have decided to get a login. I suppose the reason I'm here is that it's become important in the last 2 years or so that my beliefs are as accurate as possible. I've slowly had to let go of some beliefs because the evidence didn't seem to support them, and while that's been painful it has been worthwhile.
I'm also a friend of ciphergoth's - we've discussed less wrong a lot! I don't feel like I know a great deal yet - I still need to read more of the sequences, so I'll stick to asking questions until I feel I know more :-).
I'm 28, female, and I live in Cambridge, UK. My academic background is in the philosophy/politics/economics area, and I work in accounts.
coffeespoons
Replies from: TimS, ciphergoth↑ comment by TimS · 2012-04-05T13:25:26.194Z · LW(p) · GW(p)
Welcome to LessWrong. If you like believing true things and don't think death is a necessary counterpart to life, you'll fit in great.
If you have questions, might I suggest asking in the current open thread?
↑ comment by Paul Crowley (ciphergoth) · 2012-04-05T12:27:04.862Z · LW(p) · GW(p)
Hurrah! welcome :)
comment by Modig · 2012-01-30T01:21:56.972Z · LW(p) · GW(p)
I'm very excited to have found this community. In a way, it's like meeting a future, more evolved version of myself. So many things that I've read about here I've considered before, but often in a more shallow and immature way. A big thanks to all of you for that!
To the topic of me, I'm 24, male, and Swedish. After studying some of PJ Eby's work, I identify strongly as a naturally struggling person. I've been trying to figure out why for all my life, I think I read Wayne Dyer at about the same age as Eliezer read Feynman. Since then I've read a lot more, and at this point it seems like I have very credible explanations for why things turned out as they did.
Still, even though I might think I ought to have the tools now to stake out a better future path for myself, I'm plagued by learned helplessness and surrounded by ugh-fields. But as I see it there is only one best way forward - to learn more and then attempt to do things better.
I'm a great admirer of the stoic philosopher Lucius Seneca. Here's a short segment from one of his letters that resonates with me:
It is clear to you, I know, Lucilius, that no one can lead a happy life, or even one that is bearable, without the pursuit of wisdom, and that the perfection of wisdom is what makes the happy life, although even the beginnings of wisdom makes life bearable.
And a few paragraphs down...:
Philosophy is not an occupation of a popular nature, nor is it pursued for the sake of self-advertisement. Its concern is not with words, but with facts. It is not carried on with the object of passing the day in an entertaining sort of way and taking the boredom out of leisure. It moulds and builds the personality, orders one's life, regulates one's conduct, shows one what one should do and what one should leave undone, sits at the helm and keeps one on the correct course as one is tossed about in perilous seas. Without it no one can lead a life free of fear or worry. Every hour of the day countless situations arise that call for advice, and for that advice we have to look to philosophy.
I believe that the topics being explored on this site are a natural extension of what Seneca and his contemporaries termed philosophy. To live more purposefully, to be happy and to contribute more to others, studying these topics isn't optional, it's essential. And that's why I'm so glad this community exists and that I've found it.
Replies from: Solvent, lessdazed
comment by Lleu · 2012-01-07T17:25:44.155Z · LW(p) · GW(p)
19 male, currently in Florida.
Used to be a hardcore Christian. Then I started looking for alternate explanations and wound up believing in magic because I wanted it to be real. Then I read HP:MoR and it changed my life. My head is on a lot straighter now.
At first I thought this was just something cool. Then I was talking to someone about investing a fairly large amount of money. As we were talking, I was conscious of myself changing my plans to agree with him simply because he was nice. He changed my mind even though I recognized that he was doing it by being nice rather than by making a good argument. I had to go home before I could think clearly again.
It scared me that I could be so easily swayed by the Dark Arts, as I've heard them referred to. This might be something worth taking seriously after all.
So now I'm about to use what I learned to buy a car. A year ago, I would've just gone down with an informed friend and picked up something functional. Now I'm going down with a friend and a journal, identifying several possible vehicles and taking notes, then spending a week doing research on prices, making sure I'm not being swayed by the salesman being nice, etc., before I actually spend any money.
I look forward to becoming less wrong.
Replies from: orthonormal↑ comment by orthonormal · 2012-01-07T17:33:46.217Z · LW(p) · GW(p)
Welcome! That's a great example of rationality-in-practice.
comment by bramflakes · 2011-12-26T20:25:40.491Z · LW(p) · GW(p)
Hello, I'm 16 years old and from the UK. I found this blog via MoR and I've been lurking for a few months now (this is my first post, I think), and I'm slowly but surely working my way through the sequences. I think I've gotten to the point where I can identify a lot of the biases and irrational thoughts as they form in my brain, but unfortunately I'm not well-versed enough in rationality to know how to tackle them properly yet.
Replies from: atucker, NancyLebovitz↑ comment by NancyLebovitz · 2011-12-27T00:15:39.428Z · LW(p) · GW(p)
It would probably be worth your while to post about particular biases you'd like to tackle.
Replies from: orthonormal↑ comment by orthonormal · 2011-12-27T00:47:13.086Z · LW(p) · GW(p)
Or, comment on relevant posts with any questions or examples you want to share.
comment by whiteswan21 · 2012-03-15T02:48:19.064Z · LW(p) · GW(p)
Greetings, everyone. My name is Elizabeth, and I am a young adult female beginning to learn how to think for herself. I stumbled across this website right after reading Alicorn's fanfiction Luminosity in the summer of 2010. Due to some personal issues, life in general, and a dead hard drive, I stopped visiting Less Wrong up until a couple of weeks ago.
I found Less Wrong attractive because it is a free resource on learning the art of rationality. Borderline Personality Disorder runs in my family, and so my hypothesis is that I personally am drawn to things like LW partly in order to "self-medicate" after years of chaos, unpredictability, and irrationality. Chances are that I will be very quiet on this website for several months at least: for one thing, that is my usual modus operandi when learning about and researching a topic; for another, it would seem that I need to thoroughly acquaint myself with the sequences and other such work in order to fully understand and be able to contribute to more recent posts/discussions.
Replies from: Alicorn, Tripitaka↑ comment by Alicorn · 2012-03-15T04:11:06.507Z · LW(p) · GW(p)
I stumbled across this website right after reading Alicorn's fanfiction Luminosity in the summer of 2010
Squee! How'd you find it?
Replies from: Bugmaster, whiteswan21↑ comment by whiteswan21 · 2012-03-16T01:25:39.118Z · LW(p) · GW(p)
This is how I remember it happening (though 20 minutes of hunting around hasn't provided much evidence for this; then again, I allowed myself much more internet time those days): Cleolinda's snarky Twilight posts on livejournal --> audrey_ii's Jacob/Bella fanfic The Movement of the Earth on livejournal --> Luminosity.
↑ comment by Tripitaka · 2012-03-15T02:54:36.707Z · LW(p) · GW(p)
As a fellow semi-lurker and also mentally ill person, a hearty welcome to you! Did you choose your username with regard to black-swan bets?
Replies from: whiteswan21↑ comment by whiteswan21 · 2012-03-16T01:41:43.872Z · LW(p) · GW(p)
Thanks! Good guess, but no connection. My username actually stems from Darren Aronofsky's film; I had first seen it right after going through a particularly negative emotional "flare-up", so to speak, and I immediately identified with many elements of the film. Nina's mother acts very much like my own, plus I felt I could relate to Nina's naivete, perfectionism, egocentrism, and high level of self-criticism (the last two traits are MUCH more pronounced when I'm in the middle of a flare-up). After seeing the film together, my boyfriend and I developed our own lingo: when referring to a flare-up (past, impending, its characteristics, etc.), we call it my "black swan"; when referring to normal me, we call that my "white swan". So you could say that my username is a subtle reminder of which "swan" to always try to be.
...though now that I think about it a little more, one could argue that my flare-ups are black swan events for the people around me (if I understand the idea correctly).
comment by Ebelean · 2012-01-02T19:00:37.730Z · LW(p) · GW(p)
Hi y'all. I'm a senior in high school in Silicon Valley who's been lurking for a couple of months. I've been working my way through the Sequences since then. I don't know how much I have to contribute to the discussion, since I'm a bit of a newcomer to rationalism, but I enjoy reading everyone else's discussions.
I was introduced to this site through my philosophy class -- a research project on transhumanism led me to Eliezer Yudkowsky's site, which led me here. I came here for the Sequences, stayed here for the intelligent discussion (just like almost everyone else on this page). I'm really interested in computer science and economics and how they intersect with rationality.
Replies from: orthonormal↑ comment by orthonormal · 2012-01-06T16:32:55.979Z · LW(p) · GW(p)
Welcome!
I was introduced to this site through my philosophy class- a research project on transhumanism led me to Eliezer Yudkowsky's site
Cool! Was this an assigned topic or a self-chosen one?
And since you're a HS senior, you might find it worthwhile to read the threads on where (or even whether) to go for college next year, or start your own thread if you want personalized advice.
comment by FloraFuture · 2013-03-30T01:45:39.225Z · LW(p) · GW(p)
Hi everyone,
A few of you have met me on Omegle. I finally signed up and made an account here like you guys suggested.
About me: I'm 26 years old, and my hobbies include creative writing and PC games. My favorite TV show is Rupaul's Drag Race.
I think I share almost all of the main positions that people tend to have in this community. But I actually find disagreements more interesting, so that's mainly what I'm here for. One of my passions in life is debating. I did debate team and that sort of thing when I was younger, but now I'm more interested in how to seriously persuade people, not just debating for show. I still have a lot of improving to do, though. If anyone wants to exchange notes or get some tips, then let me know.
Love,
Flora
Replies from: MugaSofer, orthonormal, shminux, FloraFuture, Kawoomba↑ comment by MugaSofer · 2013-03-30T22:14:03.260Z · LW(p) · GW(p)
One of my passions in life is debating. I did debate team and that sort of thing when I was younger, but now I'm more interested in how to seriously persuade people, not just debating for show.
I'm going to be the first person to point out that your objective should be to come to the correct conclusion, not to persuade people: if you can out-argue anyone who disagrees with you, you'll never change your mind, and "not every change is an improvement, but every improvement is a change".
With that noted, persuasion is a useful skill, especially if you're more rational than the average bear. Cryonics, for example, is a good low-hanging fruit if you can just get people to sign up for it.
Replies from: ThrustVectoring↑ comment by ThrustVectoring · 2013-04-01T01:14:07.288Z · LW(p) · GW(p)
Cryonics, for example, is a good low-hanging fruit if you can just get people to sign up for it.
Modafinil is another good low-hanging fruit, as far as utilons/hedons per lifetime goes. Melatonin, too, and is less illegal.
↑ comment by orthonormal · 2013-03-31T16:54:08.189Z · LW(p) · GW(p)
Hi Flora!
Re: debating and persuading, the reflexes you developed for convincing third parties to a debate can actually be counterproductive to persuading the person you're speaking with. For example, reciprocity can really help: the person you're talking with is much more likely to really listen and consider your points if you've openly ceded them a point first.
Practicing this has the nice side effect of making you pay more attention to their arguments and interpret them more charitably, increasing the chance that you learn something from your conversational partner in the process.
Replies from: FloraFuture↑ comment by FloraFuture · 2013-04-01T01:28:52.972Z · LW(p) · GW(p)
I totally agree with this. Really well said.
↑ comment by Shmi (shminux) · 2013-04-01T02:48:16.000Z · LW(p) · GW(p)
Welcome!
Just wondering... How often (and about what) have you changed your mind about something big and important, as a result of a debate/discussion or just after some quiet contemplation?
Replies from: FloraFuture↑ comment by FloraFuture · 2013-04-03T01:34:03.899Z · LW(p) · GW(p)
Very, very often. Most of it is small steps, like minor adjustments, but a few debates/discussions have completely changed my thinking. I have definitely been wrong about a lot of things in the past. Some of my errors I have noticed through my own critical thinking. But I would say that most of my positions today have been shaped by how much I've let other people challenge them.
↑ comment by FloraFuture · 2013-04-01T01:24:41.070Z · LW(p) · GW(p)
My objective is definitely to come to the correct conclusion. I know sometimes my positions win because other people can't argue their positions well, but without those debates, I have no way to really challenge my own ideas. I think as people go I tend to be self-critical, but even I can have blind spots. So I use debates to see if and where I have gone wrong. I've definitely gone wrong many times before.
I don't believe in persuasion as "trickery" -- I see it as more getting past the emotional barriers for a real, productive discussion.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-01T02:38:32.427Z · LW(p) · GW(p)
without those debates, I have no way to really challenge my own ideas.
It's also sometimes useful to arrange things -- e.g., by making falsifiable predictions and comparing them to observed events -- so that observations of the world tend to correct our incorrect ideas.
Replies from: FloraFuture↑ comment by FloraFuture · 2013-04-03T01:30:15.153Z · LW(p) · GW(p)
You're right, but I don't think I'm alone in sometimes missing events that I should be taking into account, or not always being objective in the conclusions I make with them.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-03T04:24:17.724Z · LW(p) · GW(p)
I don't think I'm alone in sometimes missing events that I should be taking into account, or not always being objective in the conclusions I make with them.
Agreed.
Can you clarify the relationship between those things, on the one hand, and your belief that you can't challenge your own ideas without debates, on the other? I'm not sure I follow your reasoning here.
↑ comment by FloraFuture · 2013-04-04T21:05:05.290Z · LW(p) · GW(p)
Sorry, I didn't mean to say that I can't challenge myself at all. In practice I do try to challenge myself. I am saying that debates, where other people challenge me, help me fill in the gaps where I miss things, or am not being objective.
Sometimes my inner dialogue says, "The way I'm thinking about this makes sense to me, and it seems logical and sound. I have tried but I can't think of anything wrong with it." And then I'll explain my reasoning to someone who disagrees, and they might say, for example, "But you haven't considered this fact, or this possibility." And they're right, I haven't. That doesn't necessarily mean I'm wrong, or that they're right, but it does mean that I haven't been 100% effective at challenging myself to justify my own positions.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-05T02:36:12.039Z · LW(p) · GW(p)
I'll explain my reasoning to someone who disagrees, and they might say for example, "but you haven't considered this fact, or this possibility." And they're right, I haven't.
Ah, I see.
Yes, agreed, other people can frequently help clarify our thinking, e.g. by offering potentially relevant facts/possibilities we haven't considered. Absolutely.
That said, for my own part I would eliminate the modifier "who disagrees" from your sentence. It's equally true that people who agree with me can help clarify my thinking in that way, as can people who are neutral on the subject, or think the question is ill-formed in such a way that neither agreement nor disagreement is appropriate.
The whole "I assert something and you disagree and we argue" dynamic that comes along with framing the interaction as a "debate" seems like it gets in the way of my getting the thought-clarifying benefits in those cases, and is usually a sign that I'm concentrating more on status management than I am on clarifying my thinking, let alone on converging on true beliefs.
Replies from: FloraFuture↑ comment by FloraFuture · 2013-04-08T18:40:03.095Z · LW(p) · GW(p)
People who agree definitely can offer that, but people who disagree are going to be better at it and more motivated. They push you harder to strengthen your own reasoning and articulate it well. If you try to compare the two in practice I think you'll notice a huge difference. I think it can be uncomfortable sometimes to challenge and be challenged, but it doesn't need to be about status or putting other people down. In fact, it can be friendly and supportive. I really recommend it to people who enjoy critical thinking and want to challenge themselves in unexpected ways.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-08T18:44:00.210Z · LW(p) · GW(p)
If you try to compare the two in practice I think you'll notice a huge difference.
My experience is that in general arguing with people pushes me to articulate my positions in compelling ways. If I want to clarify my thinking, which is something altogether different, other techniques work better for me.
But, sure, I agree that arguing with articulate intelligent people who disagree with me pushes me harder to articulate my positions in compelling ways than arguing with people who lack those traits.
↑ comment by Kawoomba · 2013-03-31T17:17:02.489Z · LW(p) · GW(p)
A few of you have met me on Omegle.
Ok, I'm interested. Describe what happened.
Replies from: FloraFuture↑ comment by FloraFuture · 2013-04-01T01:27:51.129Z · LW(p) · GW(p)
What do you mean? They were just friendly discussions, nothing super notable. I felt like all of them shared the same basic philosophy as me, so I felt like this was a community that I had a lot in common with.
Replies from: arundelo, Kawoomba↑ comment by arundelo · 2013-04-01T18:21:27.964Z · LW(p) · GW(p)
Just in case you're not sure what Kawoomba's alluding to, Omegle has such a reputation for being used for sexual stuff that Kawoomba was surprised to learn people use it for nonsexual stuff.
Replies from: FloraFuture↑ comment by FloraFuture · 2013-04-03T01:27:19.333Z · LW(p) · GW(p)
lol that makes sense, I forget sometimes about Omegle's reputation
comment by Petra · 2012-07-31T19:10:11.361Z · LW(p) · GW(p)
Hello!
I'm 18, an undergraduate at University of Virginia, pre-law, and found you through HPMOR.
Rationality has been a part of me for almost as long as I can remember, but for various reasons, I'm only recently starting to refine and formalize my views of the world. It is heartening to find others who know the frustration of dealing with people who are unwilling to listen to logic. I've found that it is difficult to become any better at argument and persuasion when you have a reputation as an intelligent person and can convince anyone of anything by merely stating it with a sufficiently straight face.
More than anything else, I hope to become here a person who is a little less wrong than when I came.
Replies from: John_Maxwell_IV, army1987, TheOtherDave, DaFranker, beoShaffer↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-08-02T01:28:46.884Z · LW(p) · GW(p)
This "intelligent reputation" discussion is interesting.
I had kind of an odd situation as a kid growing up. I went to a supposedly excellent Silicon Valley area elementary school and was generally one of the smartest 2-4 kids in my class. But I didn't think of myself as being very smart: I brushed off all the praise I got from teachers (because the villains and buffoons in the fiction I read were all arrogant, and I was afraid of becoming arrogant myself). Additionally, my younger brother is a good bit smarter than me, which was obvious even at that age. So I never strongly identified as being "smart".
When I was older I attended a supposedly elite university. At first I thought there was no way I would get in, but once I was accepted and enrolled, I was astonished by how stupid and intellectually incurious everyone was. I only found one guy in my entire dorm building who actually seemed to like thinking about science/math/etc. for its own sake. At first I thought that the university admissions department was doing a terrible job, but I gradually came to realize that the world was just way stupider than I thought it was, and that assuming I was anything close to normal was not an accurate model. (Which sounds really arrogant; I'm almost afraid to type that.)
I wonder how else being raised among those who are smarter/stupider than you impacts someone's intellectual development?
Replies from: Petra↑ comment by Petra · 2012-08-02T01:45:43.900Z · LW(p) · GW(p)
generally one of the smartest 2-4 kids in my class
This is interesting. Do you think your aversion to what you saw as arrogance, but which turned out to be (at least partially) accuracy, might have been overcome earlier if, for example, you'd been the clear leader, rather than having even a small group you could consider intellectual peers? Was that how you saw them?
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-08-02T02:05:02.120Z · LW(p) · GW(p)
It's possible. Although for me to have been the "clear leader" you probably would've had to remove a number of people who weren't in the top 2-4 as well. And even then I might have just thought of my family as unusually great, because there'd still be my terrifyingly smart younger brother.
Silicon Valley could be an odd place. I actually grew up in a neighborhood where most of the kids were of Indian descent (we played cricket and a game from India that I just found on Wikipedia called Kabaddi (I can't believe this is played professionally) in addition to standard US games). I didn't think to ask then, but I guess they were mostly children of immigrant software engineers? I haven't really lived anywhere other than the SF bay area yet, so I don't have much to compare it to. Right now I'm thinking I should prepare myself for way more stupidity and racial homogeneity.
Replies from: wedrifid↑ comment by wedrifid · 2012-08-02T04:06:58.174Z · LW(p) · GW(p)
Silicon Valley could be an odd place. I actually grew up in a neighborhood where most of the kids were of Indian descent (we played cricket and a game from India that I just found on Wikipedia called Kabaddi (I can't believe this is played professionally) in addition to standard US games).
It took me a few seconds pondering the playing of cricket as 'odd' to realize that I need to identify with the Indians in this story.
Replies from: shokwave↑ comment by A1987dM (army1987) · 2012-08-01T12:21:02.466Z · LW(p) · GW(p)
I've found that it is difficult to become any better at argument and persuasion when you have a reputation as an intelligent person and can convince anyone of anything by merely stating it with a sufficiently straight face.
Or even without a straight face. Sometimes I've made wild guesses (essentially thinking aloud) and, no matter how many “I think”, “may”, “possibly” etc. I throw in, someone who has heard that I'm a smart guy will take whatever I've said as word of God.
Replies from: Petra↑ comment by Petra · 2012-08-01T16:26:58.377Z · LW(p) · GW(p)
Yes. My personal favorite was in middle school, when I tried to dispel my assigned and fallacious moniker of "human calculator" by asking someone to pose an arithmetic question and then race me with a calculator. With a classroom full of students as witnesses, I lost by a significant margin, and not only saw no lessening of the usage of said nickname, but in fact heard no repeating of the story outside of that class, that day.
Replies from: DaFranker, army1987, TheOtherDave↑ comment by DaFranker · 2012-08-01T17:54:44.954Z · LW(p) · GW(p)
Beware indeed of giving others more bouncy walls on which evidence can re-bounce and double-, triple-, quadruple-, nay, Npple-count! I once naively thought to improve others' critical thinking by boosting their ability to appraise the quality of my own reasoning.
Lo and behold, for each example I gave of bad reasoning I had made or was making, each of them inevitably used this as further evidence that I was right, because not only had I been right much more often than not (counting hits and arguments-are-soldiers and all that), but the very fact that I was aware of any mistakes I was making proved that I could not make mistakes, for I would otherwise notice mistakes and thus correct myself.
TL;DR: This remains a profoundly important unsolved problem in large-scale distribution, teaching and implementation of cognitive enhancement and bias-overcoming techniques. It's even stated in Luke's "So you want to save the world" list of open problems as "raising the sanity waterline", a major strategic concern for ensuring maximal confidence of results in this incredibly absurd thing they're working on.
Replies from: Cyan↑ comment by Cyan · 2012-08-01T19:51:28.161Z · LW(p) · GW(p)
Npple
The term in common usage is "n-tuple".
Replies from: DaFranker↑ comment by DaFranker · 2012-08-01T20:09:27.615Z · LW(p) · GW(p)
Thanks. I paused for a second when I was about to write it, because I realized that I wasn't quite sure that that was how I should write it, but decided to skip over it as no information seemed lost either way and it had bonus illustrative and comical effect in the likely event that I was using the wrong term.
Replies from: wedrifid↑ comment by wedrifid · 2012-08-02T04:12:37.775Z · LW(p) · GW(p)
but decided to skip over it as no information seemed lost either way and it had bonus illustrative and comical effect in the likely event that I was using the wrong term.
Given all the evidence on 'bouncy' and 'npple-count' I must admit the comic illustration that sprung to mind may not have been the one you intended!
↑ comment by A1987dM (army1987) · 2012-08-01T22:22:35.272Z · LW(p) · GW(p)
Well... I just started to refuse to make calculations in my mind on demand, and I think I even kind-of freaked out a couple times when people insisted. It worked.
↑ comment by TheOtherDave · 2012-08-01T17:48:23.750Z · LW(p) · GW(p)
I try to keep this sort of thing in mind when interpreting accounts of the implausible brilliance of third parties.
↑ comment by TheOtherDave · 2012-07-31T21:02:48.421Z · LW(p) · GW(p)
it is difficult to become any better at argument and persuasion when you have a reputation as an intelligent person and can convince anyone of anything
Yeah, pretty much.
It is sometimes useful, at that point, to put aside the goal of becoming better at argument and persuasion, and instead pursue for a while the goal of becoming better at distinguishing true assertions from false ones.
↑ comment by DaFranker · 2012-07-31T19:37:31.733Z · LW(p) · GW(p)
Interestingly, the Authority Card seems subject to the Rule of Separate Magisteria. I'm sure you've also noticed this at some point. Basically, the reputedly-intelligent person will convince anyone of any "fact" by simply saying it convincingly and appearing to be convinced themselves, but only when it is a fact that falls within the Smart-Person Stuff magisterium in the listener's model. As soon as you depart from this magisterium, your statements are mere opinion, and thus everything you say is absolutely worthless, since 1/6 000 000 000 = 0 and there are over six billion other people who have an opinion.
In other words, I agree that it constitutes somewhat of a problem. I found myself struggling with it in the past. Now I'm not struggling with it anymore, even though it hasn't been "solved" yet. It becomes a constant challenge that resets over time and over each new person you meet.
Replies from: Petra↑ comment by Petra · 2012-07-31T19:56:12.843Z · LW(p) · GW(p)
Of course, as a young person, this obstacle is largely eliminated by the context. Interact with the same group of people for a long period of time, a group through which information spreads quickly, and then develop a reputation for knowing everything. Downside: people are very disappointed when you admit you don't know something. Upside: life is easier. More important downside: you get lazy in your knowledge acquisition.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-08-01T12:23:53.853Z · LW(p) · GW(p)
Downside: people are very disappointed when you admit you don't know something.
This. Sometimes, when I tell people I don't know how to help them with something, they accuse me of being deliberately unhelpful because I'm selfish, angry with them, or something.
↑ comment by beoShaffer · 2012-07-31T19:37:20.452Z · LW(p) · GW(p)
Hi Petra! Minor nitpick: it's rationality, not rationalism. Rationalism is something completely different.
Replies from: Petra, army1987↑ comment by A1987dM (army1987) · 2012-07-31T22:10:09.806Z · LW(p) · GW(p)
Why the hell was that downvoted???
Replies from: DaFranker↑ comment by DaFranker · 2012-07-31T23:12:00.184Z · LW(p) · GW(p)
My most reasonable guess:
Because every cause wants to be a cult, and some unwary cultists of LessWrong could very easily fool themselves into thinking that any nitpicking over the use of similar words is misinterpretation of the Holy Sequence Gospel, because the Chapter of Words Used Wrong clearly states that words are meant to communicate and clarify ideas and meanings, and it thus follows that arguing over words instead of arguing over their substance is inherently bad.
Replies from: DaFranker
comment by JohnEPaton · 2012-07-30T01:44:38.982Z · LW(p) · GW(p)
Hello,
My name is John Paton. I'm an Operations Research and Psychology major at Cornell University. I'm very interested in learning about how to improve the quality of my thinking.
Honestly, I think that a lot of my thoughts about how the world works are muddled at the moment. Perhaps this is normal and will never go away, but I want to at least try and decrease it.
At first glance, this community looks awesome! The thinking seems very high quality, and I certainly want to contribute to the discussion here.
I also write at my own blog, optimizethyself.com
See you in the discussion!
-John
comment by agravier · 2012-01-04T23:53:10.059Z · LW(p) · GW(p)
Hi Less Wrong, I'm a PhD researcher in Computational Neuroscience, with a background in AI and machine learning, and some past experience in the computing industry as software engineer. I live in Singapore, although I am French. Are there other members residing in Singapore?
comment by [deleted] · 2012-08-11T04:54:04.717Z · LW(p) · GW(p)
Hello,
I am a nearly-seventeen-year-old female in the US who was linked by a friend to the Quantum Physics sequence on LessWrong after trying to understand whether or not determinism is /actually/ refuted by quantum mechanics. I am an atheist, I suppose.
This all began as a fascination with science because I thought it would permit me to attain ultimate knowledge, or ultimate understanding and thus control of "matter". Later, I became fascinated with nihilism and philosophy, in search of defining "objectivity". It took off from there and now I am currently concerned with consciousness and usage of artificial intelligence to transfer our biological intelligence to a more effective physical manifestation.
I'm a little scared, naturally, because I think this would change a lot of what we currently understand as humans. As Mitchell Heisman describes, there exists a relationship between the scientist and the science. If the scientist is changed, I would think that the science, or knowledge, would in itself change. Some questions I have ATM: "Does objectivity exist? Can it be created? Can the notion or belief or idea of objectivity be destroyed? Will intelligence become disinterested in the ideas we are currently interested in and live in a universe free from these ideas and knowledge; can it perhaps eliminate knowledge rather than be ignorant of it? Will objectivity become so irrelevant as to not exist (as a possibility in our think-space)?"
So, I wonder, why, if so, is immortality more valuable than mortality?
I enjoy thinking about things, discovering new thoughts. I still have a lot of factual refining to do and I'm actively searching for resources to help me accomplish this. Thus I find myself here on lesswrong.org.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2012-08-11T07:03:48.315Z · LW(p) · GW(p)
Hello. I think you are the first person I've ever seen cite Mitchell Heisman as if he were just another thinker, rather than a weird guy who forced his ideas upon the attention of the world by committing suicide.
You're interested in the concept of "objectivity". It's certainly a crossroads concept where many issues meet. Maybe the major irony in the opposition between "objectivity" and "subjectivity" is that objectivity is a form of subjectivity! Here subjectivity is more or less a synonym for consciousness, and a subjectivity is a sensibility or a mindset: a state of mind in which the world is experienced and judged in a particular way.
Consciousness is a relation between an experiencing subject and an experienced object, and objectivity is consciousness trying to banish from its perceptions (or cognitions) of the experienced object, any influences which arise from the experiencing subject. In a lot of modern scientific and philosophical thought, this has been taken to the extreme of even trying to escape the existence of an experiencing subject.
Trying to catalogue and diagnose all the ways in which this happens would be a mammoth task, but one extreme form of the syndrome would be where the "scientific subject" achieves perfect unconsciousness of self, and exists in a subjective world that seems purely objective. That is, they would have a belief system that nothing exists but atoms (for example), and not only would they find a way to interpret everything they experience as "nothing but atoms", but they would also manage to avoid noticing their own mental processes in a way that would disturb this perception, by reminding them of their own existence.
A more moderate state of mind would be one in which self-awareness is allowed, but isn't threatening because the thinker has some way of interpreting their thoughts, and their thoughts about their thoughts, as also being nothing but atoms. For example, the brain is a computer, and a thought is a computation, and the computation has a "feeling" about it, and consciousness is made up of those feelings. A set of beliefs like that would be far more characteristic of the average materialist, than the previous extreme case, and it's also likely to be healthier, because the evidence of the self's existence isn't being repressed, it's just being interpreted to make it consistent with the belief in atomism.
The phenomenon of a personal existential crisis arising from equating objectivity with nihilism via "life has no objective meaning", is not something I remember ever experiencing, and I can't identify with it much. I can understand people despairing of life because it's bad for them and it won't stop, or even just doubting its value because their hopes have burned away, so it's not bad but it's not good either, it's just empty. But apparently I was never one of those people who thought life wouldn't be worth living if I couldn't find an objective morality or an objective meaning or an objective reason for living. This outlook seems to be a little correlated with people who were raised religious and then became atheists (I was raised as an agnostic), and I would think that sometimes the feeling of meaninglessness is more personal in origin than the one who experiences it realizes. In the religious case, for example, one may suppose that they felt personally uplifted back when they thought that reality had a purpose and this purpose included eternal life for human beings; so it may seem that the problem is one of there being "no objective purpose", but really the problem has more to do with the change in their personal ontological status.
I mention this because I think that there are "existential disorders" experienced by modern people which also have their origin in the belief in a scientific universe that doesn't contain subjects or subjectivity. Again, the forms are multitudinous and depend on what science is thought to be saying at the time. People having a crisis over epiphenomenalism are different from people having a crisis over "all possible things happen in the quantum multiverse". You don't say you're having a crisis, but there's a disturbing dimension to some of what you think about, and I would bet that it arises from another aspect of the attempt to "be objective" when "objectivity" seems to imply that you don't or can't exist, don't have any personal agency, or wherever it is that the scientific outlook seems to contradict experience.
I have been promoting phenomenological philosophy in discussions elsewhere, and phenomenology really is all about being objective about subjectivity. In other words, one is not taking one's consciousness and purging all evidence of its subjective side, just in order to be consistent with an imagined picture of reality. It's more like how western culture imagines Buddhism to be: you attend to your thoughts and feelings as they arise, you do not repress them and you do not embrace them. But the goals of phenomenology and of Buddhism are a little different - Buddhism is ultimately about personal salvation, removing oneself from the world of suffering by allowing attachments to reveal their futility; whereas phenomenology is more purely scientific in spirit, it's an attempt just to conceive the nature of consciousness correctly and objectively.
You mention artificial intelligence and possibly mind uploading. These days, the standard view of how the mind fits into nature is the computational one - the mind is a program running in the brain - with a bit of stealthy dualism allowing people to then think of their experiences as accompanying these computations; this is how the "moderate materialist", in my earlier description, thinks. Naturally, people go on from this to suppose that the same program running on a different computer would still be conscious, and thus we get the subculture of people interested in mind uploading.
Long ago I carried out my own investigations into phenomenology and physics, and came to disbelieve in this sort of materialist dualism. The best alternative I found came from entanglement in quantum theory. With entanglement, you have a single complicated wavefunction guiding two or more particles that can't be split into a set of simpler wavefunctions, one for each particle. (When the joint wavefunction can be split in this way, it's called "factorizable", it factorizes into the simpler wavefunctions.) There is some uncertainty about the reality implied by the equations of quantum mechanics, to say the least. One class of interpretations explains entanglement by saying that there are "N" different objects, the particles, and they just interact to produce the correlations. But another class of interpretations say that when you have entanglement, there's only one thing there, though it may be "present" in "N" different places.
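For readers who want the notation behind "factorizable": a minimal textbook illustration (not part of the original comment) is the Bell state of two qubits, which cannot be written as a product of single-particle states:

```latex
% A factorizable (product) state of two qubits has the form
%   |\psi\rangle = (a|0\rangle + b|1\rangle) \otimes (c|0\rangle + d|1\rangle).
% The Bell state is not of this form:
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
% Factorizing would require amplitudes with ac = bd = 1/\sqrt{2} and
% ad = bc = 0; but ad = 0 forces a = 0 or d = 0, contradicting
% ac \neq 0 and bd \neq 0. Hence the two particles share one joint
% wavefunction that cannot be split into one wavefunction per particle.
```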
My best idea about how consciousness works is that, first of all, it is the property of a single thing, a big entangled object in the sense of the second interpretation. Refining that hypothesis to make it detailed and testable is a long task, but I can immediately observe that it is in contradiction to the usual idea of mind uploading, according to which your mind is physically a large number of distinct parts, and it can be transferred from one place to another by piecemeal substitution of parts, or even just by creating a wholly new set of parts and making it behave like the old set. If a conscious mind is necessarily a single physical thing, all you can do is just move it around, you can't play the game of substituting transistors for neurons one at a time. (Well, if the "single physical thing" was a big bundle of entangled electrons, and neurons and transistors just host some of that bundle, then maybe you could. But the usual materialist conception of the mind, at work in this thought experiment of substitution, is that the mind is made of the neurons.)
I'm already notorious on LW for pushing phenomenology and this monadic idea of mind, and for scorning the usual approach as crypto-dualist, so I won't belabor the point. But it seems important that you should know there are radical conceptual alternatives, if you're engaged in these enjoyable-yet-scary meditations on the future of intelligence. The possibilities are not restricted just to the ideas you will find readymade in existing futurist discourse.
comment by erbeeflower · 2012-08-06T19:37:35.120Z · LW(p) · GW(p)
Hello people, 49-year-old father of 4 sons (ages 17-27), eldest of 9. I come from a background of Mormonism, my parents having been converted when I was 3.
So my reality was the dissonance of Mormon dogma and theology vs what I was being 'taught' at school, vs what I experienced for myself.
Now, having been through the divorce of my parents (gross hypocrisy if you're a Mormon), the suicide of my brother, and my own divorce, also finding myself saying I would die/kill for my beliefs, I began to realise what a mess I was and started asking questions, leaving the church (demonstrating with placards every Sunday for 2 years) in 1996.
So I found myself wanting and needing a new philosophy! I'm particularly interested in learning how to 'be less wrong'! I'm still looking around and am currently interested in the non-aggression principle.
I look forward to learning the tools I see here, so that I may make more considered choices. I recognise I'm a clumsy communicator and probably somewhat slower in comparison to a lot of you. Anyway, I look forward to watching and learning, maybe even contributing one day! Tim.
Replies from: Dolores1984↑ comment by Dolores1984 · 2012-08-06T20:05:53.865Z · LW(p) · GW(p)
Hello, Tim! Welcome to Less Wrong. Don't be too impressed, we're all primates here. If you're interested in learning about the cognitive tools people use here, I recommend reading the sequences. They're a little imposing due to sheer length, but they're full of interesting ideas, even if you don't fully agree. Best of luck, and I hope you find something of value here.
-Dolores
comment by chloejune123 · 2012-05-07T20:28:24.947Z · LW(p) · GW(p)
Hi! I found LW by HPMoR like so many other people, and I have found a lot of interesting articles on here. I'm only 12, so there are tons of articles that I don't understand, but I am determined to figure them out. My name is Chloe and I hope that we can be friends!
comment by Rada · 2012-04-02T19:04:38.283Z · LW(p) · GW(p)
Hello to all! I'm a 17-year-old girl from Bulgaria, interested in Mathematics and Literature. Since I decided to "get real" and stop living in my comfortable fictional world, I've had a really tough year destroying the foundations of my belief system. Sure, there are scattered remains of old beliefs and habits in my psyche that I can't overcome. I have some major issues with reductionism and a love for Albert Camus ("tell me, doctor, can you quantify the reason why?" ).
In the last year I've come to know that it is very easy to start believing without doubt in something (the scientific view of the world included), perhaps too easy. That is why I never reject an alternative theory without some consideration, no matter how crazy it sounds. Sometimes I fail to find a rational explanation. Sometimes it's all too confusing. I'm here because I want to learn to think rationally but also because I want to ask questions.
Harry James Potter-Evans-Verres brought me here. To be honest, I hate this character with a passion; I hate his calculating, manipulative attitude, and this is not what I believe rationality is about. I wonder how many of you see things as I do and how many would think me naive. Anyway, I'm looking forward to talking to you. I'm sure it's going to be a great experience.
Replies from: wallowinmaya↑ comment by David Althaus (wallowinmaya) · 2012-04-02T22:51:06.716Z · LW(p) · GW(p)
Hi Rada, welcome to Lesswrong!
I share your aversion to reductionism, at least from an emotional, albeit not epistemic, point of view. I'm afraid we have to deal with living in a reductionistic universe. But e.g. this post might persuade you that even a reductionistic universe can sometimes be quite charming, although by no means perfect.
Oh, and yay for Camus!
comment by [deleted] · 2012-03-26T11:53:29.932Z · LW(p) · GW(p)
I'm male, early 40s who grew up in the midwestern US but have lived in the UK for the past 10 years. I had a very strong evangelical/fundamentalist upbringing, but at the same time an obsessively "rational" attitude which developed in large part from my covert reading of period sf (the sort in which rebellious yet rational engineers outsmarted their rigid hierarchically-minded superiors and their extremely technologically advanced antagonists at the same time). No surprise therefore that my religious beliefs began to dissolve as soon as I went to university, finally coming out of the closet as a de-convert in 1999.
I'm a postdoctoral researcher in cognitive science - with secondary interests in philosophy of science, especially the manner of scientific inference and the different extents to which Bayesian inference has taken hold in different scientific domains at the present time. I've been lurking here for a few years after seeing posts or comments in various places elsewhere by people like ciphergoth and David Gerard (neither of whom I know in person).
I also tend to make way too many parenthetical statements when I write; even though I am completely aware I am overdoing it I just can't avoid it.
Replies from: RobertLumley↑ comment by RobertLumley · 2012-03-26T13:03:50.012Z · LW(p) · GW(p)
I also tend to make way too many parenthetical statements when I write; even though I am completely aware I am overdoing it I just can't avoid it.
Hey, me too! (And it's damn annoying.)
comment by Konradical · 2012-01-19T21:06:21.369Z · LW(p) · GW(p)
Hello. My name is Konrad and I stumbled upon LessWrong a few weeks ago from Reddit. I've browsed some of the main pages since then, but until now haven't committed to reading much. I hope that after registering I'll be able to participate in the community and learn more. I'm 16 years of age and would describe myself as an agnostic theist. I'd also say that I'm curious about knowledge and the world so hopefully I'll learn a lot from this website.
Replies from: TimS
comment by mbrubeck · 2012-01-02T10:26:33.810Z · LW(p) · GW(p)
[deleted]
Replies from: Solvent↑ comment by Solvent · 2012-01-02T11:42:07.900Z · LW(p) · GW(p)
Nice to have you here. Those are some cool names you dropped.
I approve of attending the Quaker meeting: I don't think there's any better way to quickly meet good people than to find religious groups.
Did you take the 2011 Less Wrong survey, out of curiosity?
comment by glennonymous · 2011-12-31T12:33:35.914Z · LW(p) · GW(p)
Hi all,
My name is Glenn Thomas Davis. I am a 48-year-old male living in Warren, NJ with my wife and 5-year-old daughter. I was born and raised in Ketchikan, Alaska. I am a creative director for a pharmaceutical marketing agency. I have been interested in science and skepticism since reading Gödel, Escher, Bach in my 20s, but became a really serious skeptic and atheist after I started listening to the Skeptics' Guide to the Universe podcast in 2005ish. I became a fan of Eliezer and the Singularity Institute after seeing him speak on Bloggingheads 3 years ago, and I recently subscribed to the Overcoming Bias NYC listserv.
Most of my online friends are from the San Francisco Bay Area where I lived for many years. Not exactly the world's most rational bunch, and they don't often appreciate my atheist rants. I have been delaying introducing myself here because I am resistant to putting in the effort and time to become a known presence from the ground up, or even to write a proper introductory post. However, it recently occurred to me I could just share pieces of writing I've already done for other, less like-minded groups. Here's one:
--
(In response to an otherwise rational person who trotted out the following canard in a post about religion)
None of this proves there is no soul (you can't prove a negative).
The statement "you can't prove a negative" is meaningless. Or you could say that it is true in a technical, superficial way, but useless.
This is because your statement applies equally well to ALL nonsensical claims. After all, I can't prove Santa Claus doesn't exist. True, we could fly to the North Pole right now and demonstrate there is no Santa Claus there, but you could always argue that his workshop is invisible. Or that Santa Claus is real, but his workshop is in an undisclosed chicken coop in Jamaica. Or... ?
Saying "you can't prove a negative" perpetuates a pernicious distortion, which is that science is about the black-and-white notion of proving and disproving things. As you know, that is NOT what science is about. Science is about reducing our level of uncertainty about how well our beliefs map onto reality. Looking at it this way gives us a useful way to address the question of whether Santa Claus exists.
To reduce our uncertainty about the existence of Santa Claus, we can try to find alternative explanations for the phenomena that are supposed to be explained by the existence of Santa Claus. Which of these claims is more likely to be true?
There is a real Santa Claus who travels on a flying sled and delivers presents to children everywhere each Christmas Eve.
Santa Claus is a fictional character. Children who receive Christmas presents usually receive them from their parents and relatives, who find it useful to lie to them sometimes about the existence of Santa Claus.
I can't completely prove or disprove either of these claims any more than I can prove or disprove the existence of any other supernatural character, but lines of evidence could be marshaled that would establish that 2 is more likely to be true than 1, beyond a reasonable doubt.
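The "lines of evidence" framing can be made concrete with Bayes' rule. Here is a toy sketch in Python; the hypothesis labels and every probability are invented purely for illustration, not taken from the post:

```python
# Toy Bayes update comparing the two Santa hypotheses.
# All numbers are made up for illustration.
prior = {"santa": 0.5, "parents": 0.5}  # start maximally undecided

# Assumed likelihood of some observed evidence (say, store receipts
# matching the presents) under each hypothesis.
likelihood = {"santa": 0.01, "parents": 0.9}

# Posterior is proportional to prior * likelihood; normalize to sum to 1.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # "parents" comes out far more probable than "santa"
```

Neither hypothesis ever reaches probability exactly 0 or 1, which is the point being made: evidence shifts uncertainty rather than "proving a negative."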
This applies equally well to the question of the existence of gods and ghosts:
Human consciousness resides in a disembodied energy field, called a 'soul', that persists after death.
Human consciousness resides in human brains, and perishes when a person's brain stops working, i.e. at death. The idea of a 'soul' is a myth left over from the days when humans lacked a detailed understanding of the way mental processes work.
The same thing I said WRT the existence of Santa Claus applies to these two claims. I cannot prove or disprove either claim, but I can marshal a great deal of evidence for scenario 2, and little or no good evidence for scenario 1. Hence 2 is correct beyond a reasonable doubt, by which I mean beyond the doubt of a person who applies the same rules of evidence and logic to this question that he applies to any question in which he has no investment in the outcome.
The existence of Santa Claus or the Easter Bunny is thus on EXACTLY the same footing as the existence of gods or ghosts of any variety. A person who takes action on the premise that there are invisible ghosts that will help or hinder them is in the same position as the person who doesn't buy any presents for their children on the grounds that, since the children have been good this year, surely Santa Claus will deliver presents under the tree on Christmas morning...
I respectfully urge you to therefore stop saying "you can't prove a negative," as if this somehow puts the existence of gods and ghosts in a special category where it isn't subject to the same rules of evidence to which we all subject all the other claims people make, every day.
--
Nice to meet you all --Glenn Thomas Davis
Replies from: Ezekiel↑ comment by Ezekiel · 2012-01-01T10:29:19.482Z · LW(p) · GW(p)
Well-put. Although, strictly speaking, you can prove a negative. Given the basic axioms of number theory, the statement ~0=S0 (zero does not equal one) is provable.
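For what it's worth, that example can be machine-checked; a minimal sketch in Lean 4 (my choice of prover, not the commenter's), where `decide` simply evaluates the decidable equality on naturals:

```lean
-- "Proving a negative": zero is provably not equal to one.
example : (0 : Nat) ≠ 1 := by decide
```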
Replies from: glennonymous↑ comment by glennonymous · 2012-01-01T12:22:50.590Z · LW(p) · GW(p)
Great point, Ezekiel, thanks.
comment by Kouran · 2011-12-27T17:09:36.109Z · LW(p) · GW(p)
Hello Less Wrong community, I am Kouran.
What follows may be a bit long, and maybe a little dramatic. I'm sorry if that is uncourteous, still I feel the following needs saying early on. Please bear with me.
I'm a recently be-bachelored sociologist from the Netherlands, am male and in my early twenties. I consider myself a jack of several trades – among them writing, drawing, cooking and some musical hobbies – but master of none. However, I do entertain the hope that the various interests and skills add up to something effective, to my becoming someone in a position to help people who need it, and I intend to take action to approach this end.
I found Less Wrong through the intriguing Harry Potter fanfiction story called 'The Methods of Rationality'. The story entertains me greatly, the more abstract themes stimulate me, and I find myself wishing to enter discussions about them. Instead of bothering the author of the story, I decided to have a look here instead. Please note that I write this before having read any of the Sequences and only a few smaller articles. I intend to get on that soon, but as introductions go I feel it is better to present myself first. I hope you will forgive any offense the following section may give.
My relation to rationalism is quite strained. I am more often in the position of attacking theories concerned with rationality than of defending them. Often I find that arguments which assume people are rational and make informed choices are classist and uncritical of the way people are shaped by society, and vice versa. Often the desirable outcome of an action or 'strategy' is taken to have been the goal that the actor deliberately attempted to attain, at the cost of more likely explanations that make fewer unfounded assumptions. I do not at all mean that Less Wrong is implicated in this; in fact, I hope I am right to believe that quite the opposite is being attempted here. My point is that I am more used to denying people's rationality in arguments than to invoking it as a way to explain social life.
That is not to say I deny that people can engage in rational thought. Rather, it appears to me that human beings are emotional, situationally defined social animals much more than they are rational actors. Rational thought, as I see it, is something that occurs in certain relatively rare circumstances, and when it occurs it is always bound to people's social, emotional, physical lives. Often it is group membership and identification, rather than an objective calculation of merit, that defines the outcome of a deliberation, when a deliberation even takes place at all.
So then why am I here? For one thing, I would like to discuss these ideas with people who are knowledgeable about them, but who are also tolerant enough of dissidence to do so in a relaxed and, well, rational way. For another, I believe that more rationality, as truly rational as we can make it, will help our species get through the ages and improve the fate of its members and the other beings it dominates. The Methods of Rationality and what little I've seen of this community have led me to believe that, despite having a perspective that differs from mine, people at Less Wrong are aware of some of the ways in which people are inherently not rational; that rationality is something that needs to be promoted and created, not something that is already the dominant cause of human action. For a third, I cannot deny that I am a person who engages in a lot of thinking, and despite our differing perspectives I believe this community may be able to help me develop.

My 'story of origin', if I am to present myself to you as a rationalist, involves a change in my views regarding the false or harmful style of rationalism I mentioned earlier in this post. I once struggled with the idea that rationalism itself is to blame for perceived injustice and the failings of modernity, but at some point I came to the conclusion that this is not the case. At fault is not a human rationality that will forever remain at odds with our emotions and with those people who were not sufficiently introduced to it; people should be able to deliberate rationally while understanding that most of their being is disinclined to yield to abstract models and lofty humanist ideals. At fault is not rationalism or the imperfection of our brains, but incomplete and erroneous rationalism that is employed to serve people who have no need or appreciation for a critical eye cast upon themselves.
I think the community of Less Wrong is very right to consider human rationality an art.
I thank you for your patience,
– Kouran.
Replies from: orthonormal, thomblake, lessdazed, fburnaby↑ comment by orthonormal · 2011-12-28T01:47:11.761Z · LW(p) · GW(p)
It sounds like the Straw Vulcan talk might be relevant to some of your thoughts on rationality and emotion...
Replies from: Kouran↑ comment by Kouran · 2012-01-04T14:37:28.559Z · LW(p) · GW(p)
Orthonormal, thank you for suggesting the Straw Vulcan talk to me. It was a fairly interesting talk; I was encouraged to see rationality defined, through various examples, in a way that is useful, accepts emotionality, and works with it. I did not myself have a Straw Vulcan view of rationality, far from it, but I do recognise a few of its flawed features in rationalistic social theories.
However, even this speaker seemed to overstate people's rationality. An example is given of teenagers doing dangerous things despite stating they consider the risks. The taking of the risk is attributed to flawed reasoning, miscalculation of risks and the like. From my perspective, it is much more likely that the teenagers considered the risks because they had been warned against the behaviour and realised that their peer group was about to do something their parents, guardians, etc. would disapprove of; they were somewhat anxious because they were aware of a moral conflict. However, their bond with the peer group, the emotional dynamic of the situation, was not disrupted by the doubt, nor was the doubt strong enough for them to exclude themselves from the situation (to leave), and so they took whatever risk they had pondered. I wouldn't attribute this to flawed thinking; as I see it, the thinking was fairly irrelevant to the situation, as it seems to me that it is to most situations.
Replies from: Richard_Kennaway, Swimmer963↑ comment by Richard_Kennaway · 2012-01-04T15:44:46.012Z · LW(p) · GW(p)
Consider this (and this related thread) from the genes' point of view. It may be worth having all of your carriers do risky things, if the few that die of them are more than made up for by the ones who survive and learn something from the experience (such as how to kill big fierce animals without dying).
For a gene, there's nothing reckless about having your carriers act recklessly at a stage in their lives when their reproductive survival depends on learning how to do dangerous things.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-04T15:18:08.064Z · LW(p) · GW(p)
An example is given of teenagers doing dangerous things despite stating they consider the risks.
It seems to me that there is a systematic bias in teenage thinking, especially of the male sex; many teenagers I know/have known in the past place a much higher weight on peers' opinions than on parents' opinions, and a considerably higher weight on 'coolness' than on 'safeness.' Cool actions are often either unsafe or disapproved of by the parents' generation. I've started to wonder whether there might be a good evolutionary reason for teenagers to act this way. After all, being liked and accepted by peers is more important to finding a mate than being accepted by the older generation. In an ancestral environment, young males' ability to confidently take risks (e.g. in hunting) would have been important to success, and thus a factor in attractiveness to girls. Depending on just how risky the 'cool' things to do are, and how tough the competition for mates, the boys who ignored their parents' warnings and took risks with their peer group might have had more children than those who were more cautious...and thus their actions would be instrumentally rational. If this hypothesis were true, the 'thinking' that leads modern teenagers to do dangerous things would be an implicit battle of popularity-vs-safety, with popularity usually winning because of an innate weighting.
This is a testable, falsifiable hypothesis, if I can find some way of testing it.
Replies from: MixedNuts, dlthomas, gwern↑ comment by dlthomas · 2012-01-04T17:11:14.381Z · LW(p) · GW(p)
If it is in the genetic interests of the children to perform actions with such-and-such a risk level relative to the reward in social recognition, why is it not in the genetic interests of the parent to promote that precise risk level in the child?
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-05T21:39:16.720Z · LW(p) · GW(p)
No idea, actually. The following is possible stuff that my brain has produced, i.e. pure invented bullshit.
It could be that this discrepancy used to be less of a problem, when society was more constant from one generation to the next and most 'risky' behaviours were obviously rewarding to both teens and adults. Based on anecdotal conversations with my parents, it seems like some things that are considered 'cool' by most of my own peer group were considered 'just stupid' by the people my parents hung out with when they were teenagers.
There's also the factor that in the modern environment, as compared to the ancestral environment, most people don't keep the same group of friends in their twenties and thirties as in their teens. The same person can be unpopular in high school, when "coolness" is more correlated with risk-taking, and yet be popular in a different group later when they have a $100,000-a-year job and an enormous house with a pool in it, and nobody remembers that back in high school they had no friends. Parents who have survived this phase may consider it okay for their children to be less popular as teenagers in order to prepare for later "success" as they define it, but to a teenager actually living through it day by day, the adaptation-executers (http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/) in their brain will still rate their peers' approval as far more important than safety, and adjust their pleasure and pain in different situations accordingly...since, in an ancestral environment of small groups that stayed together, impressing people at age 14 would have a much greater effect on your later success as an adult.
↑ comment by gwern · 2012-01-04T17:08:16.554Z · LW(p) · GW(p)
You may find the article in http://lesswrong.com/lw/jx/we_change_our_minds_less_often_than_we_think/5lkb well worth your time.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-05T21:09:51.452Z · LW(p) · GW(p)
Neat. Thanks.
↑ comment by thomblake · 2011-12-27T17:26:48.216Z · LW(p) · GW(p)
That's just about right. Humans are massively irrational; but we tend to regard that as a bug and work to fix it in ourselves.
Replies from: Kouran↑ comment by Kouran · 2012-01-04T14:52:21.995Z · LW(p) · GW(p)
Hello Thomblake,
Thanks for the welcome! But I really can't agree with your statement.
Irrationality, which I would for now define as all human action and awareness that isn't rational thinking or that doesn't follow a rationally defined course of action, is not a 'bug'; rather, it comprises most of the features that make us up and allow our continued existence. These make up a much greater part of what we are than those faculties or situations that we might call rational, and most of them deserve more respect than being called bugs. Especially from an evolutionary perspective, most of these traits and processes should definitely be considered features to which we owe our continued existence. Often these things conflict with a rationality we hope to attain, but I think that at other times they are necessary prerequisites to it. Emotions can be qualified, or 'legitimated', by reflexive rational thought, and we can try to purge emotions we deem to be personal hurdles, but still most of our lives take place outside the realm of rationality. Rationality should be used to improve the rest of our lives and to improve the way humankind is organised, how it organises its sphere of influence. I think it's a mistake to think rationality could, or should, be everything we are.
Replies from: TimS, thomblake↑ comment by TimS · 2012-01-04T15:08:14.748Z · LW(p) · GW(p)
Irrationality, which I would for now define as all human action and awareness that isn't rational thinking or that doesn't follow a rationally defined course of action
Some of the disagreement is definitional. We define rationality as achieving your goals. Rationality should win. Any act or [ETA: mental] process that helps with achieving goals is rational.
There's a follow-up assertion in this community that believing true things helps with achieving goals. Although not all people in history have believed that, it's hard to deny that human thinking patterns are not well calibrated for discovering and believing true things. (Although they are better than anything else we've come across.)
Replies from: Kouran↑ comment by Kouran · 2012-01-04T15:50:21.194Z · LW(p) · GW(p)
If 'effective', in the very loosest sense, is drawn into what is called rational, doesn't that confuse the term?
I mean, to my mind, having a dietician for a parent (leading to fortuitous fortitude which assists in the achievement of certain goals) is not rational, because it is not in any way tied to the 'ratio'. This thing that helps you achieve goals is simply convenient, or a privilege, not rational at all.
Replies from: TheOtherDave, thomblake↑ comment by TheOtherDave · 2012-01-04T16:23:14.700Z · LW(p) · GW(p)
If I have a choice of parents, and a dietician is the most useful parent to have for achieving my goals, then yes, choosing a dietician for a parent is a rational choice. Of course, most of us don't have a choice of parents.
If I believe that children of dieticians do better at achieving their goals than anyone else, then choosing to become a dietician if I'm going to have children is a rational choice. (So, more complicatedly, is choosing not to have children if I'm not a dietician.)
Of course, both of those are examples of decisions related to the state of affairs you describe.
Talking about whether a state of affairs that doesn't involve any decisions is a rational state of affairs is confusing. People do talk this way sometimes, but I generally understand them to be saying that it is symptomatic of irrationality in whoever made the decisions that led to that state of affairs.
Replies from: Kouran↑ comment by Kouran · 2012-01-04T16:49:13.447Z · LW(p) · GW(p)
Talking about whether a state of affairs that doesn't involve any decisions is a rational state of affairs is confusing. People do talk this way sometimes, but I generally understand them to be saying that it is symptomatic of irrationality in whoever made the decisions that led to that state of affairs.
What do you mean? Whose irrationality? Isn't it more straightforward (it's there among the 'virtues of rationality' no?) to just not call things 'rational' if they do not involve thinking?
Replies from: TheOtherDave, Vladimir_Nesov, TheOtherDave, TimS↑ comment by TheOtherDave · 2012-01-04T21:29:10.061Z · LW(p) · GW(p)
Incidentally, you've caused me to change my mind.
http://lesswrong.com/r/discussion/lw/96n/meta_rational_vs_optimized/
Replies from: Kouran↑ comment by Vladimir_Nesov · 2012-01-04T16:57:11.470Z · LW(p) · GW(p)
Isn't it more straightforward (it's there among the 'virtues of rationality' no?) to just not call things 'rational' if they do not involve thinking?
I don't think so, since that would be a trivial property that doesn't indicate anything, for there is no alternative available. Decisions can be made either correctly or not, and it's useful to be able to discern that, but the world is always what it actually is.
↑ comment by TheOtherDave · 2012-01-04T17:51:31.748Z · LW(p) · GW(p)
What do you mean? Whose irrationality?
It varies, and I might not even know. For example, if the arrangement of signs on a particular street intersection causes unnecessary traffic congestion, I might call it an irrational arrangement. In doing so I'd be presuming that whoever chose that arrangement intended to minimize traffic congestion, or at least asserting that they ought to have intended that. But I might have no idea who chose the arrangement. (I might also be wrong, but that's beside the point.)
But that said, and speaking very roughly: irrationality on the part of the most proximal agent(s) who was (were) capable of making a different choice.
Isn't it more straightforward (it's there among the 'virtues of rationality' no?) to just not call things 'rational' if they do not involve thinking?
Yes, it is.
For example, what I just described above is a form of metonymy... describing the streetsign arrangement as irrational, when what I really mean is that some unspecified agent somewhere in the causal history of the streetsign was irrational. Metonymy is common among humans, and I find it entertaining, and in many cases efficient, and those are also virtues I endorse. But it isn't a straightforward form of communication, you're right.
Incidentally, I suspect that most uses of 'rationality' on this site (as well as 'intelligence') could be replaced by 'optimization' without losing much content. Feel free to use the terms that best achieve your goals.
↑ comment by TimS · 2012-01-04T16:56:02.212Z · LW(p) · GW(p)
If there is no alternative, there doesn't seem to be a possibility of improvement. If improvement is impossible, what exactly are we worrying about?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-01-04T17:00:06.087Z · LW(p) · GW(p)
It's useful to know some things that are unchangeable.
Replies from: TimS, Kouran↑ comment by TimS · 2012-01-04T17:42:19.747Z · LW(p) · GW(p)
Sure, but asking what the rational decision is when there is literally no decision to make is not a well-formed question.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-01-04T17:45:15.477Z · LW(p) · GW(p)
You use an invalid argument to argue for a correct conclusion. It doesn't generally follow that something that can't be improved is not worth "worrying about", at least in the sense of being a useful piece of knowledge to pay attention to.
Replies from: TimS↑ comment by TimS · 2012-01-04T17:49:08.686Z · LW(p) · GW(p)
What do you mean? Whose irrationality? Isn't it more straightforward (it's there among the 'virtues of rationality' no?) to just not call things 'rational' if they do not involve thinking?
It's a definitional dispute, mostly caused by my original failure to specify that I meant mental processes in this comment.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-01-04T18:01:43.781Z · LW(p) · GW(p)
It's all irrelevant to my point, which is a self-contained criticism of a particular argument you've made in this comment and doesn't depend on the purpose of that argument.
(Your quoting someone else's writing without clarification, in a reply to my comment, is unnecessarily confusing...)
↑ comment by Kouran · 2012-01-04T17:12:20.505Z · LW(p) · GW(p)
I don't think so, since that would be a trivial property that doesn't indicate anything....
I think it would indicate that not every action is being thought over; that some things a person does which lead to the achievement of a goal may not have been planned for or acknowledged. By calling all things that are useful in this way 'rational', I think you'd be confusing the term, making it into a generic substitute for 'good' or 'decent'. To me, that seems harmful to an agenda of improving people's rational thinking.
..., for there is no alternative available.
I would like to propose the alternatives of 'beneficial' and 'useful'. Otherwise we could consider 'involvement in causality' or something like that.
I think the word rationality could use protection against too much emotional attachment to it. It should retain a specific meaning instead of becoming 'everything that's useful'.
Replies from: TimS↑ comment by TimS · 2012-01-04T17:40:49.244Z · LW(p) · GW(p)
I think the word rationality could use protection against too much emotional attachment to it. It should retain a specific meaning instead of becoming 'everything that's useful'.
I'm not in love with using the word "rationality" for what this community means by rationality. But (1) I can't come up with a better word, (2) there's no point in fighting to the death for a definition, and (3) thanks to the strength of various cognitive biases, it's quite hard to figure out how to be rational and worth the effort to try.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-04T17:57:07.952Z · LW(p) · GW(p)
I think various forms of "optimization" would probably fit the bill. That is, pretty much everything this site endorses about "rationalists" it would also endorse about "efficient optimizers."
But the costs associated with such a terminology shift don't seem remotely worth the payoff.
↑ comment by thomblake · 2012-01-04T16:12:34.506Z · LW(p) · GW(p)
I mean, to my mind, having a dietician for a parent ... is not rational
Assuming for the moment that having a dietitian for a parent really does help one achieve one's goals, yes it is rational, to the extent that it can be described as an act or process. That is, if you can influence what sorts of parents you have, then you should have a dietitian.
Similarly, it would be rational for me to spend 20 minutes making a billion dollars, even though that's something I can't actually do.
Replies from: Kouran↑ comment by Kouran · 2012-01-04T16:24:24.644Z · LW(p) · GW(p)
Having a dietician for a parent could help you achieve all kinds of goals. Generally you'd be likely to have good health; you're less likely to be obese. Healthy, well-fed people tend to be taller, and a dietician could use diet changes to reduce acne problems and whatnot. It is generally accepted that healthy, tall, good-looking people have better chances at achieving all sorts of goals. Also, dieticians are relatively wealthy, highly-educated people. A child of a dietician is a child of privilege, upper middle class!
Anyway, my point is exactly that nobody can choose their parents.
TimS said:
Any act or process that helps with achieving goals is rational.
I would consider parenthood a process. But having a certain set of parents instead of another has little to do with rationality, despite most parents being 'useful'. In the same way, I would not consider it rational to like singing, even though the acquired skills of breathing and voice manipulation might help you convey a higher status or help with public speaking. To decide to take singing lessons, if you want to become a public speaker, might be rational. But simply enjoying singing shouldn't be considered so, even if it does help with your public speaking, because no rational thought is involved.
Replies from: TimS, thomblake, TheOtherDave↑ comment by TimS · 2012-01-04T16:53:32.405Z · LW(p) · GW(p)
Ha, you caught me using loose language.
At a certain level, instrumental rationality is a method of making better choices, so applying it where there doesn't appear to be a choice is not very coherent. Instrumental rationality doesn't have anything to say about whether you should like singing. But if want skill at singing, instrumental rationality suggests music lessons.
As an empirical matter, I suggest there are lots of people who would like to be able to sing better who do not take music lessons for various reasons. We can divide those reasons into two patterns: (1) "I want something else more than singing skill and I lack the time/money/etc to do both," or (2) "Nothing material prevents me from taking singing lessons, but I do not because of anxiety/embarrassment/social norms."
Again, I assert that a substantial number of people decide not to take singing lessons based solely on type 2 reasons. This community thinks that this pattern of behavior is sub-optimal and would like to figure out how to change it.
Replies from: Kouran↑ comment by Kouran · 2012-01-04T17:22:47.871Z · LW(p) · GW(p)
Here I agree almost fully! My problem is that people aren't fully rational beings. That some people might want to take lessons on some level but don't cannot be attributed only to their thoughts; it is also due to their emotional environment. A person's thoughts need to be mobilised into action for something to take place. Sometimes this is a matter of a person needing more basic confidence; sometimes a person needs their thoughts mirrored back at them and confirmed, as in speaking with a friend who'll encourage them. Thinking alone isn't enough.
I admire the community's mission to try and change people. But by the same line of argument I use above I think focusing only on how people think and how they might think better is not going to be enough. I think rationality should also be viewed as a social construct.
Replies from: Vladimir_Nesov, TimS↑ comment by Vladimir_Nesov · 2012-01-04T17:40:38.167Z · LW(p) · GW(p)
I admire the community's mission to try and change people. But by the same line of argument I use above I think focusing only on how people think and how they might think better is not going to be enough.
One level up, consider who does the focusing, and how. The goal may be to build a bridge, tune an emotion, or correct the thinking in your own mind. One way of attaining that goal is through figuring out what interventions lead to what consequences, and finding a plan that wins.
↑ comment by TimS · 2012-01-04T17:35:57.819Z · LW(p) · GW(p)
people aren't fully rational beings.
That's what we've been saying. Not all of a person's thoughts are rational. And I certainly don't assert someone can easily think themselves out of being depressed or anxious.
I think rationality should also be viewed as a social construct.
I think that the goals people set are socially constructed. Thus, the ends rationality seeks to achieve are socially constructed. Once that is established, what further insight is contained in the assertion that rationality itself is socially constructed?
To put it slightly differently, I don't think mathematics is socially constructed, but it's pretty obvious to me that what we choose to add together is socially constructed.
↑ comment by Kouran · 2012-01-04T17:43:19.882Z · LW(p) · GW(p)
That's what we've been saying. Not all of a person's thoughts are rational. And I certainly don't assert someone can easily think themselves out of being depressed or anxious.
My point there wasn't that people's thoughts aren't all rational, though I agree with that. My point was that not all human actions are tied to thoughts or intentions. There are habits, twitches, there is emotional momentum driving people to do things they'd never dream of and may regret for the rest of their lives. People often don't think in the first place.
Once that is established, what further insight is contained in the assertion that rationality itself is socially constructed?
I think that, when one's goal is to improve and spread rationality, an elementary question should be: when, and under which circumstances, does a person think? How does a social situation affect your thinking? So instead of just asking "how do we think, and how do we improve that?", it could also be useful to ask "when do we think, and how do we improve that?"
At some point in the future we could then inform people of the kind of social environment they might build to help them better formulate and achieve goals. Just as people with anger problems are taught to 'stop! And count to ten', other people might be taught to think at certain recognisable critical moments they currently tend to walk past without realising.
↑ comment by thomblake · 2012-01-04T16:29:22.121Z · LW(p) · GW(p)
Yes, at this point we're just disputing definitions. But I think we're in agreement on all the relevant empirical facts; if you were able to choose your parents, then it would be rational to choose good ones. Also, one is not usually able to choose one's parents.
Replies from: Kouran↑ comment by Kouran · 2012-01-04T16:44:47.492Z · LW(p) · GW(p)
Thanks for your quick replies. Yes, we are agreed on those two points. I'm going to try something that may come off as a little crude, but here goes:
Point 1: If every act or process that helps me is to be called rational, then having a dietician for a parent is rational. Point 2: The term rational implies involvement of the 'ratio', of thinking. Point 3: No rational thinking, or any thinking at all, is involved in acquiring one's parents. Even adoptive parents tend to acquire their child, not the other way around. Conclusion: something is wrong with saying that everything that leads to the attainment of a goal is rational.
Perhaps another term should be used for things that help achieve goals but that do not involve thinking, let alone rational or logically sound thinking. This is important because thought is often overstated in the prevalence with which it occurs, and also in the causal weight that is attached to it. Thought is not omnipresent, and thought is often of minor importance in accurately explaining a social phenomenon.
Replies from: Vladimir_Nesov, thomblake↑ comment by Vladimir_Nesov · 2012-01-04T16:51:42.176Z · LW(p) · GW(p)
"Rationality/irrationality" in the sense used on LW is a property of someone's decisions or actions (including the way one forms beliefs). The concept doesn't apply to the helpful/unhelpful things not of that person's devising.
↑ comment by thomblake · 2012-01-04T16:51:25.917Z · LW(p) · GW(p)
I'd prefer to reject point 2. Arguments from etymology are not particularly strong. We're using the term in a way that has been standard here since the site's inception, and that is in accordance with the standard usage in economics, game theory, and artificial intelligence.
Replies from: Kouran↑ comment by Kouran · 2012-01-04T17:26:51.831Z · LW(p) · GW(p)
You may be right in that the argument comes more from a concern with how a broader public relates to the term 'rational' than how it is used in the mentioned disciplines.
On the other hand, I feel that the broader public is relevant here. LessWrong isn't that small a community, and I suspect people have quite some emotional attachment to this place, as they use it as a guide to alter their thinking. By calling all things that are useful in this way 'rational', I think you'd be confusing the term. It could lead to rationality turning into a generic substitute for 'good' or 'decent'. To me, that seems harmful to an agenda of improving people's rational thinking.
↑ comment by TheOtherDave · 2012-01-04T16:26:57.625Z · LW(p) · GW(p)
If I have a choice of whether to enjoy singing or not, and I've chosen to take singing lessons, I ought to choose to enjoy singing.
↑ comment by thomblake · 2012-01-04T14:58:50.369Z · LW(p) · GW(p)
See What Do We Mean By "Rationality".
Summary: "Epistemic rationality" is having beliefs that correspond to reality. "Instrumental rationality" is being able to actualize your values, or achieve your goals.
Irrationality, then, is having beliefs that do not correspond to reality, or being unable to achieve your goals. And to the extent that humans are hard-wired to be likely irrational, that certainly is a bug that should be fixed.
Replies from: Kouran↑ comment by Kouran · 2012-01-04T15:41:07.874Z · LW(p) · GW(p)
By that definition you might say that, but that still leaves the problem I tend to address: that rationality (and, by the supplied definition, also irrationality) is ascribed to people and actions where thinking quite likely did not take place, or was not the deciding factor in what action came about in the end. It falsely divides human experience into 'rational' and 'erroneously rational/irrational'. Thinking is not all that goes on among humans.
Replies from: thomblake↑ comment by lessdazed · 2011-12-28T00:52:13.613Z · LW(p) · GW(p)
Often, I find arguments where people are assumed to be rational
Often the desirable outcome of an action or 'strategy' is taken to have been the goal that the actor deliberately attempted to attain.
Diamond in a box:
Suppose you're faced with a choice between two boxes, A and B. One and only one of the boxes contains a diamond. You guess that the box which contains the diamond is box A. It turns out that the diamond is in box B. Your decision will be to take box A. I now apply the term volition to describe the sense in which you may be said to want box B, even though your guess leads you to pick box A.
Let's say that Fred wants a diamond, and Fred asks me to give him box A. I know that Fred wants a diamond, and I know that the diamond is in box B, and I want to be helpful. I could advise Fred to ask for box B instead; open up the boxes and let Fred look inside; hand box B to Fred; destroy box A with a flamethrower; quietly take the diamond out of box B and put it into box A; or let Fred make his own mistakes, to teach Fred care in choosing future boxes.
But I do not simply say: "Well, Fred chose box A, and he got box A, so I fail to see why there is a problem." There are several ways of stating my perceived problem:
Fred was disappointed on opening box A, and would have been happier on opening box B.
It is possible to predict that if Fred chooses box A, Fred will look back and wish he had chosen box B instead; while if Fred chooses box B, Fred will be satisfied with his choice.
Fred wanted "the box containing the diamond", not "box A", and chose box A only because he guessed that box A contained the diamond.
If Fred had known the correct answer to the question of simple fact, "Which box contains the diamond?", Fred would have chosen box B.
Hence my intuitive sense that giving Fred box A, as he literally requested, is not actually helping Fred.
If you find a genie bottle that gives you three wishes, it's probably a good idea to seal the genie bottle in a locked safety box under your bed, unless the genie pays attention to your volition, not just your decision.
--CEV
it appears to me that human beings are emotional, situationally defined social animals, much more than they are rational actors
You imply that there is a standard of rationality people are deviating from. Yes?
Replies from: Kouran↑ comment by Kouran · 2012-01-04T15:24:29.097Z · LW(p) · GW(p)
Lessdazed,
Thanks for your reply! I'm not quite sure how useful that second quote you sent is. But if I ever do find a genie, I'll be sure to ask it whether it pays attention to my volition, or even to make it my first wish that the genie pays attention to my volition when fulfilling my other wishes ;)
My point in the section you quoted at the end of your post was not that there is a standard of rationality that people are deviating from. Closer to my views is that a standard of rationality is created, which deviates from people.
↑ comment by fburnaby · 2011-12-28T00:31:44.432Z · LW(p) · GW(p)
Hi Kouran, and welcome.
Your critique of "rationalism" as you currently understand it is, I think, valid. The goal of LessWrong, as I understand it (though I'm no authority, I just read here sometimes), is to help people become more rational themselves. As thomblake has already pointed out, we tend to believe with you in the general irrationality of humans. We also believe that this is a sort of problem to be fixed.
However, I also think you're being unfair to people who use the Rationality Assumption in economics, biology or elsewhere. You say that:
Often the desirable outcome of an action or 'strategy' is taken to have been the goal that the actor deliberately attempted to attain.
That's not an assumption that the theory requires. The Rationality Assumption only requires us to interpret the actions of an agent in terms of how well it appears to help it fulfill its goals. It needn't be conscious of such "goals". This type of goal is usually referred to as a revealed preference. Robin Hanson at Overcoming Bias, a blog that's quite related to LessWrong, also loves pointing out and discussing the particular problem that you've raised. He usually refers to it as the "Homo hypocritus hypothesis". You might enjoy reading some related entries on his blog. The gist of the distinction I'm trying to point to is actually pretty well-summarized by Joe Biden:
My dad used to have an expression: "Don't tell me what you value. Show me your budget, and I'll tell you what you value."
It's my own humble opinion that economists occasionally make the naive jump from talking about revealed preferences to talking about "actual" preferences (whatever those may be). In such cases, I agree with you that a disposition toward "rationalism" could be dangerous. But again, that's not the accepted meaning of the word here. I also think it might be just as naive to take peoples' stated preferences (whether stated to themselves or others) to be their "actual" preferences.
There have been attempts on LW to model the apparent conflict between the stated preferences and revealed preferences of agents, my favourite of which was "Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model". If I were to taboo the word "rationality" in explaining the goal of this site, I'd say the goal is to help people bring these two mental sub-agents into agreement; to avoid being a Homo hypocritus; to help one better synchronize their stated preferences with their revealed preferences.
Clearly, the meanings of the word "rationality" that you have, and that this community has, are related. But they're not the same. My goal in linking to the several articles in the above text, is to help you understand what is meant by that word here. Good luck and I hope you find the discourse you're looking for!
Replies from: Kouran↑ comment by Kouran · 2012-01-04T15:11:17.196Z · LW(p) · GW(p)
Fburnaby, thank you for the long reply.
I'm replying to you now before reading your suggestions; I've not had the time so far. They're on my list, but for now I'd like to address your reply either way.
The Joe Biden quote is very effective, and I agree with the general sentiment, but not with how that relates to questions of rationality. I tend to use rationality to mean any thinking at all. Illogical thinking may be bad rationality, but it is still rationality. My objection to assuming rationality isn't that you shouldn't look at how these or those actions may have some sort of function. My criticism is that when you do observe that a certain function is served, you shouldn't impose rationality upon the people involved. In my experience, as a bachelor of sociology and as a human being with a habit of self-reflection, people don't act upon their thoughts, but much more upon their knowledge of how to act in certain situations, on their social 'programming' and emotions, on their various loyalties.
We tend to define mankind as a being capable of thinking. I think we are wrong in this in the same way we would be wrong to define a scorpion as a being capable of making a venomous sting. The statement isn't false, but most of the time the scorpion isn't stinging anything. It's just walking, sitting, eating, grabbing something with its claws. The stinging isn't everything that's going on; it's not nearly even most of what's going on.
Thanks again for the reply. I'll be looking around, and I'll try to add something where I think it is fruitful.
-Kouran
Replies from: fburnaby↑ comment by fburnaby · 2012-01-05T22:46:43.903Z · LW(p) · GW(p)
Hey Kouran,
I'm having trouble figuring out whether we agree or disagree. So, you tell me this:
My criticism is that, that when you do observe that a certain function is served, you shouldn't impose rationality upon the people involved.
and I agree that's an excellent assumption for the goal of doing good sociology (and several other explanatory pursuits). I think (hope!) it will become clearer to you as your read the things I linked you to that this attitude is both (1) a very good one to take in many many instances, and (2) not in conflict with the goal of becoming more rational.
I snuck a key word by in that last sentence: assumption. When thinking about humans and societies, it's become a very common and useful assumption to say that they don't deliberate or make rational decisions; they're products of their environments and they interact with those environments. At LessWrong, we usually call this the "outside view" because we're viewing ourselves or others as though from the outside.
Note that while this is a good way to look at the world, we also have real, first-hand experiences. I don't live my personal life as a bucket of atoms careening into other atoms, nor as an organism interacting with its environment; I live my day-to-day life as a person making decisions. These are three different non-wrong ways of conceptualizing myself. The last one, where I'm a person making decisions, is where the use of this notion of rationality that we're interested in comes along and we sometimes call this the "inside view". At those other levels of explanation, the concept of rationality truly doesn't make sense.
I also can't resist adding that you point out very rightly that most people don't act on their thoughts and pursue their goals, opting instead to execute their social-biological programming. Many people here are genuinely interested in getting these two realms (goals and actions) to synch up and are doing some amazing theorizing as to how they can accomplish this goal.
comment by thomblake · 2011-12-27T16:47:51.444Z · LW(p) · GW(p)
It's been a while, so I just wanted to express approval of these welcome threads. A glance over the comments we've gotten over the years should reveal that they really do make people feel welcome and help people get into discussion on the site.
comment by madison · 2011-12-27T12:59:26.661Z · LW(p) · GW(p)
Hi everyone. 23 year old south american software developer/musician here. I've been lurking around and reading for a couple of months now and I've found a lot of useful and interesting information here. It has actually triggered in me a lot of thinking about thinking, about reflexivity and the need for being aware of one's methods of thinking/learning/communicating etc.
I've been having some thoughts lately on the positive aspects of "rationality-motivated" socialization, mainly because of what I've learned of LW's weekly meetups, and also because it has so far been pretty difficult to find someone who's interested in rationality. The first Google searches took me nowhere, though I have still to look around the philosophy/mathematics departments of local universities.
Anyway thanks for the information and the friendly welcome, and also for the big corpus of material you make available.
comment by xumx · 2011-12-27T12:24:33.944Z · LW(p) · GW(p)
I'm 22, Male, an undergraduate at Singapore Management University studying information systems. Interest in AI.
I want to live a "good" life, but different people/cultures use different value systems to view life... some focus on the 'Ending', some focus on the 'Journey', some see no value at all... Therefore, I'm looking for a way to objectively measure the value of a person's life. (not sure if that is even possible)
Found LW while reading up on Singularity. Would love to make some LW friends. feel free to add me on facebook~ http://fb.me/mengxiang
Replies from: cousin_it, agravier↑ comment by cousin_it · 2011-12-27T13:25:09.290Z · LW(p) · GW(p)
some focus on the 'Ending', some focus on the 'Journey', some sees no value at all... Therefore, I'm looking for a way to objectively measure the value of a person's life. (not sure if that is even possible)
Try watching Daniel Kahneman's TED talk The riddle of experience vs memory, it's nice and seems relevant to your question.
Replies from: Dojan↑ comment by Dojan · 2011-12-27T17:23:08.333Z · LW(p) · GW(p)
Sam Harris also has a really good TED talk on "the Science of Morality".
comment by wwa · 2012-11-23T01:45:05.433Z · LW(p) · GW(p)
Hi!
Long time lurker here.
I'm 26 years old, a CS graduate living in Wrocław (Poland), professional compiler developer, cryptography research assistant and programmer. I'm an atheist (quite possibly thanks to LW). I consider the world to be overall interesting. I have many interests and I always have more things to do than I have time for. I'm motivated by curiosity. I'm less risk-averse than most people around me, but also less patient. I have a creative mind and love challenges. While I've been a fairly successful lone wolf until now, I seek to improve my people skills because I believe I can't get much further all by myself.
When I found LW for the first time, it absorbed me. It took me about 4 months at 4-6h a day to read all of the Sequences and comments. While I strongly disagree with some of the material, I consider LW to have accelerated my personal development 2 to 3 times simply by virtue of critical mass and high signal to noise ratio. I don't know any better hub for thought (links welcome!). I joined because I finally have something to say.
W.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-11-23T02:09:45.738Z · LW(p) · GW(p)
Welcome!
I'm an atheist (quite possibly thanks to LW).
If you're interested in making a post, I bet lots of us would be interested in hearing that story.
I have many interests and I always have more things to do than I have time for.
Join the club! It sounds like you've chosen a good career for someone who likes challenges, too.
It took me about 4 months at 4-6h a day to read all of the Sequences and comments. While I strongly disagree with some of the material, I consider LW to have accelerated my personal development 2 to 3 times simply by virtue of critical mass and high signal to noise ratio.
Agreed–same for me. If anything, the Sequences that I've disagreed with were better for me, in terms of making me think...even if I still disagreed after thinking about it, they were mostly things I had never thought about to that degree of depth before.
comment by BrianLloyd · 2012-08-15T20:34:41.971Z · LW(p) · GW(p)
Hello; my name is Brian. It is with some trepidation that I post here because I am not entirely sure how or where I can contribute. On the other hand, if I knew how I could contribute then I probably wouldn't need to post here.
I seem to be a bit older than most people whose introductions I have read here. I am 58. I have spent most of my life as a software engineer, electrical engineer, technical writer, businessman, teacher, sailor, and pilot. (When I was young Robert A. Heinlein advised against specialization, an admonition I took to heart.)
My most recent endeavor was a 5-year stint in a private school as a teacher of science, math, history, government, engineering, and computer science/programming. The act of trying to teach these subjects in a manner that provides the necessary cross-connection caused me to discover that I needed to try to understand more about how I think and learn, as my ultimate goal was to help my students determine for themselves how they think and learn. Being able to absorb and regurgitate facts and algorithms is not enough. Real learning requires the ability to discover new understanding as well. (I am rather a fan of scientific method, as inefficient as it may be. Repeating an experiment is never bad if it helps you to cement understanding for yourself. Besides, you might discover the error that invalidates the experiment.)
So, now I have become interested in rational thought. I want to be able to cut to the meat of the issue and leave the irrational and emotional behind. I want to be better able to solve problems. Like Lara, I have also recently given up the search for religious enlightenment. It took time looking at my own assumptions to finally come to the conclusion that there is apparently no rational basis for religion ... as we know it. (I guess that makes me an atheistic agnostic?)
So, it is clearly a time for a change. I look forward to learning from you.
(English really does need a clear plural for the pronoun 'you'.)
Brian
Replies from: OrphanWilde, army1987↑ comment by OrphanWilde · 2012-08-15T20:38:02.858Z · LW(p) · GW(p)
Y'all!
There's an added bonus in that it annoys linguistic purists.
Replies from: BrianLloyd↑ comment by BrianLloyd · 2012-08-15T21:17:38.177Z · LW(p) · GW(p)
Until Y'all degenerates into the singular and then you need a plural for the plural, i.e. "all y'all." Don't believe me? Go to Texas. ;-)
↑ comment by A1987dM (army1987) · 2012-08-15T22:22:17.580Z · LW(p) · GW(p)
(English really does need a clear plural for the pronoun 'you'.)
You guys. (Unlike the singular, ISTM that the plural guys doesn't always imply ‘males’.)
comment by Erdrick · 2012-07-26T03:46:34.859Z · LW(p) · GW(p)
Greetings fellow Ration-istas!
First of all, I'd like to mention how glad I am that this site and community exist. For many years I wondered if there were others like me, who cared about improving themselves and their capacity for reason. And now I know - now I just need to figure out how to drag you all down to sunny San Diego to join me...
My name is Brett, and I'm a 28 year old Computational Biologist in San Diego, California. I've thought of myself as a materialist and an atheist since my freshman year in college, but it wasn't until after I graduated that I truly began to care about rationality. I realized that though I was unhappy with my life, as a scientist I had access to the best tools around for turning that around - science and reason.
I was born with a de novo genomic translocation on my 1st chromosome that left me with a whole raft of medical problems through-out my childhood - funnel chest, cleft palate, mis-fused skull, you name it. As a result I was picked on and isolated for most of my childhood, and generally responded to stress by retreating into video games and SF novels. So I went to school to study genetics and biology, and I graduated from college with a love of science - but also mediocre grades, a crippling EverQuest/World of Warcraft addiction, and few friends.
I suffered alone through a few months of a job that I hated before realizing I could use reason to improve my lot. And life has been one long, slow improvement after another ever since. Now I've got friends, a Master's in an awesome science, and a job that I enjoy... the only thing I was lacking was a community in which to discuss further improvements to myself and my capacity for reason.
Then one of my most rationally minded friends pointed me towards Less Wrong and the Methods of Rationality in May, and here I am.
/b/
P.S. Barring a mass exodus to SD, I've also been considering moving to SF/SJ to be closer to friends and the LW meetups, assuming I could find work there. Does anyone know of any openings for a Bioinformaticist or Computational Biologist in the Bay by chance?
Replies from: candyfromastranger↑ comment by candyfromastranger · 2012-07-28T02:01:18.207Z · LW(p) · GW(p)
A lot of people that I know seem to think that logic and reason are mostly just important in science, but they can improve so much in everyday life.
comment by WingedViper · 2012-07-01T09:15:28.702Z · LW(p) · GW(p)
Hi,
I'm a German student-to-be (I am going to start studying IT in October) and I am interested in almost anything connected with rationality, especially the self improvement, biases and "how to save the world" parts. I hope that lesswrong will be (and it already has been to a certain amount) one of the resources for (re-)shaping my thinking and acting towards a better me and a better world.
I came here, like so many others ;-), because I wanted to check out the foundations/concepts behind HPMOR and I could not just leave again. So over the last few months I visited again and again to read some of the sequences and posts.
As I am interested in science, especially physics, maths, technology and astronomy, I have a question that I would like to ask the lesswrong community: What is a fast and secure way of determining the trustworthiness of scientists and scientific papers? I ask this because there is a lot of pseudoscience and poorly done science out there which often isn't easy to distinguish from unconventional/disrupting science (at least not for me).
all the best Viper
comment by LordSnow · 2012-05-09T17:24:03.062Z · LW(p) · GW(p)
Hi everyone! I am still a high school student but very interested in what I read here on LessWrong! I decided to register to contribute to discussions. Until now, I have been lurking but hopefully I will be able to join the conversation in a useful way.
Replies from: John_Maxwell_IV, Randaly↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-09T18:05:00.042Z · LW(p) · GW(p)
Replies from: LordSnow
I am still a high school student
↑ comment by LordSnow · 2012-05-09T22:56:24.533Z · LW(p) · GW(p)
I find your jumping to conclusions somewhat offensive. In fact, I don't feel socially disadvantaged for my interests.
Replies from: John_Maxwell_IV, CuSithBell↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-10T02:02:28.459Z · LW(p) · GW(p)
No! I refuse to believe that high school could be anything but a terrible prison!
runs away screaming
↑ comment by CuSithBell · 2012-05-09T23:00:47.946Z · LW(p) · GW(p)
Excellent! I also find this picture of high school sorta baffling.
↑ comment by Randaly · 2012-05-10T04:00:40.658Z · LW(p) · GW(p)
Hiya LordSnow! If you want to get to know some of the other LW highschoolers, we have an (inactive) Google Group, and a Facebook Group.
comment by olalonde · 2012-04-24T22:54:20.737Z · LW(p) · GW(p)
Hi all! I have been lurking LW for a few months (years?). I believe I was first introduced to LW through some posts on Hacker News (http://news.ycombinator.com/user?id=olalonde). I've always considered myself pretty good at rationality (is there a difference with being a rationalist?) and I've always been an atheist/reductionist. I recently (4 years ago?) converted to libertarianism (blame Milton Friedman). I was raised by 2 atheist doctors (as in PhD). I'm a software engineer and I'm mostly interested in the technical aspect of achieving AGI. Since I was a kid, I've always dreamed of seeing an AGI within my lifetime. I'd be curious to know if there are some people here working on actually building an AGI. I was born in Canada, have lived in Switzerland and am now living in China. I'm 23 years old IIRC. I believe I'm quite far from the stereotypical LWer on the personality side but I guess diversity doesn't hurt.
Nice to meet you all!
Replies from: olalonde, Bugmaster↑ comment by olalonde · 2012-04-24T23:14:44.376Z · LW(p) · GW(p)
Before I get more involved here, could someone explain to me what the following are:
1) x-rationality (extreme rationality)
2) a rationalist
3) a bayesian rationalist
(I know what rationalism and Bayes theorem are but I'm not sure what the terms above refer to in the context of LW)
Replies from: Nornagest, Bugmaster↑ comment by Nornagest · 2012-04-24T23:37:38.303Z · LW(p) · GW(p)
In the context of LW, all those terms are pretty closely related unless some more specific context makes it clear that they're not. X-rationality is a term coined to distinguish the LW methodology (which is too complicated to describe in a paragraph, but the tagline on the front page does a decent job) from rationality in the colloquial sense, which is a much fuzzier set of concepts; when someone talks about "rationality" here, though, they usually mean the former and not the latter. This is the post where the term originates, I believe.
A "rationalist" as commonly used in LW is one who pursues (and ideally attempts to improve on) some approximation of LW methodology. "Aspiring rationalist" seems to be the preferred term among some segments of the userbase, but it hasn't achieved fixation yet. Personally, I try to avoid both.
A "Bayesian rationalist" is simply a LW-style rationalist as defined above, but the qualification usually indicates that some contrast is intended. A contrast with rationalism in the philosophical sense is probably the most likely; that's quite different from, and in some ways mutually exclusive with, LW epistemology, which is generally closer to philosophical empiricism.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-24T23:41:31.026Z · LW(p) · GW(p)
As far as I understand, a "Bayesian Rationalist" is someone who bases their beliefs (and thus decisions) on Bayesian probability, as opposed to ye olde frequentist probability. An X-rationalist is someone who embraces both epistemic and instrumental rationality (the Bayesian kind) in order to optimize every aspect of his life.
Replies from: olalonde↑ comment by olalonde · 2012-04-25T00:23:50.202Z · LW(p) · GW(p)
You mean explicitly base their everyday beliefs and decisions on Bayesian probability? That strikes me as highly impractical... Could you give some specific examples?
Replies from: Nornagest, Bugmaster↑ comment by Nornagest · 2012-04-25T01:45:20.954Z · LW(p) · GW(p)
As best I can tell it is impractical as an actual decision-making procedure for more complex cases, at least assuming well-formalized priors. As a limit to be asymptotically approached it seems sound, though -- and that's probably the best we can do on our hardware anyway.
↑ comment by Bugmaster · 2012-04-25T00:35:05.390Z · LW(p) · GW(p)
I thought I could, but Yvain kind of took the wind out of my sails with his post that Nornagest linked to, above. That said, Eliezer does outline his vision of using Bayesian rationality in daily life here, and in that whole sequence of posts in general.
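For what it's worth, the simplest concrete instance of the kind of everyday update being discussed is just Bayes' theorem applied to a base rate. This is a minimal sketch (the numbers are hypothetical, not anyone's actual figures), showing why a positive result on a fairly accurate test for a rare condition should still leave your credence low:

```python
# Minimal Bayes-rule update: P(hypothesis | evidence) from a prior,
# a true-positive rate, and a false-positive rate.

def bayes_update(prior, true_positive_rate, false_positive_rate):
    """Return the posterior probability of the hypothesis given positive evidence."""
    p_evidence = (true_positive_rate * prior
                  + false_positive_rate * (1 - prior))
    return true_positive_rate * prior / p_evidence

prior = 0.001             # hypothetical base rate: 1 in 1000
sensitivity = 0.99        # P(test positive | condition)
false_positive = 0.05     # P(test positive | no condition)

posterior = bayes_update(prior, sensitivity, false_positive)
print(round(posterior, 4))  # prints 0.0194 -- still under 2%
```

The point of the exercise is the one Yvain's post gestures at: even when the arithmetic is trivial, remembering that the base rate dominates is the practically useful habit, not running the calculation explicitly.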
↑ comment by Bugmaster · 2012-04-24T23:36:07.387Z · LW(p) · GW(p)
Most people here would probably tell you to immediately stop your work on AGI, until you can be reasonably sure that your AGI, once you build and activate it, would be safe. As far as I understand, the mission of SIAI (the people who host this site) is to prevent the rise of un-Friendly AGI, not to actually build one.
I could be wrong though, and I may be inadvertently caricaturing their position, so take my words with a grain of salt.
Replies from: wedrifid, olalonde↑ comment by wedrifid · 2012-04-25T00:44:13.903Z · LW(p) · GW(p)
As far as I understand, the mission of SIAI (the people who host this site) is to prevent the rise of un-Friendly AGI, not to actually build one.
I think they are kind of keen on the idea of not dying too. Improving the chances that a Friendly AI will be created by someone is probably up there as a goal too.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-25T00:52:25.971Z · LW(p) · GW(p)
I think they are kind of keen on the idea of not dying too.
Imagine that ! :-)
Improving the chances that a Friendly AI will be created by someone is probably up there as a goal too.
That's a different goal, though. As far as I understand, olalonde's master plan looks something like this:
1). Figure out how to build AGI.
2). Build a reasonably smart one as a proof of concept.
3). Figure out where to go from there, and how to make AGI safe.
4). Eventually, build a transhuman AGI once we know it's safe.
Whereas the SIAI master plan looks something like this:
1). Make sure that an un-Friendly AGI does not get built.
2). Figure out how to build a Friendly AGI.
3). Build one.
4). Now that we know it's safe, build a transhuman AGI (or simply wait long enough, since the AGI from step (3) will boost itself to transhuman levels).
One key difference between olalonde's plan and SIAI's plan is the assumption SIAI is making: they are assuming that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels. Thus, from their perspective, olalonde's step (2) above might as well say, "build a machine that's guaranteed to eat us all", which would clearly be a bad thing.
Replies from: wedrifid, TheOtherDave↑ comment by wedrifid · 2012-04-25T01:25:19.655Z · LW(p) · GW(p)
One key difference between olalonde's plan and SIAI's plan is the assumption SIAI is making: they are assuming that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels. Thus, from their perspective, olalonde's step (2) above might as well say, "build a machine that's guaranteed to eat us all", which would clearly be a bad thing.
A good summary. I'd modify it slightly, inasmuch as they would allow the possibility that a really weak AGI may not do much in the way of FOOMing, but they pretty much ignore those ones and expect they would just be a stepping stone for the developers, who would go on to make better ones. (This is just my reasoning, but I assume they would think similarly.)
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-25T01:32:17.388Z · LW(p) · GW(p)
Good point. Though I guess we could still say that the weak AI is recursively self-improving in this scenario -- it's just using the developers' brains as its platform, as opposed to digital hardware. I don't know whether the SIAI folks would endorse this view, though.
Replies from: wedrifid↑ comment by wedrifid · 2012-04-25T01:52:51.512Z · LW(p) · GW(p)
Good point. Though I guess we could still say that the weak AI is recursively self-improving in this scenario -- it's just using the developers' brains as its platform, as opposed to digital hardware.
Can't we limit the meaning of "self-improving" to at least stuff that the AI actually does? We can already say more precisely that the AI is being iteratively improved by the creators. We don't have to go around removing the distinction between what an agent does and what the creator of the agent happens to do to it.
Replies from: Bugmaster↑ comment by TheOtherDave · 2012-04-25T01:09:34.241Z · LW(p) · GW(p)
[SIAI] are assuming that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels.
Can you clarify your reasons for believing this, as distinct from "...any AGI has a non-negligible chance of self-improving itself to transhuman levels, and the cost of that happening is so vast that it's worth devoting effort to avoid even if the chance is relatively low"?
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-25T01:18:08.723Z · LW(p) · GW(p)
That's a good point, but, from reading what Eliezer and Luke are writing, I formed the impression that my interpretation is correct. In addition, the SIAI FAQ seems to be saying that intelligence explosion is a natural consequence of Moore's Law; thus, if Moore's Law continues to hold, intelligence explosion is inevitable.
FWIW, I personally disagree with both statements, but that's probably a separate topic.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-25T03:31:38.936Z · LW(p) · GW(p)
Huh. The FAQ you cite doesn't seem to be positing inevitability to me. (shrug)
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-25T20:31:00.135Z · LW(p) · GW(p)
You're right, I just re-read it and it doesn't mention Moore's Law; either it did at some point and then changed, or I saw that argument somewhere else. Still, the FAQ does seem to suggest that the only thing that can stop the Singularity is total human extinction (well, that, or the existence of souls, which IMO we can safely discount); that's pretty close to inevitability as far as I'm concerned.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-25T21:01:42.382Z · LW(p) · GW(p)
Note that the section you're quoting is no longer talking about the inevitable ascension of any given AGI, but rather the inevitability of some AGI ascending.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-26T20:36:34.746Z · LW(p) · GW(p)
I thought they were talking specifically about an AGI that is capable of recursive self-improvement. This does not encompass all possible AGIs, but the non-self-improving ones are not likely to be very smart, as far as I understand, and thus aren't a concern.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-26T21:10:27.587Z · LW(p) · GW(p)
OK, now I am confused.
This whole thread started because you said:
[SIAI] are assuming that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels.
and I asked why you believed that, as distinct from "...any AGI has a non-negligible chance of self-improving itself to transhuman levels, and the cost of that happening is so vast that it's worth devoting effort to avoid even if the chance is relatively low"?
Now you seem to be saying that SI doesn't believe that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels, but it is primarily concerned with those who do.
I agree with that entirely; it was my point in the first place.
Were we in agreement all along, have you changed your mind in the course of this exchange, or am I really really confused about what's going on?
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-26T21:22:32.439Z · LW(p) · GW(p)
Sorry, I think I am guilty of misusing terminology. I have been using AI and AGI interchangeably, but that's obviously not right. As far as I understand, "AGI" refers to a general intelligence that can solve (or at least attempt to solve) any problem, whereas "AI" refers to any kind of artificial intelligence, including the specialized kind. There are many AIs that already exist in the world -- for example, Google's AdSense algorithm -- but SIAI is not concerned about them (as far as I know), because they lack the capacity to self-improve.
My own hidden assumption, which I should've recognized and voiced earlier, is that an AGI (as contrasted with non-general AI) would most likely be produced through a process of recursive self-improvement; it is highly unlikely that an AGI could be created from scratch by humans writing lines of code. As far as I understand, the SIAI agrees with this statement, but again, I could be wrong.
Thus, it is unlikely that a non-general AI will ever be smart enough to warrant concern. It could still do some damage, of course, but then, so could a busted water main. On the other hand, an AGI will most likely arise as the result of recursive self-improvement, and thus will be capable of further self-improvement, thus boosting itself to transhuman levels very quickly unless its self-improvement is arrested by some mechanism.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-26T21:57:01.388Z · LW(p) · GW(p)
OK, I think I understand better now.
Yeah, I've been talking throughout about what you're labeling "AI" here. We agree that these won't necessarily self-improve. Awesome.
With respect to what you're labeling "AGI" here, you're saying the following:
1) given that X is an AGI developed by humans, the probability that X has thus far been capable of recursive self-improvement is very high, and
2) given that X has thus far been capable of recursive self-improvement, the probability that X will continue to be capable of recursive self-improvement in the future is very high.
3) SIAI believes 1) and 2).
Yes? Have I understood you?
Replies from: Bugmaster↑ comment by olalonde · 2012-04-25T00:12:41.557Z · LW(p) · GW(p)
I understand your concern, but at this point we're not even near monkey-level intelligence, so when I get to five-year-old human-level intelligence I think it'll be legitimate to start worrying. I don't think greater-than-human AI will happen all of a sudden.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-25T00:29:35.489Z · LW(p) · GW(p)
The SIAI folks would say that your reasoning is exactly the kind of reasoning that leads to all of us being converted into computronium one day. More specifically, they would claim that, if you program an AI to improve itself recursively -- i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter -- then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet. It would go from "monkey" to "quasi-godlike" very quickly, potentially so quickly that you won't even notice it happening.
FWIW, I personally am not convinced that this scenario is even possible, and I think that SIAI's worries are way overblown, but that's just my personal opinion.
Replies from: wedrifid, olalonde↑ comment by wedrifid · 2012-04-25T00:37:09.391Z · LW(p) · GW(p)
i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter -- then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet.
Recursively, not necessarily exponentially. It may exploit the low hanging fruit early and improve somewhat slower once those are gone. Same conclusion applies - the threat is that it improves rapidly, not that it improves exponentially.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-25T00:42:48.362Z · LW(p) · GW(p)
Good point, though if the AI's intelligence grew linearly or as O(log T) or something, I doubt that it would be able to achieve the kind of speed that we'd need to worry about. But you're right, the speed is what ultimately matters, not the growth curve as such.
↑ comment by olalonde · 2012-04-25T00:41:07.089Z · LW(p) · GW(p)
Human-level intelligence is unable to improve itself at the moment (it's not even able to recreate itself, if we exclude reproduction). I don't think monkey-level intelligence will be more able to do so. I agree that the SIAI scenario is way overblown, at least until we have created an intelligence vastly superior to the human one.
Replies from: Vulture↑ comment by Vulture · 2012-04-25T02:22:54.114Z · LW(p) · GW(p)
Uh... I think the fact that humans aren't cognitively self-modifying (yet!) doesn't have to do with our intelligence level so much as with the fact that we were not designed explicitly to be self-modifying, as SIAI is assuming any AGI would be. I don't really know enough about AI to know whether or not this is strictly necessary for a decent AGI, but I get the impression that most (or all) serious would-be AGI builders are aiming for self-modification.
Replies from: olalonde, adamisom↑ comment by adamisom · 2012-04-25T03:13:22.761Z · LW(p) · GW(p)
This is a really stupid question, but I don't grok the distinction between 'learning' and 'self-modification' - do you get it?
Replies from: Vulture↑ comment by Vulture · 2012-04-25T04:16:56.494Z · LW(p) · GW(p)
By my understanding, learning is basically when a program collects the data it uses itself through interaction with some external system. Self-modification, on the other hand, is when the program has direct read/write access to its own source code, so it can modify its own decision-making algorithm directly, not just the data set its algorithm uses.
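One toy way to draw that line (all the names and structure here are invented purely for illustration, not taken from any actual AI system) is a program whose fixed decision rule consults accumulated data, versus one that can overwrite the rule itself:

```python
# Learning: the decision procedure is fixed; only the data it consults grows.
class Learner:
    def __init__(self):
        self.counts = {}  # data gathered through interaction

    def observe(self, outcome):
        self.counts[outcome] = self.counts.get(outcome, 0) + 1

    def predict(self):
        # Fixed algorithm: predict the most frequently observed outcome.
        return max(self.counts, key=self.counts.get)


# Self-modification: the program can replace its own decision procedure.
class SelfModifier(Learner):
    def rewrite_predictor(self, new_predict):
        # Direct write access to its own "code" (here, a bound method).
        self.predict = new_predict.__get__(self)


agent = SelfModifier()
agent.observe("sunny")
agent.observe("sunny")
agent.observe("rain")
print(agent.predict())  # "sunny" -- learned from data, algorithm unchanged

def always_rain(self):
    return "rain"  # a new algorithm that ignores the gathered data entirely

agent.rewrite_predictor(always_rain)
print(agent.predict())  # "rain" -- behavior changed by rewriting the rule
```

The distinction blurs exactly as TheOtherDave notes below this: in a language where code is itself data, the "fixed rule" framing is only as crisp as the system makes it.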
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-25T04:48:00.293Z · LW(p) · GW(p)
This seems to presume a crisp distinction between code and data, yes?
That distinction is not always so crisp. Code fragments can serve as data, for example.
But, sure, it's reasonable to say a system is learning but not self-modifying if the system does preserve such a crisp distinction and its code hasn't changed.
comment by rejuvyesh · 2012-04-16T10:43:43.431Z · LW(p) · GW(p)
Hello everyone!
I am Jayesh Kumar Gupta, from Jodhpur, India. I have been interested in rationality for some years now. I came across this site via HPMOR. I had been reading posts here for a while, trying to wade my way through the gigantic Sequences, but was not confident enough to join this group (people here seem to know so much). Right now I am an undergraduate student at IIT Kanpur. Hopefully I too will contribute something to the site in the future.
Thanks!
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-04-26T22:03:57.696Z · LW(p) · GW(p)
Welcome to Less Wrong! It's good to see that we're drawing an audience from all over the world.
Don't worry about not knowing enough--a good fraction of regulars (myself included) weren't confident enough to join for a while. Now that you're out of lurkerdom, you'll gain confidence quickly. LW can be intimidating, but we're not as scary as we look. :)
comment by thespymachine · 2012-04-03T18:10:39.701Z · LW(p) · GW(p)
Hello to the LessWrong universe.
I'm 23 years old. A lover of music (Last.fm): Ravel, Mozart, Radiohead, Sigur Rós, Animal Collective. And driven to learn.
My goal right now is to become a philosophy professor, and to participate in radical, reason-oriented movements to influence social change.
I value the intellect, the body, life, and the universe. I value learning - to improve the lives of others and myself, and to live most accordingly with 'nature.' I value those who direct themselves in a rational manner.
My rationality quest began when I was a child, always using Legos to build new things and drawing. Eventually video games came into my life and problem solving drove me. However, due to immaturity and the social life of a middle/high schooler, I never really progressed intellectually despite my love for science and 'deep' conversations with friends.
It wasn't until I was 20, and ended my relationship with a girl that philosophical thought dawned upon me. It was sparked by the breakup, because her family was religious and I molded myself to that lifestyle, but when it was over there was nothing there. I suppose, after losing who I thought was the love of my life, I began to search for 'purpose.'
A few philosophy courses and a dive into Stoicism pushed me to realms of thought I had never begun to contemplate.
Since then I've been progressing my learning on my own through literature, philosophical writings, conversation, and free online references. And I found myself here because StumbleUpon led me to a blog by a philosophy professor who linked to this site.
I really want to become a valued member of this community, to help myself and others.
Replies from: None↑ comment by [deleted] · 2012-04-03T20:11:06.540Z · LW(p) · GW(p)
Welcome to Less Wrong! :)
You sound like a pretty studious individual; you might enjoy some of the posts on inexpensive and efficient learning, if you haven't seen them already.
Out of curiosity, what was the name of the blog that led you here?
Replies from: thespymachine↑ comment by thespymachine · 2012-04-03T20:38:24.507Z · LW(p) · GW(p)
Thank you!
Wow, this post you linked to is quite amazing. Thanks a bunch. ("autodidact" - I finally have a word for what I do, ha ha)
anotherpanacea - the exact post is here
comment by [deleted] · 2012-03-30T16:04:10.376Z · LW(p) · GW(p)
Hi, my name is Alexey, and although I've been around the website for a while and have been an active LessWrongian in real life meetups, I haven't actually introduced myself on the website yet. So here it goes.
I am an undergraduate student at the University of Cambridge, specialising in synthetic biology and aiming to go on to do research in that field. I am interested in raising x-risk awareness within the SynBio community and advancing a safe approach to research in this area.
I was introduced to LW by a friend, and soon realised that there is actually a community of rational people interested in much the same things as I am. I have enjoyed reading the Sequences and have definitely learned a lot.
Since finding the LW website and community has been such a great experience for me, I introduced many of my friends to it, have participated in setting up the Cambridge meetup group; and more recently organised the first meetup in Budapest. I find it very rewarding to be able to talk to and make friends with fellow rationalists!
As for my interests within the scope of LW, I find that I am interested in self-improvement in terms of identifying and overcoming biases, building and expanding rationalist communities and working on x-risk reduction in synthetic biology. In fact, I find that biologists are underrepresented within the LW community and hope that my knowledge of the subject can translate into useful contributions to the discussions here on the LW website, and in real life LW meetups!
comment by Crouching_Badger · 2012-03-26T04:07:09.382Z · LW(p) · GW(p)
I'm reposting this here because there was a thread swap and I didn't get any takers in the former thread. Please let me interview you! It will be fun and won't take up too much time!
Hello, my name is Brett, and I am an undergraduate student at the University of North Texas, currently studying in the Department of Anthropology. In this semester, my classmates and I have been tasked with conducting an ethnographic study on an online community. After reading a few posts and the subsequent comments, LessWrong seemed like a great community on which to conduct an ethnography. The purpose of this study is to identify the composition of an online community, analyze communication channels and modes of interaction, and to glean any other information about unique aspects of the LessWrong community.
For this study I will be employing two information gathering techniques. The first of which will be Participant Observation, where I will document my participation within the community in attempts to accurately describe the ecosystem that comprises LessWrong. The second technique will be two interviews held with members of the community, where we will have a conversation about communication techniques within the community, the impact the community has had on the interviewees, and any other relevant aspects that may help to create a more coherent picture of the community.
It is at this point that I would like to ask for volunteers who would like to participate in the interview portion of the study. The interview will take from forty-five minutes to an hour and a half, and will be recorded using one of several applicable methods, such as audio recording or textual logs, depending on the medium of the interview. If there are any North Texas area members who would like to participate, I would like to specifically invite you to a face-to-face interview, as it would be most temporally convenient, though I am also available via Skype, other voice-based online communication systems, or the telephone.
If you are interested in participating, please send me a PM expressing your interest. If there are any questions or comments about the nature of the study, my experience with Anthropology, or anything else, please feel free to reply and create discourse. Thank you for your time.
Replies from: kpreid, Alicorn↑ comment by kpreid · 2012-03-26T18:44:12.309Z · LW(p) · GW(p)
I recommend making a post to Discussion instead of a comment for this purpose.
Replies from: Crouching_Badger↑ comment by Crouching_Badger · 2012-03-27T01:51:05.901Z · LW(p) · GW(p)
Thanks for the advice. I've already gotten two volunteers, though, so I don't think that will be necessary. I will definitely make sure to post there to discuss my research, though.
↑ comment by Alicorn · 2012-03-27T06:56:00.995Z · LW(p) · GW(p)
This survey may interest you in your pursuits.
Replies from: Crouching_Badger↑ comment by Crouching_Badger · 2012-03-27T18:54:44.298Z · LW(p) · GW(p)
Yes! That is great! Thank you so much.
comment by Raiden · 2012-02-24T05:40:19.429Z · LW(p) · GW(p)
Hi, I am Raiden. For most of my life I have been an aspiring rationalist, even though I didn't call myself by that name. I was raised to think that I was some sort of super genius (it was a big shock in my later elementary school years to discover that I wasn't the smartest person in the world). This had the effect of causing me to associate some of my identity with intelligence. This led me to be a traditional rationalist; I had much admiration for the Spock stereotype, and I have been an atheist since childhood despite a fundamentalist religious family. In my freshman year of high school, I was exposed to some self-help books that led me to seriously consider other virtues besides intelligence to be of value. This slowly revolutionized my view of the world.
Over the course of the next summer, I was exposed to the philosophy of Objectivism, and quickly became a strong adherent to it. I was from the beginning in agreement with the "Open Objectivist" group, which says Objectivism is not a complete philosophy. I agree that Objectivism descended into some sort of cult, and that Ayn Rand was one of history's greatest hypocrites. I also came to believe that this didn't undermine the soundness of the philosophy itself. Over time, though, the philosophy began to lose its grip on my mind. I still consider myself to be some sort of Neo-Objectivist, however, as many of Rand's ideas shape my opinions.
Very recently I discovered Less Wrong and was exposed to its version of rationality, which I came to wholeheartedly adore. So far I have at least skimmed the Sequences, and I believe I have a basic understanding of rationality. My goal right now is to read and absorb all the Sequences and then some rationality-related textbooks. With a fundamental understanding of rationality down, I will then re-examine all of my important beliefs, from philosophy to politics to religion. After I come to a better understanding of rationality and the world, I will decide on goals and values and systematically work toward them. I also plan to contribute to Less Wrong.
I am at the age of sixteen, so please don't discriminate against me based on that. I consider myself to be far more mature than most people my age, and far more mature than I was even a few months ago. I am currently recovering from what may only be called an existential crisis, but in my outward behavior I am perfectly stable and sociable. Deep down inside I have a burning desire to know the truth. In my opinion, that is one of the greatest measures of one's character.
Replies from: Arran_Stirton↑ comment by Arran_Stirton · 2012-02-24T20:22:18.447Z · LW(p) · GW(p)
Have you read An Intuitive Explanation of Bayes' Theorem? Personally, from a mathematical standpoint at least, I consider it almost required reading for getting to grips with the more mathy bits of the Sequences. Instead of systematically re-examining all of your beliefs once you have a better understanding of rationality, you can just update as you go along, if you find it easier/more fun that way. You know, "A burning itch to know is higher than a solemn vow to pursue truth," and all that jazz.
Forgive the digression, but is your user name an MGS reference by any chance?
Replies from: Raiden↑ comment by Raiden · 2012-02-26T02:16:38.133Z · LW(p) · GW(p)
I have yet to read the Bayes' Theorem article. I understand that it is very much a prerequisite for many of the others, yet I simply have not. It is a very long and complicated article, and would take a significant investment of time and intellect to read. Procrastination has always been my single greatest flaw, one I struggle with every day. I recognize the importance of reading it, but that's only a belief. It seems very hard to internalize the importance of rationality to a level where I can really feel it deep down. When I think or act irrationally, I recognize that I am doing so, and I consciously recognize that it is wrong. Yet I find it hard to strongly feel that it is so. It seems one needs to understand the importance of rationality intuitively before it can be applied. The elephant must first give the rider some control before he can do anything. Do you have any advice concerning that?
Oh and Raiden is my actual name.
Replies from: Arran_Stirton↑ comment by Arran_Stirton · 2012-02-26T06:21:27.835Z · LW(p) · GW(p)
I can't say I blame you for not reading it; it took me about three months to get through it! However, Common Sense Atheism has An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem; it's much easier to read and explains many of the bits that Eliezer skips over.
As for integrating the importance of rationality, I scarcely know where to begin; it's a large topic. First and foremost, read this. Secondly, realise how important opinions are, and that it's not okay to "have your own opinion" as schools commonly condition their students to believe. That's not to say don't have different opinions from other people when you have a (justified) belief that your opinion reflects reality better; it's to say it's not okay to have false opinions. One of the reasons reading An Intuitive Explanation is important is that it helps convey the idea that your beliefs should correspond exactly to what you expect from reality.
It's hard to put into context why this is so important, but think if you will back to the cause of the WW2 Holocaust. All of that happened because of what people believed. To get a taste of how bad that sort of thing is look into current affairs, things happening in Syria, Bahrain, and so on. Religious extremists are another example of this, as are racists. All of these "evil" people are not inherently evil, they just have beliefs that make them think that what they're doing is right.
All of that injustice done, all of those people hurt and killed because people are not inherently rational. You could end up as one of those victims, but the worst part is you could be the one doing the evil and not even know it. It might help to read this.
On a separate note, regarding emotions and acting irrationally because of them, the best thing is to reduce them: either you'll find that the emotion is flagging up a real problem and you can take the appropriate action, or it'll end up dissipating.
Sorry that turned into a bit of an essay but it is a big topic and my experience of explaining it is limited. If you’ve got questions or anything I’m more than happy to answer/try to answer them. Hope all this is helpful.
And my bad, I’ve only ever encountered your name in relation to MGS, my apologies for that.
Replies from: Raiden↑ comment by Raiden · 2012-02-26T06:51:48.696Z · LW(p) · GW(p)
Thanks for your input, it was quite enlightening. I especially appreciate the Common Sense Atheism post. That's a wonderful blog and what originally led me to this site, but I had no idea that article was on there.
Concerning what you said about the Holocaust and such, that had actually occurred to me before, but in a different manner. I reasoned that even if I felt 99% certain that my moral beliefs were accurate, there was that 1% chance that they could be wrong. Hitler may well have felt 99% certain that he was correct. I became too afraid to really do much of anything. I thought, "What if it is in some weird way the utmost evil to not kill millions of people? It seems unlikely, but it seemed unlikely to Hitler that he was in the wrong. What if somehow similarly it is wrong to try to ascertain what is right? What if rationality is somehow immoral?"
Of course I never actually consciously thought that was true, but I fear my subconscious still believes it. That is my greatest debilitation, that lingering uncertainty. I now consciously hold the idea that it is at least better to try and be right than to not try at all, that it would be better to be Hitler than to be 40 years old and living with my mom, but my subconscious still hasn't accepted that.
I believe that is why I have difficulty integrating rationality. Some part of my mind somewhere says, "But what if this is wrong? What if this is evil? You're only 99.99999999% certain. What if religious fundamentalism is the only moral choice?"
Replies from: TheOtherDave, Arran_Stirton↑ comment by TheOtherDave · 2012-02-26T18:03:32.239Z · LW(p) · GW(p)
It's not a rhetorical question, you know. What happens if you try to answer it?
I have a pill in my hand. I'm .99 confident that, if I take it, it will grant me a thousand units of something valuable. (It doesn't matter for our purposes right now what that unit is. We sometimes call it "utilons" around here, just for the sake of convenient reference.) But there's also a .01 chance that it will instead take away ten thousand utilons. What should I do?
It's called reasoning under uncertainty, and humans aren't very good at it naturally. Personally, my instinct is to either say "well, it's almost certain to have a good effect, so I'll take the pill" or "well, it would be really bad if it had a bad effect, so I won't take the pill", and lots of studies show that which of those I say can be influenced by all kinds of things that really have nothing to do with which choice leaves me better off.
One way to approach problems like this is by calculating expected values. Taking the pill gives me a .99 chance of 1000 utilons, and a .01 chance of -10000 utilons; the expected value is therefore .99*1000 - .01*10000 = 990 - 100; the result is positive, so I should take the pill. If I instead estimated a .9 chance of upside and a .1 chance of downside, the EV calculation would be 900 - 1000; negative result, so I shouldn't take the pill.
There are weaknesses to that approach, but it has definite advantages relative to the one that's wired into my brain in a lot of cases.
The same principle applies if I estimate a .99 chance that by adopting the ideology in my hand, I will make better choices, and a .01 chance that adopting that ideology will lead me to do evil things instead.
Of course, what that means is that there's a huge difference between being 99% certain and being 99.99999999% certain. It means that there's a huge difference between being mistaken in a way that kills millions of people, and being mistaken in a way that kills ten people. It means that it's not enough to say "that's good" or "that's evil"; I actually have to do the math, which takes effort. That's an offputting proposition; it's far simpler to stick with my instinctive analysis, even if it's less useful.
At some point, the question becomes whether I feel like making that effort.
↑ comment by Arran_Stirton · 2012-02-26T23:42:19.352Z · LW(p) · GW(p)
Glad to be of help!
Well, the thing about probabilities (in Bayesian statistics) is that they represent the amount of evidence you have about the true state of reality. In general, being 50% certain means you have no evidence for your belief, less than 50% means you have evidence against it, and greater than 50% means you have evidence for it. You'll get to it as you read more of An Intuitive Explanation.
The important thing to note is that to be 99% certain something is true as a rationalist you actually have to have evidence for it being true. Rather than feeling that you're 99% certain, Bayes theorem allows you to see how much evidence you actually have in a purely quantitative way. That's why there's so much talk of "calibration" here, it's an attempt at aligning the feeling of how certain you are with how certain the evidence says you should be.
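A minimal sketch of the update rule behind that idea (the prior and likelihood numbers here are made up purely for illustration):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after seeing evidence E."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Starting from total ignorance (50%), evidence four times likelier
# under H than under not-H moves the probability to 80%:
print(round(bayes_update(0.5, 0.8, 0.2), 6))  # 0.8

# Evidence equally likely either way teaches nothing -- still 50%:
print(round(bayes_update(0.5, 0.3, 0.3), 6))  # 0.5
```

The point of calibration is that your felt certainty should track the output of calculations like this, rather than floating free of the evidence.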
You can also work out the expected value of your actions if you turn out to be wrong. For Hitler, if he thought there was a 1% chance of him being wrong, he could work out the expected number of wasted lives as 0.01*11,000,000, which is 110,000 (and that's using the lower bound of people killed during the Holocaust). Hence, if I were Hitler, I wouldn't risk instigating the Holocaust until I had much more information/evidence. Being rational is about looking at the way the world is and acting based on that.
The point is, the most moral thing to do is the thing most likely to be moral. If God turns out to exist (although there are masses of evidence against that) and he asks you why you weren't a religious fundamentalist, you'll have a damn good answer.
comment by Spectral_Dragon · 2012-01-15T03:31:59.569Z · LW(p) · GW(p)
Hello! I came here researching free will for a school project. I'm currently 18, studying science at a fairly basic level in a small town in Sweden. I've read a few articles so far, and the sheer number of interesting thoughts in them made me want to stay. When I read what Less Wrong stands for, I knew I wanted to be a part of it, to try to become a better, hopefully wiser person.
I've liked philosophy for a long time, and don't usually accept "because" as an answer for anything; I want to find out the reasons behind everything. I'm not yet as far along as I'd wish: I want to read a lot of the articles, but have limited time. Still, I find it difficult to abandon half-read articles, even though they can be a bit of a long read compared to what I'm used to, books excluded.
Since I'm easily influenced by new ideas, as long as they make sense, I expect my views to shift a lot. Less Wrong seems interesting, anyway, and I want to know more: more perspectives and more thoughts. So far it seems wonderful, and I think I'll like it here. I hope the community can overlook shortcomings when needed, but I'm expecting you all to be a nice bunch.
For science, and a greater understanding. Hopefully I'll be able to learn from you. But it's late, so I'll be going now. Just thought I'd say hi.
comment by jacobt · 2012-01-15T01:48:07.570Z · LW(p) · GW(p)
Hello. I'm a 19-year-old student at Stanford University majoring in Computer Science, with a particular interest in artificial intelligence. I've been reading Less Wrong for a couple of months and I love it! There are lots of great articles and discussions, both about things I already think about a lot and about things I hadn't considered until I read them here.
I've considered myself a rationalist for as long as I can remember. I've always loved thinking about philosophy, reading philosophy articles, and discussing philosophy with other people. When I started reading Less Wrong I realized that it aligned well with my approach to philosophy, probably because of my interest in AI. In the course of searching for a universal epistemology I discovered Solomonoff induction, an idea I've been obsessed with for a couple of years; I even wrote a paper about it. I've been trying to apply the concept to epistemology and cognitive science.
My current project is to make a practical framework for resource-bounded Solomonoff induction (Solomonoff induction where the programs are penalized for taking too much time). Since resource-bounded induction is NP-complete (and hence in NP), candidate solutions can be verified in polynomial time, so I decided to create a framework for verifying generative models. Say we have a bunch of sample pieces of data, all generated by the same distribution. The distribution can be modeled as a program that randomly generates a sample (a generative model). The model can be scored by Bayes factor: P(model) × P(sample[0]|model) × P(sample[1]|model) × ... In practice it's easier to take the negative log of this quantity, giving the total amount of information contained in the model plus the amount of information the model needs to replicate the data. It's possible to prove lower bounds on P(sample[i]|model) by showing which random decisions the model can make in order to produce sample[i]; I've partially developed a proof framework for proving lower bounds on these probabilities. The model is also penalized for taking too much time, so that it's actually practical to evaluate models. I've started implementing this system in Python.
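In description-length terms, the scoring rule above is: bits to specify the model, plus bits the model needs to reproduce each sample, plus any runtime penalty. A minimal sketch (function and variable names are mine, not from the actual framework):

```python
import math

def log_score(model_bits, sample_log2_probs, time_penalty_bits=0.0):
    """Negative-log Bayes-factor score, in bits; lower is better.
    sample_log2_probs holds log2 P(sample[i] | model) for each sample."""
    data_bits = sum(-lp for lp in sample_log2_probs)  # bits to encode the data given the model
    return model_bits + data_bits + time_penalty_bits

# Hypothetical model of 100 bits that assigns each of 3 samples probability 1/8:
print(log_score(100, [math.log2(1 / 8)] * 3))  # 100 + 3*3 = 109.0 bits
```

A proved lower bound on P(sample[i]|model) translates directly into an upper bound on data_bits, so partial proofs still give usable (conservative) scores.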
Part of the reason why I created an account is because there are some deep philosophical issues that algorithmic generative models raise. Through studying these generative models scored by Bayes factor, I've come up with the generative explanation hypothesis (which I'll call the GEH): explanatory models are generative, and generative models are explanatory. A good generative model for English text will need to explain things like abstract ideas, just as a good explanatory model for English text does. The GEH asserts that any complete explanation of English text can be translated into a procedure for generating English text, and vice versa. If true, the GEH implies that explanatory ideas should be converted into generative models, both to make the models better and to understand the idea formally.
There are a few flaws I see so far with the GEH. The Bayes factor counts all information as equal, when some information is more important than the rest: a generative model that gets the important information right (say, an explanation of how the objects in an image are arranged) should be preferred to one that merely does a little better on low-level information (say, by slightly improving its noise-prediction algorithm). Also, if the model is penalized for taking too much time, it might spend its time budget optimizing unimportant information. In these respects, algorithmic generative models scored by Bayes factor seem worse than human explanatory models.
Regardless of the GEH I think generative models will be very useful in AI. They can be used to imitate things (such as in the Turing test). They also can be used for biased search: if a program has a relatively high probability of generating the correct answer when given a problem, and this has been verified for previous problems, it is likely to also be good for future problems. The generative model scoring system can even be applied to induction methods: good induction methods have a high probability of generating good models on past problems, and will likely do well on future problems. We can prove that X induction method, when given problem P, has at least probability Q of generating (already known) model M with score S. So a system of generative models can be the basis of a self-improving AI.
In summary I'm interested in algorithmic generative models both from a practical perspective of creating good scoring methods for them, and from a philosophical perspective to see to what extent they can be used to model abstract ideas. There are a lot of aspects of the system I haven't fully explained but I'd like to elaborate if people are interested. Hopefully I'll be discussing these ideas as well as lots of others here!
comment by sabre51 · 2011-12-27T20:16:27.230Z · LW(p) · GW(p)
I've posted a few rationality quotes, so it sounds like it's time to introduce myself. I'm a 22-year-old software project manager from Wisconsin; I've been reading LW since June or so, when MoR was really going strong.
I've been a very rational thinker for my whole life, in terms of explicitly looking for evidence/feedback and updating behaviors and beliefs, but only began thinking about it formally recently. I was raised Christian, and I consider my current state the result of a slow process of resolving dissonance based on contradictions or insufficient/contrary evidence. I'm most interested in theory of government and achieving best results given the rather unreliable ability of voters to predict or understand outcomes of different policies.
I also think, though, that ethics is just as important as rationality: choosing the correct goals is just as necessary as succeeding at those goals. I've seen an appreciation of this within LW that, for me, really sets it apart, so I hope I can make a larger contribution. As someone once said, the choice between Good and Evil is not about saying one or the other, but about deciding which is which.
Replies from: orthonormal↑ comment by orthonormal · 2011-12-28T01:55:25.663Z · LW(p) · GW(p)
Welcome! If you're near Madison, there's a regular meetup there (on Mondays) which I highly recommend.
As someone once said, the choice between Good and Evil is not about saying one or the other, but about deciding which is which.
Is that from The Sword of Good, or another source?
comment by Gob_Bluth · 2011-12-27T17:57:37.145Z · LW(p) · GW(p)
Hello, I'm a high school senior who discovered this site somewhere on reddit. I deeply enjoyed this article (http://yudkowsky.net/rational/the-simple-truth) and decided to check out more posts. I'm planning on studying engineering in college, but I try to have well-rounded knowledge of a myriad of subjects apart from math and science. The content here is very enticing and intellectually stimulating, and I will probably frequent this site in the future.
Replies from: KPier
comment by Vaniver · 2011-12-26T22:39:49.299Z · LW(p) · GW(p)
It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation.
I'd like to note that while it's acceptable to ask for an explanation, it is downright counterproductive to be petulant. Don't bother getting upset until you know why.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-12-27T00:14:27.493Z · LW(p) · GW(p)
I'm not sure it's possible to avoid getting upset in the short run. However, it's a good idea to avoid showing that you're upset until you find out something about what's going on.
comment by jeronimo196 · 2020-02-22T19:21:05.202Z · LW(p) · GW(p)
Hello lesswrong community!
"Who am I?" I am a Network Engineer, who once used to know a bit of math (sadly, not anymore). Male, around 30, works in IT, atheist - I think I'll blend right in.
"How did I discover lesswrong?" Like the vast majority, I discovered lesswrong after reading HPMOR many years ago. It remains my favourite book to this day. HPMOR and the Sequences taught me a lot of new ideas and, more importantly, put what I already knew into a proper perspective. By the time HPMOR was finally finished, I was no longer sure where my worldview happened to coincide with Mr. Yudkowsky, and where it was shaped by him entirely. This might be due to me learning something new, or a mixture of wishful thinking, hindsight bias and the illusion of transparency, I don't know. I know this - HPMOR nudged me from nihilism to the much rosier and downright cuddly worldview of optimistic nihilism, for which I will be (come on singularity, come on singularity!) eternally grateful.
"When did I became a rationalist?" I like to think of my self as rational in my day-to-day, but I would not describe myself as a rationalist - by the same logic that says a white belt doesn't get to assume the title of master for showing up. Or have I mixed those up and "rational" is the far loftier description?
"Future plans?" I am now making a second flyby over the Sequences, this time with comments. I have a few ideas for posts that might be useful to someone and a 90% complete plotline for an HPMOR sequel (Eliezer, you magnificent bastard, did you have to tease a Prologue?!!!).
Looking forward to meeting some of you (or anyone, really) in the comments and may we all survive this planet together.
Replies from: habryka4↑ comment by habryka (habryka4) · 2020-02-22T19:29:45.268Z · LW(p) · GW(p)
Welcome! Glad to have you join us!
Replies from: jeronimo196↑ comment by jeronimo196 · 2020-02-29T22:22:45.868Z · LW(p) · GW(p)
Thank you! You have no idea how happy your reply makes me! In an irrationally large part because I've seen your name in a book, but I just cannot help myself. You are alive! (Duh!) More importantly, the Less Wrong community is alive! (Double Duh! But going through the Sequences' comments can be a bit discouraging - like playing the first levels of an MMORPG while the experienced player base has moved on to level 50.) Hopefully, we'll have many interesting discussions once I catch up. So much to look forward to! Will Alicorn be there? Will TheOtherDave explain what happened to the original Dave? You guys are legends.
P.S. Sorry for the delayed response, I didn't notice the number next to the bell earlier. I'll make sure to check it frequently from now on.
Replies from: habryka4↑ comment by habryka (habryka4) · 2020-03-01T00:03:23.240Z · LW(p) · GW(p)
Glad to hear that! :) Looking forward to many future conversations, and sorry for the bell icon not being as obvious.
Replies from: jeronimo196↑ comment by jeronimo196 · 2020-03-02T12:19:28.343Z · LW(p) · GW(p)
No worries :) and no reason to be sorry - the bell is quite obvious on PC, but my Android phone only shows it when scrolling. Probably an issue on my side.
comment by GESBoulder · 2012-06-14T00:31:06.968Z · LW(p) · GW(p)
Hello to the LW Community. My name is Glenn, 49, from Boulder, Colorado. After completing my Master's degree in Economics, I began a career in investment management, with a diversion into elected politics (a city council, a regional council of governments, then the Colorado state legislature, along with corporate and non-profit boards). My academic work focused on decision theory and risk analysis, and my vocation on their practical application. Presently, I manage several billion dollars' worth of fixed-income portfolios on behalf of local governments and non-profits across the United States. I've also worked with the U.S. government doing training for centrist, pro-democracy parties in the emerging world.
My path to you was through a YouTube interview of Steve Omohundro. My path to him was general background research on AI, space exploration, energy, computer science and nanotech, in my sometimes seemingly vain attempt to keep pace with the accelerating change in the world.
As for what is left of my religion: starting off halfway gone as a Presbyterian, and after subjecting it to astrophysics (my original undergrad major), evolution, Jung, critical analysis of the Bible, skepticism, Lucifer (as in the light-bearing meme of the Enlightenment and the American Revolution), objectivism, experience, and rationalism, my belief is well outside of orthodoxy; call it Christian humanism. I remain very skeptical of the genius of anyone or any group to plan or scheme or act as a virtuous vanguard. I believe that power is best defused.
I bring to the table experience and knowledge of economics, finance, politics and public policy formation. I'll do a lot of deferring on other subjects. I think the work here on rationalism and at SI is of critical importance. You all have my highest regard. I too look forward to your influences on me becoming less wrong.
Replies from: Vaniver, Mitchell_Porter↑ comment by Mitchell_Porter · 2012-06-15T00:09:03.261Z · LW(p) · GW(p)
I've also worked with the U.S. government doing training for centrist, pro-democracy parties in the emerging world... I believe that power is best defused.
Geopolitical bomb disposal?
comment by Paul_G · 2012-06-08T02:19:27.972Z · LW(p) · GW(p)
Hi! My name is Paul, and I've been an aspiring rationalist for years. A long time ago, I realized implicitly that reality exists, and that there is only one. I think "rationality" is the only reasonable next thing to do. I pretty much started "training" on TvTropes, reading fallacies and the like there, as well as seeing ways to analyze things in fiction. The rules there apply to real life fairly well.
From there, I discovered Harry Potter and the Methods of Rationality, and from there, this site. Been reading quite a bit on and off over the past little while, and decided to become a bit more active.
I just visited a meetup group in Ottawa (about a two-hour drive away), and I no longer feel like the only sane man in the world. Meeting a group of Bayesian rationalists was incredibly enlightening. I still have a lot to learn.
comment by prashantsohani · 2012-06-01T17:20:49.851Z · LW(p) · GW(p)
Hello, everyone! I'm 21, soon to graduate from IIT Bombay, India. I guess the first time I knowingly encountered rationality was at 12, when I discovered the axiomatic development of Euclidean geometry, as opposed to the typical school progression of teaching mathematics. This initial interest in problem-solving through logic was fueled further through my later (and ongoing) association with the Mathematics Olympiads and related activities.
Of late, I find my thoughts turning ever more to understanding the workings and inefficiencies of our macro-economy, and how they connect with basic human thought and behavior. I very recently came to know of Red Plenty, which seems generally in line with the evolutionary alternative described in the foreword to Bucky Fuller's Grunch of Giants, and that is what made me feel the need to come here and actively study and discuss these and related ideas with a larger community.
Having just started with the Core Sequences, looking forward to an enriching experience here!
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2012-06-01T17:38:49.719Z · LW(p) · GW(p)
Well, welcome, and I hope you find yourself happy and interested here!
comment by e_c · 2012-05-14T15:44:23.027Z · LW(p) · GW(p)
Hello folks! I'm a student of computer science, found Less Wrong a few years ago, read some articles, found myself nodding along, but didn't really change my mind about anything significant. That is, until recently I came across something that completely shattered my worldview and, having trouble coping with that, I found myself coming back here, seeking either something that would invalidate this new insight or help me accept it if it is indeed true. Over the past few days, I have probably been thinking harder than ever before in my life, and I hope to contribute to discussions here in the future.
Replies from: athingtoconsider↑ comment by athingtoconsider · 2012-06-13T13:13:43.963Z · LW(p) · GW(p)
What's the insight?
comment by Ghatanathoah · 2012-05-10T07:39:21.447Z · LW(p) · GW(p)
Hi everyone. I have been lurking since the site started, but did not have the courage to start posting until recently. I am a male college graduate in his mid-twenties, happily engaged and currently job-hunting, and I have been fascinated by science and reason since I was a child. I was one of those people who actually identified with the "Hollywood Rational" robots and aliens in science fiction and wanted to be more like them. Science and science fiction socialized me and made me curious about the inner workings of the universe.
I love the sequences and consider them a major influence on the way I think. The insights into reasoning, psychology, and metaethics the sequences gave me helped make me who I am today. Less Wrong made me a consequentialist and an altruist. It helped me realize that ethical naturalism might be true after all. I learned about akrasia from LW, which caused me to reject the poisonous cynicism that Revealed Preference Theory had infected me with. It's helped me put my life in order a little better, although I'm still fighting akrasia.
My only regret is that I recently started suffering bouts of severe depression because something snapped and made me start thinking about existential risks in Near Mode instead of Far Mode. I suspect it was Robin Hanson's "em" posts, which made me realize that AI could still threaten the future of the human race even if the FOOM theory turned out to be incorrect. I sometimes wish with all my heart that I could bleach the em posts out of my brain and return to a higher level of happiness, start believing in Julian Simon and the promise of the future again. But on the other hand those posts have caused me to think about certain topics much harder and more clearly than I would have otherwise.
I'm not a very prolific poster so far, but I think it's high time I started being part of the community that's been part of my life for so long.
comment by farsan · 2012-04-16T06:56:12.540Z · LW(p) · GW(p)
Greetings, everyone.
My name is Francisco, and I am from Malaga, Spain. I am a dabbling rationalist, and a programmer/troubleshooter.
I started walking the path of rationality when I began keeping track of good-luck/normal-luck/bad-luck events in order to check whether Murphy's law was actually true, and then wondering why people actually believed in it. Later, I started reading about fallacies, and I finally arrived at LW via HPMoR, like many people.
I am currently reading my way through the Sequences, but my current project is to make Bayes' theorem more accessible to people without math backgrounds. I have a couple of ideas that I'd like to refine and share at this community, even if English is my second language.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-04-26T22:06:26.090Z · LW(p) · GW(p)
Welcome to Less Wrong!
I am currently reading my way through the Sequences, but my current project is to make Bayes' theorem more accessible to people without math backgrounds. I have a couple of ideas that I'd like to refine and share at this community, even if English is my second language.
Good for you! Having English as your second language may actually make you better suited to explaining Bayes' Theorem and discussing rationality in general.
comment by larsyencken · 2012-04-08T10:20:34.393Z · LW(p) · GW(p)
Hi all,
My name's Lars. I'm from Melbourne, Australia, and have a background in software/mathematics/languages. I've also tutored classes in logic and artificial intelligence. Like a lot of folks commenting here, I've been reading articles on LessWrong for a while, but now I'm keen to understand the community around it a bit more.
I've been interested in rationality for some years. One of my favourite posts so far is "Intellectual Hipsters and Meta-contrarianism". It helped me notice signalling in arguments, and greatly reduce how much I do it myself.
I think people struggle to keep track of all the different aspects of big societal issues, so I'm very interested in tools to help people share their arguments, evidence and understanding better. I notice that when we talk about issues, our short-term memory severely limits the depth of what we can discuss. Writing is definitely better, but I wonder, is it the endpoint? Has anyone had much success with argument mapping tools, or other alternative ways of expressing reasoning and evidence?
Replies from: TheOtherDave, wedrifid↑ comment by TheOtherDave · 2012-04-08T14:37:22.249Z · LW(p) · GW(p)
Writing is definitely better, but I wonder, is it the endpoint? Has anyone had much success with argument mapping tools, or other alternative ways of expressing reasoning and evidence?
That's an excellent question. I haven't, but would be interested in exploring this if you have a preferred tool.
I've gotten some benefit when talking about complex issues from introducing formalisms such as labeling key entities and using those labels rather than vague pronouns, or being precise about "there exists an X" vs "for all X", and stuff like that. That said, there are signalling difficulties with doing that in most communities.
Replies from: larsyencken↑ comment by larsyencken · 2012-04-09T00:41:21.349Z · LW(p) · GW(p)
I've tried Rationale before (http://rationale.austhink.com/), but unfortunately it's not free. It's good at organising evidence and counter-evidence, teasing out premises, and trying to ground each one.
With larger arguments, it helps a lot in keeping track of all the parts -- better than writing. Where it fell down was in comparing the relative weight of different pieces of evidence, or in general handling uncertainty.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-09T01:11:42.483Z · LW(p) · GW(p)
Mm. Not really interested in paying for the privilege at the moment.
↑ comment by wedrifid · 2012-04-08T11:52:36.382Z · LW(p) · GW(p)
My name's Lars. I'm from Melbourne, Australia
Welcome, Lars. There are quite a few of us from Melbourne.
Replies from: larsyencken↑ comment by larsyencken · 2012-04-09T00:42:34.486Z · LW(p) · GW(p)
Thank you. I understand there was a meetup last week. Do they run regularly?
Replies from: wedrifid↑ comment by wedrifid · 2012-04-09T08:10:38.592Z · LW(p) · GW(p)
Thank you. I understand there was a meetup last week. Do they run regularly?
They seem to run regularly, and I believe there are several different kinds of meetups. I'm not really the one to ask though - I haven't been to one. Something always seems to come up.
comment by Mutasir · 2012-04-06T08:39:46.825Z · LW(p) · GW(p)
Hello everyone! I'm a 19-year-old BA student of Finance & Accounting from Poland. I've been interested in rationalism for some time, yet in my country the internet community oriented around it is rather fledgling, and mostly just non-theist in nature. I was brought here by HPMOR. I know Bayes' theorem from my statistics classes, but it wasn't until recently that I began to understand how it could influence my way of thinking.
Please forgive me if I make small language errors in my posts; while I understand almost everything written here (barring things that I would not initially comprehend even if I were a native speaker, of course ;) ), it has been a long time since I have written anything in English myself and my skills need a little polish.
I'm now somewhere midway through the Core Sequences, but I hope to participate in the discussions in the future. I am very happy to join you here :)
Mutasir
Replies from: TheOtherDave, MarkusRamikin↑ comment by TheOtherDave · 2012-04-06T14:19:02.757Z · LW(p) · GW(p)
Welcome! Don't let language anxiety keep you from participating here; your English seems more than adequate to the job.
↑ comment by MarkusRamikin · 2012-04-06T10:32:04.928Z · LW(p) · GW(p)
Witam.
comment by StephenCole · 2012-04-01T06:02:00.023Z · LW(p) · GW(p)
Hello, all.
I'm an agnostic artist and general proponent of thinking (although I hope to become a more specific proponent of thinking now that I'm here) who enjoys working behind the scenes.
I'm the new executive assistant for the Center for Modern Rationality, and look forward to doing what I can to help get the Center running as smoothly as possible. If I'm doing my job right, you shouldn't even know I'm here.
comment by FeatherlessBiped · 2011-12-31T00:55:49.790Z · LW(p) · GW(p)
(Reposted from the wrong thread, per Kutta's suggestion)
If by "rationalist", the LW community means someone who believes it is possible and desirable to make at least the most important judgements solely by the use of reason operating on empirically demonstrable facts, then I am an ex-rationalist. My "intellectual stew" had simmered into it several forms of formal logic, applied math, and seasoned with a BS in Computer Science at age 23.
By age 28 or so, I concluded that most of the really important things in life were not amenable to this approach, and that the type of thinking I had learned was useful for earning a living, but was woefully inadequate for other purposes.
At age 50, I am still refining the way I think. I come to LW to lurk, learn, and (occasionally) quibble.
Replies from: orthonormal↑ comment by orthonormal · 2012-01-04T01:15:45.120Z · LW(p) · GW(p)
Welcome!
If by "rationalist", the LW community means someone who believes it is possible and desirable to make at least the most important judgements solely by the use of reason operating on empirically demonstrable facts
You'll be relieved to know that's not quite the Less Wrong dogma; if you observe that your conscious deliberations make worse decisions in a certain sphere than your instincts, then (at least until you find a better conscious deliberation) you should rely on your instincts in that domain.
LWers are generally optimistic about applying conscious deliberation/empirical evidence/mathematical models in most cases besides immediate social decisions, though.
Replies from: FeatherlessBiped↑ comment by FeatherlessBiped · 2012-01-08T05:51:04.293Z · LW(p) · GW(p)
Thanks for the introduction and welcome. Upvoted.
comment by Andrew-Psyches · 2011-12-31T00:18:17.131Z · LW(p) · GW(p)
Hi
I'm Andrew, a 41-year-old actuary living in Chicago (and Sao Paulo in the summers). I came to rationality under the influence of Ayn Rand and the writing of Richard Dawkins, but actually found the site after being sent a link by my sister. I am not a computer programmer at all, but I read extensively on subjects like behavioral psychology, physics, genetics, evolution, and anything interesting related to real science. I am trying to apply the lessons from behavioral psychology and many other fields (including game theory, space design, and the use of incentives) to the problem of getting people healthy. In that sense, I describe myself as a wellness actuary.
I'm an atheist and I'm looking forward to learning more by reading the posts on this blog and getting to know the interesting minds that seem to populate this community.
Andrew
comment by Laur · 2011-12-28T00:30:28.406Z · LW(p) · GW(p)
Hi, I'm Laur, I'm in my mid-thirties (wow, when did that happen?), a software developer from Romania, currently living in the Netherlands. I found this site, as many others, via MoR, and I've been lurking for a while now - I'm subscribed to the RSS feed and slowly working my way through the sequences.
When young (and arguably foolish), I made a few "follow your heart" kinds of decisions that resulted in significant damage to my personal life, finances and career. For the past seven years I've been working my way out of that hole, mainly by analysing and double-checking my personal choices in a rational way, and it has paid off in a big way. I learned that the heart does not think, and that first instincts are good for keeping you out of the reach of lions, but worthless when contemplating a complicated problem with far-reaching consequences.
I personally believe in a humanist approach to rationality, where people are taught, helped and guided along this path. I'd rather live in a world where most people are rational most of the time than in one where some people are rational all of the time. Working towards that end, I've recommended LW (and MoR) to most people I know.
Replies from: orthonormal↑ comment by orthonormal · 2012-01-04T01:52:59.562Z · LW(p) · GW(p)
Welcome!
I'd rather live in a world where most people are rational most of the time than in one where some people are rational all of the time.
Could you expand on this? Being more rational, in the sense that LWers use it, isn't about acting like Spock all the time; instrumental rationality for humans includes relaxing, being silly, and all of the other things that make us more effective and happier overall.
comment by JohnW · 2011-12-27T16:40:33.285Z · LW(p) · GW(p)
Hi everyone. I am an engineering graduate student in the SF Bay area, and will be working at a tech company in the south bay starting in the summer.
I have been lurking on this forum for about a year and a half, but this post convinced me to register for an account. I serendipitously found Less Wrong through an interesting post about the Amanda Knox murder trial. I have read a few of the sequences and all of MoR. I hope to get more involved in the future!
comment by RogerS · 2013-02-28T00:19:13.973Z · LW(p) · GW(p)
Retired Mechanical Engineer with the following interests/prejudices.
Longstanding interest in philosophy of science especially in the tradition of Karl Popper.
Atheist to a first approximation, but I can accept that some forms of religious belief can be regarded as "translations" of beliefs I hold, and am therefore not that keen on the "New Atheist" approach. I belong to a Humanist group in London (where I heard of LW), which has led me to revive an old interest in moral philosophy, especially as applied to political questions.
Happy to be called a Rationalist so long as that encompasses a rational recognition of the limits of rationality.
Regularly read New Scientist, but remain philosophically unconvinced by the repeated claim therein that Free Will is an illusion (at least as I understand the term).
Recently discovered Bayes' theorem as explained by Nate Silver, and can begin to see why LW is so keen on it.
I've reached my own conclusions on a number of questions related to the above and am looking forward to discovering where they fit in and what I've missed!
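Since Bayes' theorem keeps coming up in these introductions, here is a minimal worked version, using the mammography numbers from An Intuitive Explanation (1% base rate, 80% true-positive rate, 9.6% false-positive rate):

```python
def bayes(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# A positive test raises a 1% prior only to about 7.8%:
print(round(bayes(0.01, 0.8, 0.096), 4))  # 0.0776
```

The counterintuitive smallness of that posterior is exactly the point the Intuitive Explanation hammers on: the base rate dominates unless the test is very discriminating.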
comment by TheEleaticStranger · 2012-07-03T01:48:17.824Z · LW(p) · GW(p)
Hi, I am interested in the neurobiology of decision-making and rationality and happened to stumble upon this site and decided to join.
-Cheers.
Replies from: shokwave
comment by Jost · 2012-06-23T15:13:17.358Z · LW(p) · GW(p)
Hey everyone,
I'm Jost, 19 years old, and studying physics in Munich, Germany. I came across HPMoR in mid-2010 and am currently translating it into German. That's how I found LW, and I dropped by from time to time to read some stuff – mostly from the Sequences, but rarely in sequence. I started reading more of LW this spring, while a friend and I were preparing a two-day introductory course on cognitive biases entitled “How to Change Your Mind”. (Guess where that idea came from!)
I'm probably going to be most active in the HPMoR-related threads.
I was very intrigued by the Singularity- and FAI-related ideas, but I still feel kind of a future shock after reading about all these SL4 ideas while I was at SL1. Are there any remedies?
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-06-25T14:39:08.801Z · LW(p) · GW(p)
There are two remedies: Thinking about the ideas, and reading other people's thoughts about the ideas.
I generally recommend the former first, followed by the second, followed by the first again - don't read too much without giving yourself time to think the ideas through for yourself.
My general rule with new ideas is to get the summary first and think it through - my personal goal is to have (at least) one criticism, (at least) one supporting argument, and (at least) one derived idea before I read other people's thoughts on the matter.
comment by Alerus · 2012-05-07T15:48:56.634Z · LW(p) · GW(p)
Hi! So I've actually already made a few comments on this site, but had neglected to introduce myself so I thought I'd do so now. I'm a PhD candidate in computer science at the University of Maryland, Baltimore County. My research interests are in AI and Machine Learning. Specifically, my dissertation topic is on generalization in reinforcement learning (policy transfer and function approximation).
Given this, AI is obviously my biggest interest, but my study of AI has also led me to apply the same concepts to human life and reasoning. Lately, I've been thinking more about systems of morality and how an agent should reach rational moral conclusions. My knowledge of existing work in ethics is not profound, but my impression is that most systems seem to be at too high a level to make concrete (my metric is whether we could implement it in an AI; if we cannot, then it's probably too high-level for us to reason strongly with it ourselves). Even desirism, which I've examined at least somewhat, seems to be a bit too high-level, but is perhaps closer to the mark than others (to be fair, I may just not know enough about it). In response to these observations, I've been developing my own system of morality that I'd like to share here in the near future to receive input.
comment by Helloses · 2012-04-03T23:29:33.349Z · LW(p) · GW(p)
Hi, I'm a long-time reader of Eliezer's various scribblings and I'm interested in getting a meetup group going in Minneapolis after we've had a few false starts. This is the post I'm trying to gather the karma to enable:
Meetup: Twin Cities, MN (for real this time)
THE TIME: 15 April 2012 01:00:00PM (-0600) THE PLACE: Purple Onion Coffeeshop, 1301 University Avenue Southeast, Minneapolis, MN
Hi. Let's make this work.
Suggested discussion topics would be:
- What do we want this group to do? Rationality practice? Skill sharing? Mastermind group?
- Acquiring guinea pigs for the furtherance of mad science (testing Center for Modern Rationality material)
- Fun - what it is and how to have almost more of it than you can handle
If you'd like to suggest a location closer to you or a different time, please comment to that effect. If you know a good coffeeshop with ample seating in Uptown or South Minneapolis, we could meet there instead. Also comment if you'd like to carpool.
If you're even slightly interested in this, please join up or at least comment.
Folks, let's hang out and take it from there.
I'm a concert pianist and freelance computer guy for small businesses. My hobbies include indie art games and reverse-trolling A.T.Murray/Mentifex on AGI-list. Cheers.
Replies from: arundelo↑ comment by arundelo · 2012-04-04T03:11:53.774Z · LW(p) · GW(p)
reverse-trolling [Ment.if.ex]
You have spoken his name. If this summons him, it's on you!
comment by [deleted] · 2012-04-01T01:49:54.341Z · LW(p) · GW(p)
Hello, I am a very likable, shy young person who lives in Austria and loves you guys.
Replies from: Barry_Cotter↑ comment by Barry_Cotter · 2012-04-01T02:36:14.908Z · LW(p) · GW(p)
Welcome to Less Wrong! How did you find the site? What interests you most here? Good luck with posting. At the top left there's an envelope; if it's red, you have one or more new messages.
Sorry but I use any excuse to inflict my German upon others.
Replies from: None↑ comment by [deleted] · 2012-04-01T10:26:59.642Z · LW(p) · GW(p)
Thanks. :) I found the site two years ago via Google. My German is still very bad and I love excuses to practice, too! Are you up for some? I'm interested in the rationality idea; it's what I've always been looking for in life, because I need reason and purpose. I've read most of the Sequences, not with any particular goal, but because they are so interesting and thought-provoking. I'm interested in how the world can be made a better place. My only fear is that I'm too dumb and won't understand enough :'p. Very excited to take part and maybe get to know other people.
comment by pleeppleep · 2012-02-27T00:08:01.393Z · LW(p) · GW(p)
Hi, I'm Josh. I found this site by way of HPMOR more than half a year ago, but just now got around to making an account. I hadn't seen any reason to until I actually had something to add to a conversation. After registering and leaving a few comments here and there, I figured I may as well introduce myself.
I'm 17 years old and trying to narrow down what to do with my life. My long-term goal, much like that of most patrons of this site, is to do as much as I can to aid the development of FAI. I'm smarter than the vast majority of people, but I doubt that I'm anywhere near intelligent enough to contribute directly to the project, so the issue becomes finding a career that pays enough to allow large donations while also satisfying short-term needs and pressures (most of which are related to serving my ego, which is of an astronomical size).
I'm generally a slacker due to akrasia, with a C average for my first two years of high school despite almost straight A's on exams (I've raised it to a B average after finding Less Wrong, but I'm still putting off doing my homework even at this moment).
I spend a good deal of time trying to figure out ways to introduce rationality to my friends and relatives, but without much luck. Any advice on the issue would be helpful, but I think that question would be more appropriate for an open thread or discussion.
I'm motivated on the most basic level by the fact that something is horribly wrong with the world when it doesn't have to be. If I could sum up my life in any one purpose, it would be ensuring that death is banished from the world, never to touch mankind again. This is the same sentiment that led to the creation of this community, and I will try to offer as much as I can.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-02-27T00:38:33.910Z · LW(p) · GW(p)
Welcome, Josh! It sounds like you're in a similar place to my brother right now, with similar interests. He goes by zephyrianr on LW; maybe you could send him a message if you're interested in talking about these issues. Especially when I read your phrase, "If I could sum up my life in any one purpose it would be ensuring that death is banished from the world never to touch mankind again," I think you two would get along well.
comment by yaxy2k · 2012-02-07T06:04:02.229Z · LW(p) · GW(p)
Hi all! I am a 23-year-old Singaporean student studying Computer Science in the United States. I'm interested in Psychology, Statistics, Math, Physics, Biology, Chemistry, Politics, and some other things. It is an exciting time to be young! I'm really looking forward to space elevators, and I'm still curious to see how quantum computers will change things. In the meantime, people's lives are being molded by the increasing amount of available information presented in ways that are relevant to them. I am excited to see what the world will be like in 10, 20, 30... years. But for that to be a good time to be alive, my opinion is that peace is essential. Rising inequality makes me a little worried about that, and I'm still reading up on it.
Trying to think logically as much as possible has always been a big part of me. I honestly can't remember when I really started - from the kid who asked many questions, to the kid who asked strange questions, to the teenager who blogged really long posts, to a young man who's sometimes "too logical" - I really can't remember any single event during which I became dramatically more inclined to think logically. I have to admit though, despite my inclination, I only did become more logical after the Knowledge and Inquiry course that I took in grade 11 and 12. I still think of it as the best course I have ever taken. I've put down my thoughts sporadically on my blog, which I've linked in my profile.
I would have joined this site earlier had I heard of it sooner. The articles make great readings. Having been accused of spamming people's facebook walls when I had gotten myself into debates several times, I'm really glad that there is a place where people are serious about having discussions that reach somewhere. Thank you so much to you all for making this work.
Xin Yang
comment by mesilliac · 2012-01-22T10:44:11.777Z · LW(p) · GW(p)
Hello Less Wrong.
I've been lurking for a while and just decided to register. I have occasionally wanted to comment, but felt I should have an intuitive understanding of the community and its values before doing so.
I consider myself to have been trained in rationality from a very young age. My father was a philosophy professor, and at many points in my life I have found myself referring back to conversations with him in which he attempted to demonstrate how to think correctly. I also consider my mother to be a strong rationalist, and thus consider myself quite fortunate in my upbringing.
I came across this site after reading and enjoying Harry Potter and the Methods of Rationality by Eliezer Yudkowsky (up to ch. 77). I could say many good things about the work, but please let "thank you for the entertainment" suffice. I await any upcoming installments with mildly pleasant expectation.
I admire the basic premise of this site - that of being less wrong - and wish all others who follow the same path the best in life. I doubt I will become a prolific community member, but hope that I can contribute in some small way.
I have studied Mathematics and Physics to at least a BSc level, and also consider myself a competent programmer with interests in AI and reality modeling. I have many other interests, but prefer to keep them secret until called upon.
Thanks for reading :) Tommy / mesilliac
comment by harshhpareek · 2012-01-16T11:08:48.949Z · LW(p) · GW(p)
Hi, I've been lurking on LessWrong for quite a while now - around a year - but I saw this post and decided to comment. I hope this is useful as feedback to the admins.
I'm a 22 year old student at UT Austin. As of last Fall, I'm pursuing a PhD in Computer Science. My specialization is Machine Learning. And I'm committed to doing everything in my power to hasten the Singularity :P. I have a BTech in CS from IIT Bombay, India.
I've considered myself a rationalist for as long as I can remember. I found Less Wrong through Overcoming Bias and through Eliezer's posts about Bayes' Theorem and decision theory, which are linked around the internet. I stuck around because of the Rationality Quotes threads and the relation to the Singularity Institute. I didn't think of it as a community so much as a multiple-author blog back then. Then I came to Austin, and I started attending the weekly meetups here. We have a small group, but it's great to find a set of like-minded people, and it's an important part of my week. I've been following Less Wrong a lot more closely since then. The group also rekindled my interest in sci-fi. I bought a Kindle, and I've been reading a fair bit now, along with a healthy dose of non-fiction. I haven't been writing in the comment threads, primarily out of laziness, but I'm trying to force myself out of it. I'm currently rereading Methods of Rationality (I stopped somewhere in the middle last time), and I'm reading the Sequences on my Kindle now (so thanks to whoever converted them to MOBI!)
I was born into a pious Hindu family; the religion wore off as I became an atheist in my early teens, but I continue to be a vegetarian for moral and environmental reasons.
comment by regis · 2012-01-11T14:24:15.614Z · LW(p) · GW(p)
Hello, I'd like to keep this short; hopefully that's ok. I am 22. I live in the SF bay area and have been living here for the last 5 years. I am a self-taught computer scientist, with a bachelor's degree in a more 'creative' field. Currently I am most interested in computer vision as well as various social aspects of technology. I've been making my way through the sequences in the past couple weeks, but I've been reading the LW discussions for about a year now.
comment by macronencer · 2012-01-11T12:50:47.330Z · LW(p) · GW(p)
Greetings from Southampton, UK.
Male, 46, Maths graduate, software developer, career in transitional state (moving into music composition - slowly!).
Until about the age of 30 I didn't really make an effort to identify my own biases and irrational beliefs, and I had a lot of unsupported beliefs in my mind. I've been gradually correcting this through online reading and thinking, but I feel that until recently I lacked one of the essential elements of wisdom: clarity of focus. I'm hoping to learn that now.
Since I was divorced in 2004, I've increasingly become someone who would self-identify as a Transhumanist, and I used to hang out with an H+ group in London for a while (UKTA).
I found LW very recently, when I was researching an online probability puzzle and I needed to refresh my memory on the Bayesian approach. I now regard this as a very happy accident and I look forward to a pleasant few months of digging deeply into LW and teaching myself to become as rational as possible.
I gobbled up HP:MoR in just a few days, losing significant sleep while doing so: it's extremely addictive :) I've not read the sequences yet but they look interesting, and it strikes me that the "titles, then summaries, then contents" approach to library conquest mentioned in the fanfic would be a good idea in this context.
Happy to be here!
comment by geebee2 · 2012-01-05T23:56:20.616Z · LW(p) · GW(p)
Hi, I'm 53 years old, from Gloucester, UK.
I work from home over the internet running IT systems.
I studied Maths for 2 years at Cambridge, then Computer Science in my 3rd year.
I came across this site after becoming interested in the trial of Amanda Knox and Raffaele Sollecito ( just subsequent to their acquittal in October 2011 ).
I made an analysis of the Massei report ( http://massei-report-analysis.wikispaces.com/ ) and concluded that the defence case was much more probable than the prosecution case.
I'm interested in a rational basis for assessing guilt in criminal cases. My idea ( as above ) is to compare the relative likelihood of each part of the defence and prosecution case, but this was perhaps not a good example, as I found that there was no credible, objective evidence against the defendants after looking closely at the evidence.
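The comparison described above can be sketched numerically. This is only an illustrative toy, not the actual Massei-report analysis: the evidence items and likelihood ratios below are invented, and real evidence is rarely independent.

```python
# Toy sketch of comparing a prosecution case against a defence case by
# multiplying likelihood ratios for (assumed independent) pieces of
# evidence. All numbers here are invented for illustration only.

def posterior_odds(prior_odds, likelihood_ratios):
    """Return the prior odds times the product of the likelihood ratios,
    where each ratio is P(evidence | prosecution) / P(evidence | defence)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Three hypothetical pieces of evidence: one mildly favouring the defence,
# one favouring the prosecution, one strongly favouring the defence.
lrs = [0.5, 2.0, 0.1]
odds = posterior_odds(1.0, lrs)        # start from even prior odds
probability_of_guilt = odds / (1 + odds)
print(odds, probability_of_guilt)      # 0.1 and roughly 0.09
```

Real cases are harder - the pieces of evidence are rarely independent, and the ratios themselves are contested - but the structure of the calculation is the same.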
Maybe we could look at the recent conviction of Gary Dobson and David Norris. I would start from the position that they are probably guilty, but this is before examining in any detail the evidence against them, so this is based mainly on a general belief that UK courts do a fairly good job. The questions to be raised there would be whether we can really trust the forensic evidence, given that the police have powerful incentives to convict. And how do we eliminate prejudice against these unpleasant people ( both were clearly vile racists whether or not they committed the murder ).
Replies from: orthonormal↑ comment by orthonormal · 2012-01-06T16:17:42.334Z · LW(p) · GW(p)
Maybe we could look at the recent conviction of Gary Dobson and David Norris.
Well, I'm not touching that case with a ten-foot pole. But if you picked a less politically charged case, I'm sure there might be interesting things to say!
comment by faul_sname · 2012-01-02T08:21:16.076Z · LW(p) · GW(p)
Hello,
I've been reading LessWrong for a year or so, and made an account about two months ago to comment on the survey. Seeing as I have continued to comment, I suppose that I should introduce myself.
I am an 18 year old college student, majoring in neuroscience. I don't affiliate politically, though I do have opinions on specific policy issues. In particular, I think that we should allow more experimental policies if the potential risks are not too high, perhaps testing them locally.
I don't remember exactly how I came to start reading LessWrong, but I have suspicion that it may have been through the xkcd forums. I was drawn here by the sequences, and I stay because of the high quality of discussions on this site.
comment by Rukiedor · 2011-12-31T06:30:38.885Z · LW(p) · GW(p)
Hello, I've been lurking around Less Wrong for several months, mostly reading through the sequences. I especially enjoyed the ones on free will and happiness theory.
I finally created an account a week or so ago so that I could express interest in a Salt Lake City meetup. And now here I am introducing myself.
I’m a thirty year old white male living in Salt Lake City. I write point of sale software by day, and video games by night.
I think my primary motivation into rationality was my upbringing. I was raised in a very religious, and rather unhealthy home. That coupled with the facts that the LDS culture isn’t particularly friendly to nerds, and that I seemed to believe in a different God than most of my fellow churchgoers led to me being told all the time that I was wrong. So, the only way to ever be right was to painstakingly trace my beliefs back to original assumptions that anyone would agree with.
Less Wrong is actually the first source I’ve found on rational thinking, so my self taught methods seem a bit sloppy next to the elegance of the thinking that goes on here.
My big interest, the thing that drives me, is art. You know the feeling you get when you hear an amazing piece of music? Or see a fantastic movie? Or play an incredible game? I want to understand that, I want to know what it does to your brain, and how I could reproduce it.
Anyway, I look forward to being a part of the community. I probably won’t comment much unfortunately, still have some biases that tend to get in the way of that, but I’ll be here lurking, and watching.
Replies from: lessdazed
comment by monkeywicked · 2012-06-25T21:23:25.365Z · LW(p) · GW(p)
Hi.
I'm a fiction writer, and while I strive towards rationalism in my daily life, I can also appreciate many non-rational things: nonsensical mythologies, perverse human behaviors, and the many dramas and tragedies of people behaving irrationally. My criteria for value often relate to how complex and stimulating I find something... not necessarily how accurate or true it may be. I can take pleasure in ridiculous pseudo-science almost as much as actual science, enjoy a pop-science theory as much as deep epistemology, and find a hopelessly misguided person more compelling and sympathetic than a great rationalist.
However, conveniently, it often turns out that the most interesting stories, the most mind-bending concepts, and the most impressive acts of creativity are born of rationalist thinking rather than pure whimsy. And so I can have my cake and eat it too, because the posts at LW are as likely to create the sensation of mental expansiveness that I associate with great fiction (or, I suspect, compelling theology) while also attempting to be, uh, you know, less wrong.
So it's fun to be here. And if it helps me think and experience the world more clearly and critically... that's gravy.
Recently I've been working on several sci-fi writing projects that involve topics discussed at LW. One is about the development of AI and one about the many-worlds interpretation. Neither project is 100% "hard sci-fi," but I would ideally like them to be not totally stupid... since I think plausibility and accuracy often produce narrative interest--even if plausibility and accuracy are not, in and of themselves, objectives. After doing a lot of research on the topics, I still have many questions. It seems to me that the LW community might be the best place to get clear, smart, informed answers in layman's terms.
I'll fire away with a couple questions and see what happens. If this works out, I'll probably have a lot more...
(I wasn't sure if these ought to be comments at "And the Winner Is... Many-Worlds!" If so, I can re-post there.)
In the MWI it's often suggested that anything that could have happened will have happened. Thus, quantum immortality, etc. But this often puzzles me. Just because there are infinite worlds, why should there be infinite diversity of worlds? You could easily create infinite worlds by simply moving a single atom around to an infinite number of locations... but those worlds would be essentially identical. If Everett's chance of surviving each year is 100 - 1% for every year he lives, then wouldn't that mean his chance of being dead at 100 is 100%? Wouldn't that mean he's dead in all worlds? If you send an infinite number of photons through the double slit, their possible locations on the wall are still extremely limited. Couldn't the many worlds of the MWI resemble infinite photons being sent through a double-slit experiment? Infinite in number, but extremely constrained in result?
Is it possible, within the MWI, to have a situation where all but one world experiences some event? E.g. event X happens at time 2 in world 2, time 3 in world 3 and so on so that X appears at some time in every world except world 1. Now say that X is a Vacuum Decay event... wouldn't that mean it is possible to only have ONE viable, interesting world even within the MWI?
David Deutsch, in The Fabric of Reality, claims that a quantum computer running Shor's Algorithm would be borrowing computational power from parallel worlds since there isn't enough computational power in all of our universe to run Shor's Algorithm. Does anyone know what would be happening in the worlds that the computer is borrowing the computational power from? Would those worlds also have to have identical computers running Shor's Algorithm? Or is there some more mysterious way in which a quantum computer can borrow computational power from other worlds?
Is there any hypothetical, theoretical, or even vaguely plausible way for an intelligent being in one world to gain information about the other worlds in the MWI? Interference takes place constantly between particles in our world and other worlds; is there any way for this interference to be turned into communication, or at least advanced speculation about the other worlds? Or is such a notion pure fantasy?
Thanks in advance! If anyone can answer any of these or redirect me to resources inside/outside of LW, I'd be grateful.
Cheers,
MW
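A quick numerical note on the survival arithmetic in the first question above. On one reading of the ambiguous "100 - 1%" - a fixed, independent 99% chance of surviving each year - the cumulative survival probability shrinks geometrically but stays strictly positive for any finite number of years, which is the intuition quantum-immortality arguments lean on:

```python
# Sketch of the survival arithmetic, assuming a fixed independent 99%
# chance of surviving each year (one reading of the ambiguous "100 - 1%").
# The cumulative probability shrinks geometrically but never reaches
# zero, so some branch always survives.
def survival_probability(p_per_year, years):
    prob = 1.0
    for _ in range(years):
        prob *= p_per_year
    return prob

p100 = survival_probability(0.99, 100)
print(p100)  # about 0.366 -- far from zero
```

On the other reading - a survival chance that drops by one percentage point each year, hitting 0% at year 100 - the product does reach zero, which may be what the question intends; the two readings give very different answers.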
Replies from: pragmatist, Zack_M_Davis, OrphanWilde, shminux↑ comment by pragmatist · 2012-06-25T22:04:07.110Z · LW(p) · GW(p)
Welcome to LessWrong! Here are some answers to your questions about MWI:
The space of possibilities in MWI is given by the configuration space of all the particles in the universe. The configuration space consists of every possible arrangement of those particles in physical space. So if a situation can be realized by rearranging the particles, then it is possible according to MWI. There is a slight caveat here, though. Strictly speaking, the only possibilities that are realized correspond to points in configuration space that are, at some point in time, assigned non-zero wavefunction amplitude. There is no requirement that, for an arbitrary initial condition and a finite period of time, every point in configuration space must have non-zero amplitude at some point during that period. Anyway, thinking in terms of worlds is actually a recipe for confusion when it comes to MWI, although at some level it may be unavoidable. The important thing to realize is that in MWI "worlds" aren't fundamental entities. The fundamental object is the wavefunction, and "worlds" are imprecise emergent patterns. Think of "worlds" in MWI the same way you think of "blobs" when you spill some ink. How much ink does there need to be in a particular region before you'd say there's a blob there? How do you count the number of blobs? These are all vague questions.
MWI does not play nicely with quantum field theory. The whole notion of a false vacuum tunneling into a true vacuum (which, I presume, is what you mean by vacuum decay) only makes sense in the context of QFT. The configuration space of MWI is constructed by considering all the arrangements of a fixed number of particles. So particle number is constant across all worlds and all times in configuration space. Unlike in QFT, particles can't be created or destroyed. So the configuration space of a zero-particle world would be trivial, a single point. If you have more than one particle then all the worlds would have to have more than one particle. None of them would be non-viable or uninteresting. Perhaps it is possible to construct a version of MWI that is compatible with QFT, but I haven't seen such a construction yet.
Deutsch's version of MWI (at least at the time he wrote that book) is different from the form of MWI advocated in the sequences. According to the latter, "world-splitting" is just decoherence, the interaction of a quantum system with its environment. But a quantum computer will not work if it decoheres. So according to this version of MWI, in order for a quantum computer to work, we need to make sure it doesn't split into different worlds. Instead, we would have a quantum computer in a superposed state within a single world, which I guess you can think of as many overlapping and interfering computers embedded in a single larger world. So you're not really harnessing the computational power of other worlds.
On an appropriate conception of "worlds", interference does not take place between particles in our world and other worlds. Interference effects are an indication of superposition in our world, a sign of a quantum system that hasn't decohered. Decoherence destroys interference. It is possible for there to be interference between full-fledged worlds (separate branches of a wave function large enough to contain human beings), but it is astronomically unlikely. You can communicate with other worlds trivially, as long as those worlds are ones which will split off from your world in the future. But otherwise, you're out of luck.
↑ comment by monkeywicked · 2012-06-27T21:00:46.059Z · LW(p) · GW(p)
Thanks for the answers, Pragmatist. I'm still fairly confused. But I'll read more in the sequence and elsewhere. I appreciate the effort/time.
↑ comment by Zack_M_Davis · 2012-06-25T21:40:19.095Z · LW(p) · GW(p)
However, conveniently, it often turns out that the most interesting stories, the most mind-bending concepts, and the most impressive acts of creativity are born of rationalist thinking rather than pure whimsy.
Yes; I like Steven Kaas's explanation:
Truth is more interesting than fiction because it's connected to a larger body of canon.
↑ comment by OrphanWilde · 2012-06-26T15:56:52.704Z · LW(p) · GW(p)
One thing about the MWI which confused me at first -
The MWI is not a single interpretation, contrary to the name. There are several different versions of MWI floating around.
I believe the original interpretation had the many worlds existing, but generally independent from one another; a single world represents multiple possible states, but as soon as a state is determined (whatever you want to call this process), the world becomes independent, and ceases to interact with the possible states which weren't realized in that world. (Although different books will tell you different things, this is, as far as I've been able to divine, the original one.) In the original version, worlds split, permanently, from one another. So there would be no way to communicate with them. I believe this is the version Yudkowsky follows.
I've seen references to arguments that the fifth-dimensional variant (where worlds coexist and overlap, implying that some communication is possible) is impossible, but I've never seen the arguments themselves, in spite of looking.
↑ comment by Shmi (shminux) · 2012-06-25T22:08:37.276Z · LW(p) · GW(p)
Is there any hypothetical, theoretical, or even vaguely plausible way for an intelligent being in one world to gain information about the other worlds in the MWI? Interference takes place constantly between particles in our world and other worlds; is there any way for this interference to be turned into communication, or at least advanced speculation about the other worlds?
I don't know of any models that propose a mechanism for such a communication (assuming you mean actually sending messages back and forth). A model like that would move the MWI from the realm of interpretations back into something testable. It would be way cool, of course, but don't hold your breath :)
comment by [deleted] · 2012-06-20T08:32:07.160Z · LW(p) · GW(p)
Hey LW community. I'm an aspiring rationalist from the Bay Area, in CA, 15 years old.
I found out about this site from Harry Potter and the Methods of Rationality, and after reading some of the discussions, I decided to become a member of the community.
I have never really been religious at any time of my life. I dismissed the idea of any kind of god as fiction around the same age you'd find out that Santa isn't real. My family has never been very religious at all, and I didn't even find out they were agnostic until recently. That said, I would consider myself an atheist, because I don't have any doubts that there is no god.
I look forward to being a part of this community, and learning more about rationalism.
Replies from: Nisan↑ comment by Nisan · 2012-06-20T09:00:16.273Z · LW(p) · GW(p)
Welcome! There are regular meetups in Mountain View and Berkeley. Feel free to join a mailing list and attend!
comment by [deleted] · 2012-06-07T19:58:29.274Z · LW(p) · GW(p)
Hey guys. My name is Michael and I'm a business student living in Little Rock, Arkansas. I've recently become fascinated by the work of SI and I'm interested in participating in any way I can. I've considered myself a rationalist ever since I abandoned religion in my teens. However, lately I realized I need to interact with other rationalists in order to further my development. I'm considering trying to attract more Less Wrong members from where I live. If anybody has any advice concerning that, I'd be happy to hear it.
Replies from: lessdazed, steven0461↑ comment by lessdazed · 2012-06-07T23:09:31.611Z · LW(p) · GW(p)
However lately I realized I need to interact with other rationalists in order to further my development.
1) What made you believe this?
2) At present, what do you think are the best reasons for believing this?
Replies from: None↑ comment by [deleted] · 2012-06-07T23:43:59.415Z · LW(p) · GW(p)
1.) Well, I based this on observing that I learn a hell of a lot more from interacting with people smarter than me than I do from reading or studying.
2.) None of us are perfectly rational. Other people can often spot fallacies that any one of us would miss.
↑ comment by steven0461 · 2012-06-07T20:13:53.378Z · LW(p) · GW(p)
Welcome to LessWrong! It sounds like you may want to organize a meetup in your town if there isn't one already.
Replies from: None
comment by CWG · 2012-05-25T09:23:20.383Z · LW(p) · GW(p)
Greetings! I joined a little while ago under my usual username, the one I use everywhere on the web. Then I realized: this is very public, and I'd rather not worry about potential clients or employers drawing conclusions from what I write about my akrasia, poor planning, depression, or anything like that. So here's the version of me that's slightly less connected to my real-life identity.
Very briefly:
- I feel pretty much at home here.
- Rationality is awesome.
- HP:MOR is not only awesome, it's also my favorite Harry Potter book by a long way.
- Rationality has not always helped me in having happy relationships. But sometimes it has.
- I'm a former Christian, and though it had many benefits, the useful part of what I learned in 9 years could be compressed into a part-time course of a few months, without the superstitious stuff.
- I struggle with planning and focus - I often have no sense of time.
- I could probably be described with terms like akrasia, ADD and executive dysfunction, and maybe even Aspergers, aka high-functioning autistic. I'm not throwing the terms around lightly - a counselor suggested I had ADD (and it makes sense) and a number of people in my family (grandfather, brother, nephew) show many of the signs of high-functioning autism.
- I work with a non-profit that I'm passionate about, but I want to be much more effective.
- I have a discussion question I want to post about project management tools, but I don't have the points. I'd just passed the 20 points needed on my old account, but I'm back to zero as "CWG". Upvotes will make me smile :-).
comment by Kindly · 2012-05-12T22:30:24.772Z · LW(p) · GW(p)
Hello!
I'm a graduate student in mathematics and came across Less Wrong by, uh, Googling "Bayes' Theorem". I've been putting off creating an account for the past month or so, because I've had absolutely no free time on my hands. Now that the semester's winding down, I've decided to try it out, although I may end up disappearing once things get going again in the fall.
Out of the posts I've read on LW so far, I'm the most impressed by the happiness and self-awareness material -- but also intrigued by the posts on math, especially probability, and will hopefully have something to contribute to those (because, well, probability is what I do). And then there's HPMOR.
We'll see what I end up doing now that I have the power to insert permanent impressions of my thoughts into the content of this website.
Replies from: beoShaffer↑ comment by beoShaffer · 2012-05-12T22:53:44.016Z · LW(p) · GW(p)
Hello Kindly! Were you taking a probability class or just interested in Bayes?
Replies from: Kindly↑ comment by Kindly · 2012-05-12T23:26:55.129Z · LW(p) · GW(p)
A bit of both. I had known about Bayes' theorem for a while, as a not-terribly-exciting mathematical statement. But I had a few discussions about the philosophy of it, if you will, while taking a class on information theory. That sort of thing is interesting to read about, and that's how I ended up typing it into Google.
By far the most useful introduction to Bayes' theorem I've read, though, was in this short story, which I found later. I don't often use Bayes' theorem, but when I do, I prefer to do the calculation in my head, because it impresses people. This is much easier to do as an odds calculation, the way Brennan does it in the story (actually, the odds calculation is even easier than what Brennan does -- keeping track of the sixteenths is excessive). Somehow this method didn't occur to me until I read the story and reverse-engineered it. Now I think of the story every time I use it.
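The odds-form calculation Kindly alludes to can be sketched in a few lines. This is a minimal illustration with made-up numbers (it is not from the comment or the linked story): multiply the prior odds by the likelihood ratio of the evidence, then convert back to a probability if needed.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Hypothetical example: a hypothesis with prior odds of 1:4,
# and evidence that is 3 times as likely if the hypothesis is true.
prior = 1 / 4            # odds of 1:4
lr = 3.0                 # likelihood ratio P(evidence|H) / P(evidence|~H)
post = posterior_odds(prior, lr)    # 0.75, i.e. odds of 3:4

# Converting odds back to a probability: p = odds / (1 + odds)
probability = post / (1 + post)     # 3/7, about 0.43
```

The appeal of the odds form is exactly what the comment describes: a single multiplication is easy to do in your head, with no need to track normalizing denominators until the very end.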
comment by Schwarmerei · 2012-05-08T18:12:39.303Z · LW(p) · GW(p)
Hi,
I am the first in a family of budding rationalists to jump in to the LessWrong waters. I got my start as a Rationalist when I was born and was influenced very heavily through my childhood by my parents' endless boxes of hard sci-fi and old school fantasy. Special mention goes to The World of Null-A (and its sequel) in introducing the notions of a worldview being 'false to facts', and a technique the main character uses (the "cortical-thalamic pause") which is very similar to "I notice that I am confused." I read everything avidly and have a mountain of books on my shelves dealing with neuroscience and cognitive biases.
The fam: I'm first in a family of six kids who have always been confused by the illogic and muddled thinking of our peers. We've all grown up strongly under the sway of the aforementioned sci-fi/fantasy collection and like nothing better than to debate topics and point out each other's fallacies or gaps in logic. We are all slightly obsessed with HPMoR (I being the only one to have read Overcoming Bias / LW before the story's inception) and I personally find that Harry's thinking often mirrors my own to an eerie degree of similarity.
Several of us are also very interested in reforming education and are forming a tech company to that end (I'm a programmer / comp jack of all trades, and my almost-twin bro is a graphic designer*). I plan on diving into the sequences more rigorously in the upcoming months, as I'd like to integrate rationalist principles into the basic fiber of the products we produce (self guided, community assisted learning software).
(* While not all actively involved in the company, all six of us – including the girls and the 13 year old – can program.)
comment by Drewsmithyman · 2012-05-03T18:34:31.887Z · LW(p) · GW(p)
Hello community. My name is Drew Smithyman and I am an executive assistant at CFAR. I have not been with them long, nor have I been reading the sequences very long, but I intend to continue doing both.
I need to post a discussion thread about some interviews we need to do - could people please do me the favor of upvoting this comment twice so that I may start one as soon as possible?
Thank you.
comment by borntowin · 2012-04-07T07:11:50.381Z · LW(p) · GW(p)
Hello there, people of LessWrong. I'm a 24-year-old dude from a small country called Romania who has been reading stuff on this site since 2010, when Luke Muehlhauser started linking here. I'm a member of Mensa and have a B.A. in Management.
I have to admit that there are more things that interest me than there is time to study them, so I can't really say I'm an expert in anything; I just know a lot of things better than most other people know them. That's not very impressive, I guess, but I hope that five years from now there will be at least one thing I know or do at an expert level.
My plan is to start my own company in the next few years, and I think I know how to make politics actually work. I love defining rationality as winning, as you guys do, and I think that I win more now, after reading articles on this website. Hopefully with time I might be able to contribute to the community too; there are some things that might just make LessWrong better.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-04-26T22:10:59.498Z · LW(p) · GW(p)
Hello and welcome!
I think that I win more now, after reading articles on this website.
I'd love to see a post on how specifically you've been able to win more. Hearing about how people use the info here is always enjoyable.
comment by [deleted] · 2012-01-17T00:55:56.478Z · LW(p) · GW(p)
Hey everyone,
I'm a 20 year old student of Serbian literature (from Serbia). I found this site while browsing through some math blogs and it seems very nice.
About me: Currently my main interest is writing short stories. I view them as arranging words so they appeal to my own emotions, intuition, subconscious, what have you. I also like mathematics, and I like to explore relations and find out new rules between numbers, lines, etc., although it sometimes bores me because my imagination has to stay strictly inside the boundaries of logic there, while with literature I can do anything that pleases my taste and personal desires. Both are manifestations of human imagination (just like music, or drawing), but literature could be called 'dirty' because it is not stripped of our colorful humanness. I was introduced to rationality when I was 15, as I started programming. Before that I liked to play alone and construct giant slides for marbles, and I liked to draw maps. My mother taught me to personify objects when I was a kid - "Do not tear the flowers, it hurts them!" - and that led me to think that such things, as well as houses, etc., have human properties. A house would have an intention, a toy would be sad if we left it on the floor, cars would be happy or angry, etc. I know that such a worldview is not very helpful or practical, but it sure is fun to see things like this sometimes! What usually triggers it is when I enjoy purely sensual activities where meaning and logic are excluded (sex, e.g.). Walking out into the dark with such an attitude to reality can also be scary sometimes - everything is vibrant, full of life, emotion, humanness - I get oversensitive, I guess, like an unknowing little animal.
Although I have tried pushing myself in more practical directions, it looks like I naturally turn back to my art. So I decided to hope for writing some stories/books that will influence people in some peculiar way, depending on how they react to such ideas. I have felt some guilt because of my artistic attitudes before - "You should be a programmer, a physicist, a mathematician!" I thought to myself, because I believed that picking a job where pure rational thought is needed is what every man should strive for, while unnecessary things such as artistic tendencies (with all their quirks) should be left behind like some illness that people overcome when they realize how reality really works. But I act on behalf of my instincts - and if I try not to, I feel sad. So I listen to them, mostly.
One question I have been turning around in my mind recently - are there limitations of mathematical explorations? Or will things keep growing, branching and getting more complex year after year? Is a mathematician someone who is happy if he can grab a bucketful of water out of the ocean? The world seems amazingly broad to me and my work compared to it as just a pile of symbols which might trigger thoughts another system has learnt to associate with them. However, no matter how big and cold things are, I think there is no reason to be scared. Usually, sadness would come from denying the obvious truth. To live is a miracle indeed (sorry if I got cheesy by the end ;).
Replies from: None, TimS, Multiheaded↑ comment by TimS · 2012-01-17T01:21:59.051Z · LW(p) · GW(p)
Welcome to LessWrong. Our goal is to improve ourselves. Most importantly, we are trying to learn how to avoid believing things that are false. This is harder than it sounds, like someone trying to figure out whether she likes a painting because it appeals to her artistic sensibilities, or simply because her teachers unconsciously favored the painting when teaching her. We certainly don't think that you must abandon creativity or emotion in order to improve yourself. Maybe your self-improvement will help you express your creativity more effectively.
Anyway, we love questions and debate. Feel free to post in the current open thread.
Note: Misleading sentence removed.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-01-17T01:32:44.971Z · LW(p) · GW(p)
Often, that means learning what true things to believe.
You don't learn which true things to believe or which false things to disbelieve. You learn (how to figure out) which things are true or false.
Replies from: TimS↑ comment by Multiheaded · 2012-01-17T10:28:19.468Z · LW(p) · GW(p)
[deleted]
Oops. Looks like Eliezer's doing some night-and-fog again.
comment by Chriswaterguy · 2012-01-08T10:14:21.930Z · LW(p) · GW(p)
I'm 41, working on a wiki project for sustainability and development, which I love (and part-time on a related project which I like and actually get paid for). I use the same username everywhere, so if you're curious, you won't have trouble finding the wiki project.
I'm a one-time evangelical Christian. I think it was emotional damage from my upbringing that made me frightened to let go of that, and I stayed a believer for 9 years, starting in my late teens. I took it extremely seriously, and there were good things about that. But with hindsight, I would direct people to other places for their personal growth than becoming a believer. Later, just a few years ago, I did the Landmark Forum, which was very powerful and mostly very positive, though I wouldn't recommend that as a first step in working on personal development, unless you're already pretty successful and mature. I'm also a big fan of Nonviolent Communication, and I'd recommend that to anyone.
I learned about Less Wrong a year ago (from someone else on the wiki project) and loved it. I've been meaning to join, but the thing that prompted me now is that I need help, in the form of accountability, and this seems like a good place.
I do a lot of work, but I find myself distracted from the work I most need to do. The persistence of the problem leads me to carry out a "lifestyle experiment" for the next 3 weeks. I'm calling it my "3 week Serious Focus experiment", and the key ingredients are:
- Being sensible: doing stuff that I need to do, that will have a big positive effect on my life, before doing other stuff, no matter how good or enticing
- Being accountable: I'm posting here, and will do so on Facebook and G+, and will tell friends In Real Life.
- Regarding it as an experiment: I'm only committing myself to 10-30 Jan 2012, so I can play at being hardline with myself, like it's a bootcamp. I can extend or make new decisions at the end, but the time limit means it doesn't feel like a trap that I'm desperate to escape from.
- A focus on (a) livelihood - the stuff I'm already getting paid for, and (b) taking the wiki project to the next level, i.e. strategic work before maintenance or putting out fires.
The rules for the Serious Focus experiment are:
- Plan each night before bed - up to 6 items to work on the following day
- 3 hours solid work on the one or two top items (livelihood and strategy) before looking at email (except perhaps work related - I have email filters for that) or at work-related social media, or at messages I get on the wiki site. (Only exception is if it's so urgent that a colleague on the wiki project calls or IMs me - which is very rare.)
- Any work I'm tempted to do on secondary things (not among the 6 items, and taking more than 5 min) to be written down and put aside until the 3 solid hours are done.
- After the 3 hours are done, I loosen up a bit, but still focus on getting those items done.
- All items must be done before checking personal social media at all. (I'm allowed to post any time, but not look at replies or other people's statuses.) If I don't get all 6 items finished, that's ok - going without Facebook will do me good, even if I don't get a chance to check it for the whole 3 weeks!
I'm going to start now, but I'm making my official start date 2 days away, so there's time for feedback on the plan and to adjust it if needed, before I launch the experiment.
Glad to join you all at Less Wrong!
Replies from: orthonormal, macronencer↑ comment by orthonormal · 2012-01-18T05:27:56.071Z · LW(p) · GW(p)
How's your Serious Focus experiment going?
↑ comment by macronencer · 2012-01-11T13:12:36.094Z · LW(p) · GW(p)
I like this experiment! Maybe I'll do something similar myself; I'll be interested to hear how it turns out for you.
One of the major difficulties I have with the way my mind works is that although it's possible to identify the causal link between actions taken now and the results they bring about in the long-term future, unfortunately it's very hard to keep this connection in the forefront of the mind and take daily actions that are motivated by it. In other words, the problem of delayed gratification (I haven't read the Sequences yet but I think there's something in there about that). Have you ever taken a look at David Allen's GTD system? I've found it useful because it prescribes a cycle of doing/reviewing, which helps keep you on track even when the long-term objectives may be shifting.
Your reference to "work-related social media" is telling. I'm beginning to work in media music where networking is vitally important, and I am finding that the rationalisation of "it's important for work" significantly exacerbates the distraction caused by the temptation of such things as Facebook.
comment by matheist · 2012-01-08T02:08:07.465Z · LW(p) · GW(p)
I discovered this community through HP:MoR; I joined the discussion because there was a comment about the work which I wished to make. I've started reading the articles as well and am enjoying doing so.
Looking forward to all the shiny ideas!
comment by hesperidia · 2012-01-07T22:45:02.438Z · LW(p) · GW(p)
Hi,
I recently found myself making a rather impassioned defense of how living logically does not preclude living morally. As I have found monitoring my actions to be more reliable than introspection, this was a much better confirmation of "I think this is the right thing to do" than my saying to myself that I think this is the right thing to do.
Other proximate causes include TVTropes via Methods of Rationality (obviously), one of my acquaintances linking several articles in succession from this site, and the fact that I find myself extremely prone to hero-worshipping anyone who happens to be more intelligent than I am.
I have historically had some hang-ups around the concept of "right" and "true" and am currently attempting to disentangle my rather weird upbringing (and its non-religious but nevertheless absurd repurposing of the concept of "not being wrong") from the practice of matching map to territory.
Meanwhile I am an 18-year-old psychology/biology major in college who enjoys actually reading from scientific journals on subjects that include evolutionary psychology and theories of autism spectrum disorders.
Personally, I have some unusual experiences involving actually caring about large numbers of people, the topic of which I am not sure I want to broach immediately. (That, however, is why I'm excited about transhumanism. If my mind is augmented then I can coherently think about large numbers of people without either compressing or ignoring them. And my day won't be ruined if I happen to accidentally read yet another news story about hundreds of thousands of people dying. Suffice it to say, screw the 24-hour news cycle, I have to remain ignorant of most news in general for my own sanity - if you guys have anything for that, please let me know.)
Replies from: MixedNuts↑ comment by MixedNuts · 2012-01-07T22:50:32.106Z · LW(p) · GW(p)
How do you feel about news about large numbers of people being saved?
Replies from: hesperidia↑ comment by hesperidia · 2012-01-07T23:16:41.711Z · LW(p) · GW(p)
I seem to underweight such news, mostly because of the difficulty of speculating on what "would have happened", although another contributory factor is that such news is rare and frequently suffixed with a number of disclaimers about how it could still happen somewhere else, etc. etc. (Yes, I am glad that the Fukushima nuclear reactor didn't end up exploding; but other reactors in other parts of the world are at least as old and prone to breaking down if they're looked at wrong.)
Replies from: MixedNuts↑ comment by MixedNuts · 2012-01-07T23:19:11.233Z · LW(p) · GW(p)
I was thinking in terms of cures or vaccines for diseases. So we know a lot of people died, and will continue to die from other diseases, but these bits aren't new, while the reduction in death toll is. (Bonus: they're also a lot less rare than I thought.)
Replies from: hesperidia↑ comment by hesperidia · 2012-01-08T00:36:47.594Z · LW(p) · GW(p)
Hm. Well, this I have not thought about in detail, but my immediate emotional reaction is "so what?" which is not really helpful to me on any count.
This is probably exacerbated by the aforementioned difficulty in determining "could have beens". I will sit on this question overnight and see what happens.
comment by scmbradley · 2012-01-03T19:11:38.519Z · LW(p) · GW(p)
Hi. I'll mostly be making snarky comments on decision theory related posts.
Replies from: windmil, orthonormal↑ comment by orthonormal · 2012-01-06T16:26:17.718Z · LW(p) · GW(p)
That's fairly specific. Do you have a particular viewpoint on decision theory?
Replies from: scmbradley↑ comment by scmbradley · 2012-01-08T21:43:45.478Z · LW(p) · GW(p)
I have lots of particular views and some general views on decision theory. I picked on decision theory posts because it's something I know something about. I know less about some of the other things that crop up on this site…
comment by [deleted] · 2011-12-31T02:59:04.616Z · LW(p) · GW(p)
Hi! I'm Eric, a freshman at UC Berkeley. I've been lurking on Overcoming Bias/Less Wrong for a long time.
I had been reading OB before LW existed; I don't even remember when I started reading OB (maybe even before high school!). It's too long ago for me to remember clearly, but I think I found OB while I was reading about transhumanism, which I was very interested in. I still agree with the ideas of transhumanism, and I guess I would still identify myself as a transhumanist, but I don't actively read about it much anymore. I read LW less than I used to, but I'm starting to read it more now; LW and OB are basically the only transhumanist-related blogs that I still read.
I guess I like this site because I like things that are interesting and make me think; there are a lot of good and interesting ideas floating around here, and the quality of the posts and comments is excellent. I don't think I've encountered a single other site with such good comments. I like to say that I'm too curious and have too many interests; I spend a lot of time reading about things that interest me and I don't know what I want to do.
Why am I introducing myself and no longer lurking after all these years? I used to be really bad at expressing myself in writing: I wrote slowly and badly, and reading my old blog comments makes me cringe. I was good at reading, and at producing grammatically correct sentences, but I was terrible at actually using written language to get my ideas across. For example, just two years ago (in 11th grade), I scored 80/80 on the Writing Multiple Choice section of the SAT, but only 7/12 on the essay! Now, though, I find that I can write just fine (and I have no idea why this suddenly happened). So I'm finally introducing myself because I can finally write decent posts and comments :)
A quick question for more experienced LW commenters: I posted a comment (http://lesswrong.com/lw/8nr/intuitive_explanation_of_solomonoff_induction/5k5b) on an old, non-promoted post, and it didn't show up in the recent comments section. As a result, no one seems to have even seen it, and I don't know whether my addition was useful or not. How can I make these kinds of contributions visible in the future?
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-12-31T04:02:40.197Z · LW(p) · GW(p)
I posted a comment (http://lesswrong.com/lw/8nr/intuitive_explanation_of_solomonoff_induction/5k5b) on an old, non-promoted post, and it didn't show up in the recent comments section
It's currently non-intuitive, but the "recent comments" for the main section appear only to those who've selected to see the main section, and the "recent comments" for the discussion appear only to those who've selected the discussion. This is one of the silliest aspects of this site's design.
Replies from: daenerys↑ comment by daenerys · 2011-12-31T23:54:32.758Z · LW(p) · GW(p)
It's currently non-intuitive, but the "recent comments" for the main section appear only to those who've selected to see the main section, and the "recent comments" for the discussion appear only to those who've selected the discussion.
Oh wow! I have been on this site for almost 2 months and have not realized this until you commented on it. Thanks for mentioning!
(Also, yes, this is a very counter-intuitive and difficult interface feature that should be changed.)
comment by coltw · 2011-12-29T06:38:03.857Z · LW(p) · GW(p)
Hey, okay, so, I'm Colt. 20, white, male, pansexual, poly, Oklahoma. What a mix, right? I'm a sophomore in college majoring in Computer Engineering and minoring in Cognitive Science, both of which are very interesting to me. I grew up with computers and read a lot of sci-fi when I was younger (and still do) which I attribute to making me who I am today. A lot of Cory Doctorow's work, along with Time Enough for Love by Heinlein and Vinge's A Fire Upon the Deep are some of my favorites. I found HPMoR a while back and eventually found my way here, maybe last summer or so. I've been reading the sequences voraciously, and really need to start applying some of it to my life. :P
Like I said, I like computers, and I'm really interested in the mind and technology and programming. I'd like to work with new challenges and maybe on some AI (with enough mathematical rigor, of course) when I get out of college, but time will tell.
comment by [deleted] · 2013-03-04T15:57:45.481Z · LW(p) · GW(p)
I'm a new member, and I want to say hello to this awesome community. I was led to this website after encountering a few people who remarked that many of my opinions on a wide range of subjects are astonishingly similar to most of the insights that have been shared on LessWrong so far. Robert Aumann is right -- rational agents cannot agree to disagree. ;-)
I am sure there are many things I can learn from other LW readers, and I look forward to participating in the discussions whenever my busy schedule allows me to. I would also like to post something that I wrote quite some time ago, so I'll do the shameless thing and ask for upvotes -- please kindly upvote this comment so that I will have enough karma points to make a post!
Replies from: MugaSofer
comment by SamuelHirsch · 2012-08-13T07:20:57.890Z · LW(p) · GW(p)
Hello!
I am joining this site as a college senior in Engineering Science (most of my work has biomedical applications). I am 22 years old and, despite my technical education, have less online presence (and savvy) than my aunt's dog. As a result, I apologize in advance for anything improper I may do or cause.
Some personal background: I grew up in the Appalachian foothills of northwestern New Jersey, USA with two brothers in a (mildly observant, Conservative) Jewish household. I mention this because the former explains my insular upbringing, as opposed to the latter, which was the main encouragement for me to reach out to this site and others in an effort to better rationalize my own beliefs and world-view. These relative causes and effects appear to be somewhat unique from what I've observed in casual conversation with others, as well as a brief skimming of this site before I realized I simply had to join it. (Forgive my squee as I step into the unknown of online forums and blogs.)
Where I am (or would like to be) headed: I will be working as an EMT until I can get the few post-bacc credits needed before I apply to medical school. Those credits may stretch into a Masters in BioMedical Engineering, but that is still up for grabs. For whatever reason, the race consciousness' need for progeny runs strong in me, although I'm not picky on if the children come from my genes, so long as it's legal. :). The reason I mention this, is that one of the most pressing issues I am currently facing is determining whether the girl I've been seeing for several years is the one. Please, do not feel compelled to respond with date tips - I only included this information as this selection is one of the driving forces behind my search for more logical and rational thinking.
(What a segue! I'm getting better at this introduction as it continues.)
Why I am here: Ha, I wish I could answer that question. But really, my reason for coming to Less Wrong cannot be pinned on any one issue, although there are some stronger points. One I've just mentioned. Another can be pointed at my belief system. (It may have the trappings of religion, and I may have been the Religious Affairs Liaison to my university's student government, but I dislike that word for reasons longer than I can enumerate at this moment.) Simply put, I was unsatisfied with my religious (note!) upbringing's ability to explain my experiences, so I 'checked out' many belief systems until I ironically persuaded myself into my current situation of being a more .....devout/observant/adjective-that-doesn't-call-forward-the-word-psychotic Jew than anyone else in my family. Certainly, I welcome any discussion on the topic, both because I wouldn't want to dissuade anyone from speaking openly to me and because this is still in a state of flux. That is, in fact, how I arrived at the site, when I followed a link while searching for a personal chavruta. My third and final motivation that I'll mention is that I simply and truly wish to clarify my own side while directly understanding others' in all aspects of my life. This is hardly new to me, but I've only recently learned that the tools for self-improvement may be found outside the mind, and I am thus reaching out to you.
Ultimately, I hope to get out of this site as much as I put into it (which I plan to be a lot). As you watch me grow, don't hesitate to correct me. I will certainly make an effort to ensure my future posts are not as long, nor as full of parenthetical comments. (Although really, I come from a not easily summarized background, and between being easily distracted and recently filling out application forms with limited characters, I just couldn't help myself.) I honestly am honored that anyone is even reading this far down into my words, as they're the first I've ever posted, and I realize I've gone on quite enough. In that spirit, thank you all so much for your time and contributions across this site. I look forward to getting to know you, myself, and maybe even some online etiquette. Goodnight to all, and to all a good night.
Yours, SamuelHirsch (Samuel on COW)
Replies from: SamuelHirsch↑ comment by SamuelHirsch · 2012-08-13T07:40:55.575Z · LW(p) · GW(p)
This is probably a tremendous faux pas but after waking up my girlfriend (work at 4am), I realized I could potentially make myself look less idiotic and stave off great frustration while risking the wrath of self-commenting haters. To wit, I did in fact know Less Wrong existed but wrongly assumed that it was a forum for self-aggrandizement, where one could simply type enough large words and be thought correct, rather than a platform for self-betterment. The irony in that sentence notwithstanding, this prejudice against bouncing ideas and methods of analysis off other people has held me back in the past. I will do my best to overcome it, both here and elsewhere. Thanks for your patience - I hope that provided a little insight into some of my limitations as I move forward.
comment by Reiya · 2012-07-25T01:39:00.304Z · LW(p) · GW(p)
Hello! I found this site due to a series of links that impressed me and tickled my curiosity. It started out with an article an author friend of mine posted on FB about "Incognito Supercomputers and the Singularity". It points out a possible foreshadowing of the advent of avatars as written about in his and his brother's books.
I am female, 55 years old, and tend to let my curiosity guide me.
I call myself a spiritual atheist. It wasn't until I could reconcile my intangible (spiritual?) experiences with my ongoing discovery that religion's definition of god was useless to me that I could use the term atheist and feel like it fit. Ironically, I found myself outgrowing my religious upbringing (Mormon and born-again Christian) when I desired a more honest relationship with god. It took several years of paying attention to what lined up and held together, and noticing what was no longer intellectually tenable, before I first came to the realization I could no longer call myself a Christian. The change to atheism with Buddhist leanings was not very hard after that.
I have been a massage therapist for almost 20 years now. I also enjoy using the symbolism and synchronicity of astrology for spiritual and psychological points of view. I suspect that many spiritual experiences have to do with right-brain functions. I am currently reading FINGERPRINTS OF GOD: What Science Is Learning About the Brain and Spiritual Experience, by Barbara Bradley Hagerty.
I honestly don't know much about logic and reason from a scientific or mathematical basis. I hope to change that as I spend time here reading and listening and thinking and changing as needed. I suspect I am right-brain dominant, and I learn in very different ways. Memorization is tricky for me; I learn best by doing and using my hands. It's a good thing I am a massage therapist.
Off the bat, I can say that I am delighted to see people willing to change as they get better data, and I am appreciating the idea of Crocker's rules. It is sometimes impossible to really exchange ideas if one has to stop and mop up the offended feelings of someone who doesn't understand the exchange of information for its own sake.
Thanks for doing this site and I'm looking forward to lurking for a while and then learning more about myself and others.
comment by Nighteyes5678 · 2012-06-08T20:03:57.797Z · LW(p) · GW(p)
Hey all. I figured that after a few long months of lurking, I might as well introduce myself (that way when I post elsewhere, someone doesn't feel obligated to smack my nose politely with a rolled-up newspaper and send me here), even though I can never figure out what to say.
I've now finished all the Sequences and I've successfully resisted the urge to argue with comments that are years old, and I think I've learned a lot. One of the high moments was that I had just finished reading the Zombie sequence when I met a friend of a friend, who started to postulate the Zombie world and concept. Thanks to my reading here, I'd already done some thinking about the matter and could engage with him intelligently. How awesome is that?
One of my biggest struggles is coming up with how some of the stuff on Less Wrong is applicable to normal life. I'm not an AI researcher, I get confused by computers, and I'm a fairly normal person. I'm into the outdoors, writing (dream job, right there), teaching, history, and board games. A lot of times, then, I wish the Sequences had parts after each post that suggested ways that the principles impacted normal life. Trying to figure out how to connect the Bayesian way to more normal decisions is challenging. Perhaps this has already been addressed - Less Wrong is also a labyrinth for newbies. ^_^
As far as posting goes, I'm still finding the right line between being investigative and being defensive/aggressive. Generally, I'm impossible to offend and I don't take things personally. I'll try and live that creed as well as just say it, but now it's on record. I also believe strongly in giving someone the benefit of the doubt, or taking their statement in the best possible light.
I'm not sure what else to say, but if there's one thing I've learned here, it's that people are always happy to point out areas that are lacking in both information and depth. Hope to see y'all around and I'm looking forward to exploring various things with awesome folk.
comment by geneticsresearcher · 2012-05-22T01:31:07.810Z · LW(p) · GW(p)
Hello, everyone. It's a pleasure to be here. I look forward to participating in discussions.
Replies from: JoshuaZ
comment by curiosity · 2012-04-19T04:58:19.317Z · LW(p) · GW(p)
Hello! I was introduced to LessWrong through HPMOR. I find rationality interesting as someone who was brought up in an extremely religious household and am trying to wade through what I actually believe rather than what I was taught.
I'm seventeen and am interested in the rationality summer camp, but the "gifted in math" part is stopping me short. I'm in honors and AP classes, but I'm not especially amazing at math, nor am I especially bad at it. Is genuine interest in the subject matter enough?
Replies from: Nisan, Bugmaster↑ comment by Nisan · 2012-04-19T06:22:06.966Z · LW(p) · GW(p)
The announcements for the May, June, and July minicamps don't mention a "gifted in math" requirement. You should definitely apply!
EDIT: The May, June, and July minicamps differ from the August minicamp in having less advanced math. This doesn't mean they're less useful! They do cost money but there may be scholarships available. And there's no reason you can't apply to multiple camps.
↑ comment by Bugmaster · 2012-04-19T05:10:35.950Z · LW(p) · GW(p)
Is genuine interest in the subject matter enough?
I may be jaded, but IMO having a "genuine interest" in math would already put you in the 99th percentile of the population. This might not be as good as being "gifted" (whatever that means), but it should at least be close enough for a rationality camp.
Edited to add:
DISCLAIMER: I myself have never been through the rationality camp, so I'm just guessing here.
comment by VKS · 2012-04-08T12:08:05.400Z · LW(p) · GW(p)
Hello!
I should have read this post before I started posting.
I'm here because figuring out how thinking works is something I am interested in doing. I'm a freshman student in mathematics somewhere on planet Earth, but I know an unpredictable amount of mathematics beyond what I am supposed to. Particularly category theory. <3 Cat. Terrible at it for now though.
I hope I can say things which are mostly interesting and mostly not wrong, but my posting record already contains a certain number of errors in reasoning...
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-04-26T22:08:48.841Z · LW(p) · GW(p)
(Slightly belated) welcome!
<3 Cat.
Can you explain this bit?
Replies from: VKS, dlthomas↑ comment by VKS · 2012-04-27T00:07:47.417Z · LW(p) · GW(p)
As dlthomas says, Cat is the category of all (small) categories. (The small is there in certain (common (?)) axiomatizations only, in which CAT is the quasi-category of all categories.) In abjectly terrible metaphor, a category can be taken as a mathematical structure which represents a particular field of mathematics. So you have things like Grp, the category of groups and group homomorphisms, for group theory, Top, which contains topological spaces and continuous transformations for topology, Set for set theory, etc, etc... This is why they are called categories, as they categorize mathematics into the study of the things in various categories.
So what I'm saying is that I like Cat, which is the category of all categories, which is the same as saying that I like Category Theory. (It also sometimes, depending on your axioms, means that I like all of mathematics, which is also true.) Which is what I (redundantly) said in the text.
In other words, you probably didn't actually miss anything ;p
(P.S.: If you meant to ask why about <3 (so why I like it) rather than why about the Cat, I have badly misinterpreted your message.)
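For anyone following along without the background: the identity and associativity laws that make something a category can be sketched in a few lines of Python (a toy illustration of my own, treating Python functions as the morphisms of a Set-like category; none of this code comes from the comment above):

```python
def compose(g, f):
    """Morphism composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def identity(x):
    """The identity morphism on any object."""
    return x

# Two sample morphisms between number "objects"
double = lambda x: 2 * x
succ = lambda x: x + 1

# Identity laws: id . f == f == f . id (checked pointwise at x = 7)
assert compose(identity, double)(7) == double(7) == compose(double, identity)(7)

# Associativity: h . (g . f) == (h . g) . f (checked pointwise at x = 3)
lhs = compose(double, compose(succ, double))
rhs = compose(compose(double, succ), double)
assert lhs(3) == rhs(3) == 14  # double(succ(double(3))) = 2 * (2*3 + 1)
```

Grp, Top, and Set differ only in what the objects and morphisms are; the two laws checked above are the same in every category.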
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-04-27T12:20:36.699Z · LW(p) · GW(p)
Thank you, that helps.
comment by HungryTurtle · 2012-03-01T15:40:47.523Z · LW(p) · GW(p)
Hello,
I have been coming to this site for about a month now. I would prefer to be known as HungryTurtle if that is okay.
I have a friend who I like to play with who recommended this site to me. Honestly, I was coming to this site hoping to find some fun people to play with. When I say "play" I do not mean it in a condescending way. My concept of play is similar to the idea of The Beginner's Mind in Buddhism. Anyway, being here a month, I have realized that the ideas on this blog have great meaning to its members, and that to not address them in this way would be crude. So for now I guess I am just trying to interact as humbly as I can. I would classify myself as a moderate rationalist, meaning that while rationalism is important to me it is not as important as moderation. I do think there is such a thing as rational irrationality. If I ever get 20 karma points I would love to write a post about it, even though it will probably bankrupt my karma. I am currently working for an American Non-profit called Teach For America. I am not sure what I will do after that. If there is anything else anyone would like to know feel free to ask.
comment by krey · 2012-02-09T13:02:09.123Z · LW(p) · GW(p)
Hello, I am Kris,
I study Mathematics and Computer Science at Oxford, I am interested in learning about Bayesian statistics/machine learning and its principles (Cox's theorem, Principle of MaxEnt) and tend to do things (overly) rigorously.
From my very limited experience, it appears that lesswrong applies these principles to real life, which is interesting as well, but at the moment I am more focused on Jaynes' "robot model".
I really like Jaynes' book, however it has come to my attention that some parts are outdated/unrigorous and I'm hoping that this forum will tell me what the state of the art is.
Looking forward to becoming part of the community :)
comment by naturelover7 · 2012-01-25T16:30:25.098Z · LW(p) · GW(p)
I just discovered this page today after googling "believe in beliefs." I was searching for discussions much like what I found here. You see, I am neither theist nor atheist. I am what I refer to as a "naturalist". I also identify myself as libertarian, hippie, free thinker. There may be another name for this belief system of mine, but I have yet to find it. I identify the "God" of the Bible and Koran as what science refers to as our "Universe". I believe they are one and the same, after studying the context of the literature and realizing the primitive knowledge of science during those ancient times. As far as my education, I have a BS in geography and history. I believe religion can be very beautiful. I also appreciate any positive, goodwill thoughts directed my way and find it very disturbing when others refuse to see it that way, which is part of why I am here now. I believe in universal love, hope, and peace. I admire aspects of all religions. What I do not understand is when an atheist bashes a particular religious group with the same disrespect they accuse that same religious group of. I do realize that is a reflection of an individual's experiences and education, etc. But this is something that I cannot seem to escape, despite my own beliefs. I must find a way to understand what I find to be hateful and silly behavior. I do not believe bashing others to be so wise. It just keeps the hate moving full circle. As far as the passion of the secular folk, I understand it. It is how they believe. Only death will change it. Those with spiritual beliefs have existed far longer than any of us studying the discoveries found in modern science that answer the age-old questions man's religions sought to answer. I do have other concerns in rationality, but it's way too soon to determine if this is an appropriate place to discuss them, since they are of a taboo nature.
Replies from: TimS
comment by Freetrader · 2012-01-03T22:37:19.172Z · LW(p) · GW(p)
Hi everyone,
I am Freetrader, 31, from Barcelona. I am an engineer and I worked in the industry for some years, especially in the fields of operations management and quality, since I enjoy analyzing stuff and creating systems.
I have a very eclectic nature and I'm a bit of a hack, jumping from one thing to another (a trait I don't like very much in myself). This led me to change jobs often from one company to another (luckily, it seems I am good at getting new jobs, for some reason), until I finally realized that I was not cut out for working for a paycheck at the end of the month, and that what I really wanted was to be my own boss.
So, long story short, I also failed at entrepreneurship; however, toward the end I found out about day trading, and I got hooked by it. I've been studying and practicing day trading on the currency markets for almost a year now, and I'm finally starting to see some (small, weak) success.
(By the way, I'd like to mention my nickname Freetrader doesn't come from being a trader -I was using it long before-, but from the RPG Traveller, where the Free Traders are the small ragtag cargo ships that go from planet to planet taking the odd jobs no one else would and living adventures - think Firefly or Han Solo. I always wanted to be like that, and to a point I realize I managed my business like that, which didn't work so well, but was fun.)
Just recently I found Less Wrong in my RSS aggregator, though I can't remember how it got there (I vaguely recall finding the link while doing some research on planned cities and thinking to myself, "this could be important, save it for later", then forgetting about it). Anyway, when I started reading it, it struck a chord with me, and for the last few weeks I've been reading articles and some sequences with great interest and delight. So I'll stick around. Maybe contribute a little, but I'm shy.
Finally, I'd like to take the chance to ask whether anyone else here is a fellow trader. I noticed many of the articles mention trading in examples; however, I have not found any article specifically about it, or written from the perspective of a trader. Indeed, day trading seems like an excellent path for a rationalist with good bayescraft who doesn't mind staring at a computer screen all day if it makes them good money. There's lots of irrationality in that field, especially within the world of what I call "pop traders": self-taught traders who trade their own money from home (like myself, so it's the world I know). Still, many cowboy traders have hacked a way into making a pretty decent living from it, so a good rationalist should be able to get much more.
In summary, I'd really like to meet a rationalist trader and compare experiences. Or well, anyone interested by the topic in general. Or any nice people, so feel free to say hi!
Replies from: Chriswaterguy↑ comment by Chriswaterguy · 2012-01-08T09:28:26.470Z · LW(p) · GW(p)
I used to trade the stock market, getting into Bollinger Bands and other kinds of chart analysis. Had some successes, but the times that losses came, they were sudden and brutal. In the end, I decided I didn't enjoy it enough to do it well. And I wasn't quite sure I had the ability - the charts seem to work in hindsight, but there were a lot of factors that made looking at patterns in old charts deceptive - the fact that bankrupt stocks were removed from the data history by my data supplier was one obvious problem. And almost every other trader I knew seemed to be hopeful of making a buck, rather than already making a buck - with only one exception, a guy who did brilliantly, but I could never work out his methods.
I'm now earning some money as a consultant, and when I've got enough to put in the market, I'll be doing it longer term, probably in some variation of the "Dogs of the Dow" methodology, with a basic ethical filter. Or if that's too much work, an index fund. Maybe I could have been richer if I'd dedicated myself to paper trading and then working hard on real life trading, or maybe I would have lost more money. Either way, I'm happier with my life now - but that's just me.
Good luck!
Replies from: Freetrader↑ comment by Freetrader · 2012-01-28T15:07:48.208Z · LW(p) · GW(p)
In my experience most technical analysis and indicators are unreliable, and most of the patterns that many traders use and teach others are spurious. Take the Bollinger Bands: say the price approaches the extreme of the band. Some people will tell you it means the price is moving and will break out; others, that it's at an extreme value and will soon go back to the average. But which one is it? No one can really tell you, and if you try to calculate the probabilities from past results it comes out around 50/50, as you would expect from the efficient-market hypothesis. In hindsight it works every time, but when you are trading you might as well toss a coin. And the same happens with many techniques people trade with, trusting them without questioning them, blinded by winning streaks or by apparently excellent results in hindsight.
The only thing I found useful for day trading is that the price most of the time moves in sudden little bursts, so if you can detect a burst early you can hop in and capture 5 to 30 pips from the move (1 pip is 0.01% of a currency value). It's a work-intensive way of trading, but I am having good results with it so far. Another problem is that it relies a lot on intuition. Most people who do this kind of thing cannot explain very well why they took a position, they just felt the move was about to happen after lots of experience in the market. I am sure bayesian analysis can help with that.
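The 50/50 claim above is easy to check on simulated data. Here is a minimal sketch (my own illustration, not Freetrader's actual method; the window size, band width, and random-walk price model are all arbitrary assumptions): on a pure random walk, a close above the upper Bollinger Band is followed by a further rise roughly half the time, just as the efficient-market argument predicts.

```python
import random

random.seed(0)

# Simulate a pure random walk as the "price" series.
prices = [100.0]
for _ in range(50000):
    prices.append(prices[-1] + random.gauss(0, 1))

window, k = 20, 2.0  # common Bollinger settings: 20-period window, 2 standard deviations
continued_up = reverted = 0
for i in range(window, len(prices) - 1):
    w = prices[i - window:i]
    mean = sum(w) / window
    sd = (sum((p - mean) ** 2 for p in w) / window) ** 0.5
    if prices[i] > mean + k * sd:          # price pierced the upper band
        if prices[i + 1] > prices[i]:      # "breakout": it keeps rising
            continued_up += 1
        else:                              # "reversion": it falls back
            reverted += 1

frequency = continued_up / (continued_up + reverted)
print(f"{continued_up + reverted} band touches, breakout frequency {frequency:.2f}")
```

On other seeds or window settings the frequency stays near 0.5, which is the point: on a random walk, a band touch carries no information about the next step.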
Replies from: CWGcomment by DasAllFolks · 2011-12-28T02:23:08.879Z · LW(p) · GW(p)
Hi all,
I'm a 23 year old male living just outside of Philadelphia, PA, and this is my first post to LessWrong after having discovered the site through HPMoR the Summer of 2010. I have been reading through the Sequences independently for the past year and a half.
To make a long story short, I came to consider myself an aspiring rationalist after I used rational methods to successfully diagnose myself with Tourette Syndrome this past May (confirmed by a neurologist) after my symptoms, which I had exhibited since age three and a half, had been missed or misdiagnosed by my family and medical professionals alike for 20 years. Successfully hacking such an insidious threat within my own mind inspired me to investigate how else rationality might enable me to optimize both my own life and the world, hence my increasing participation on this site.
I am presently doing some independent consulting work while I await audition results from graduate programs in Opera; if I do not gain acceptance to any program which I consider to be worth the time and money, I plan to return to school in the Fall to retrain in Physics (my original major was in Electrical & Computer Engineering and I worked as a software engineer for roughly a year, but my study of rational methods allowed me to realize, upon careful reflection, that my path to this field was quite haphazard and largely influenced by sunk costs. Armed with a better-trained brain, I intend to do better than this going forward).
LessWrong-related topics which interest me heavily (although I still consider myself a fledgling learner in nearly all of these) include Transhumanism and indefinite life extension, mind/body hacking for optimal health and self-improvement, methods for achieving optimal happiness, seasteading, and, as my experience with Tourette Syndrome might suggest, evolutionary psychology and using rational methods to overcome severe genetic and learned biases. I would be particularly eager to write an article analyzing the wide prevalence of neurological disorders such as Tourette Syndrome, Obsessive Compulsive Disorder, and ADHD in the general population through the lens of evolutionary psychology once my experience with the Sequences is somewhat more comprehensive.
Other personal interests include music (Classical voice, trumpet, piano, and conducting to varying degrees, with an eye to learning composition), entertainment technology/theme parks, theatre, physics, reading, writing, film, computer science (though not to the level of depth of many on this site), economics, travel, entrepreneurship, and a slew of other topics.
I would be particularly eager to create a regularly scheduled LessWrong meetup in Philadelphia or the Philadelphia suburbs, as it appears that the Philadelphia chapter has lain dormant for more than a year now. Please feel free to comment on this post if you are in the Philadelphia area and would be interested in doing this; I will happily take care of the scheduling logistics if it helps this group become a regular fixture!
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-12-28T02:45:06.898Z · LW(p) · GW(p)
I live in Philadelphia and I'm interested in meet-ups.
Replies from: DasAllFolks, loxfordian↑ comment by DasAllFolks · 2011-12-29T04:46:51.039Z · LW(p) · GW(p)
I'm happy to hear it, Nancy. I'll be announcing a Philadelphia meetup shortly; I hope to see you on the comment thread!
↑ comment by loxfordian · 2011-12-28T04:10:26.040Z · LW(p) · GW(p)
I live right outside Philly and would be interested in meet-ups too!
Replies from: DasAllFolks↑ comment by DasAllFolks · 2011-12-29T04:47:19.369Z · LW(p) · GW(p)
I'm happy to hear it! I'll be announcing a Philadelphia meetup shortly; I hope to see you on the comment thread!
comment by habanero · 2013-04-21T21:15:12.586Z · LW(p) · GW(p)
Hello everyone!
I'm 21 years old and study medicine plus Bayesian statistics and economics. I've been lurking on LW for about half a year, and I now feel sufficiently updated to participate actively. I highly appreciate this high-quality gathering of clear thinkers working towards a sane world. Therefore I often pass LW posts on to people with promising predictors in order to shorten their inferential distance. I'm interested in fixing science, Bayesian reasoning, future scenarios (how likely is dystopia, i.e. astronomical amounts of suffering?), machine intelligence, game theory, decision theory, reductionism (e.g. of personal identity), population ethics, and cognitive psychology. Thanks for all the lottery winnings so far!
comment by PatSwanson · 2012-12-03T21:49:08.235Z · LW(p) · GW(p)
Hi!
I'm 29, and I am a programmer living in Chicago. I just finished up my MS in Computer Science. I've been a reader of Less Wrong since it was Overcoming Bias, but never got around to posting any comments.
I've been rationality-minded since I was a little kid, criticizing the plots and character actions of stories I read. I was raised Catholic and sent to Sunday school, but it didn't take and eventually my parents relented. Once I went away to college and acquired a real internet connection, I spent a lot of time reading rationality-related blogs and websites. It's been a while, but I'd bet it was through one of those sites that I found Less Wrong.
comment by MikoMouse · 2012-08-13T00:00:52.097Z · LW(p) · GW(p)
Hullo everyone
It's nice to be here. I think. I'm not quite sure about any of this, but I hope to be able to understand it someday, if not soon. Hopefully this site will be able to broaden my mind and help with my dismal opinion of the world and its people as of late.
My name is Tamiko, or Miko if you prefer. I have been living in Southern California for the last 12 years and am currently 17 and a half years old. Recently I have been reading a certain fanfic called Harry Potter and the Methods of Rationality. That is what led me to this site. What pulled me in, though, were the concepts this site promotes. I want the truth and all it entails. I am curious and will not be satisfied until I have the answers to most, if not almost all, of my inquiries.
I hope we can all work together to make this world better. Thank you all for your time.
comment by TGM · 2012-07-12T00:30:56.452Z · LW(p) · GW(p)
There appear to be two "Welcome to Less Wrong!" blog posts. I initially posted this in the other, older one:
I’m 20, male and a maths undergrad at Cambridge University. I was linked to LW a little over a year ago, and despite having initial misgivings for philosophy-type stuff on the internet (and off, for that matter), I hung around long enough to realise that LW was actually different from most of what I had read. In particular, I found a mix of ideas that I’ve always thought (and been alone amongst my peers in doing so), such as making beliefs pay rent; and new ones that were compelling, such as the conservation of expected evidence post.
I’ve always identified as a rationalist, and was fortunate enough to be raised to a sound understanding of what might be considered ‘traditional’ rationality. I’ve changed the way I think since starting to read LW, and have dropped some of the unhelpful attitudes that were promoted by status-warfare at a high achieving all-boys school (you must always be right, you must always have an answer, you must never back down…)
I’m here because the LW community seems to have lots of straight-thinking people with a vast cumulative knowledge. I want to be a part of and learn from that kind of community, for no better reason than I think I would enjoy life more for it.
Replies from: RobertLumley↑ comment by RobertLumley · 2012-07-12T19:25:02.486Z · LW(p) · GW(p)
Welcome. Even though we're already PMing, I thought I'd clarify: there are many Welcome to LessWrong threads - I think there are more than two, but there may not be. Since the page doesn't display more than 500 comments, we make a new thread every now and again so that it displays all of them.
Edit: I guess by this metric, we need to make a new one again... There was a 600-comment-or-so infanticide discussion in the first few months of 2012, I think, which led to this one filling up.
comment by witzvo · 2012-05-27T00:17:50.071Z · LW(p) · GW(p)
You can call me Witzvo. My determination of whether I'm a "rationalist" is waiting on data to be supplied by your responses. I found HPMOR hilarious and insightful (I was hooked from the first chapter which so beautifully juxtaposed a rationalist child with all-too-realistic adults), and lurked some for a while. I have one previous post which I doubt ever got read. To be critical, my general impression of the discussions here is that they are self-congratulatory, smarter than they are wise, and sometimes obsessed with philosophically meaningful but not terribly urgent debates. However, I do value the criteria by which karma is obtained. And I saw some evidence of responses being actually based on the merits of an argument presented, which is commendable. Also, Eliezer should be commended for sticking his neck out so far and so often.
I was born into a sect of Christianity that is heretical in various ways, but notably in that they believe that God is operating all for the (eventual) good of mankind, and that we will all be saved (e.g. no eternal Hell). I remain agnostic. Talk about non-falsifiability and Occam's razor all you like, but a Bayesian doesn't abandon the possibilities to which he assigns prior mass without evidence, and even then the posterior mass generally just drops towards 0, not all the way. Still, my life is basically secular; I don't think there's an important observable difference in how I live my life from how an atheist lives, and that's pretty much the end of the matter for me. Oh, perhaps I have times of weakness, but who doesn't?
I have formal training in statistics. I am very sympathetic to the Savage / de Finetti schools of subjective Bayesianism, but if I had to name my philosophy of science I'd call it Boxian, after George Box (c.f. http://www.jstor.org/stable/2982063; I highly recommend this paper AND the discussion. Sorry about the pay walls).
I find the Solomonoff/Kolmogorov/AIXI ideas fascinating and inspiring. I aspire to compute for example, (a computationally bounded approximation to) the normal forms of (a finite subsequence of) a countable sequence of de Bruijn lambda terms and go from there. I do not see any lurking existential crisis in doing so.
In fact, maybe I've missed something, but I have not yet identified an actionable issue regarding one of the much-discussed existential crises. I do not participate much in the political system of my country or even see how that would help particularly except and unless through actual rational discussion and other action.
I find far more profit in exploring ideas, such as say, Inventing on Principle (http://vimeo.com/36579366), or Incremental Learning in Inductive Programming (http://www.cogsys.wiai.uni-bamberg.de/aaip09/aaip09_submissions/incremental.pdf), either of which I would be happy to discuss.
I am also intellectually lonely.
That's probably more than enough. Go on and tell me something less wrong.
Replies from: None, CWG↑ comment by [deleted] · 2012-05-29T06:47:16.079Z · LW(p) · GW(p)
Well, the standard response to the whole 'agnostic' debate is that while probability is subjective, probability theory is theorems: you and I are only ever allowed to assign credence according to the amount of evidence available, and the God hypothesis has little, so we believe little. This gives me the mathematical right to make the prediction "the Judeo-Christian God does not exist" and expect to see evidence accordingly. We say ~God because that is what we expect.
Other than that, welcome to Less Wrong. If you have time to read a book draft significantly longer than The Lord of the Rings trilogy, written in blog posts, I recommend reading the sequences in chronological order (use the article navigation at the bottom).
Replies from: witzvo↑ comment by witzvo · 2012-05-29T13:13:58.313Z · LW(p) · GW(p)
Thank you for the chronological suggestion; I think that will help it to make sense.
I would comment that the consistency theory for posteriors only guarantees convergence to "the truth" WHEN the likelihood terms are capable of making distinctions between the different possible models. If data is unavailable that distinguishes them... no convergence, just stabilization to potentially different subjective posteriors supported on the indistinguishable set. Since all we care about is making decisions, this is usually a distinction with no difference, except if there is a future event to come where they will make different predictions.
Accordingly my "God hypothesis" posterior probability has not updated in a very long time. I am, for example, not aware of what evidence there exists AT ALL with which to update this prior, but I suppose this is just because my posterior mass, conditional on the God hypothesis, came to rest, long ago, on a very non-falsifiable, (arguably science-compatible) version of belief in God. I.e. belief in a God who "operates all in accord with His will", but who also does this exactly through the naturalistic mechanisms that we see operating all the time and not through miraculous interventions. Or, to be more precise, another indistinguishable possibility is that God is perpetrating miraculous interventions continuously, and just consistently wills to do them in exactly the way that the true but unknown "laws of nature" predict. It remains possible that God could revoke this policy at some future time: this does not require much Kolmogorov complexity. I can rule out the ordinary miraculous part of the mass ("why did it happen? Because God wanted it that special way in this special case") because it's not predictive. Accordingly my posterior does not give me "the mathematical right" to assert God's non-existence.
Indeed, unless posterior mass is actually 0, I don't think this is a matter of "mathematical rights" anyway. We're making decisions under uncertainty, and we have some model for what the loss is if we make a mistake. You have, for example, the right to, say, not believe in God but not want to talk about it anymore, and otherwise act in accord with your belief, but I don't see how it gives you the right to assert God's non-existence. Sometimes we assign mass 0 to some possibilities, effectively, because we are not perfectly rational Bayesians, either we'd never thought of those possibilities, or it was computationally expedient to ignore them. In either of these cases we have no such "right".
If I had the misfortune to have my prior "initialized" by my environment somewhere with a "vengeful God" hypothesis, who waits in Secret to test if you are really "one of the chosen" faithful, I'd be in a more difficult predicament. Hopefully, if I'd been born in such a sect, they'd have had some other clearly falsifiable beliefs that come along with the dogmatic ride, so that I'd have a way to identify the error. The worrisome part here, I notice, is that my thinking is rather driven by how it was initialized.
Replies from: None, Desrtopa↑ comment by [deleted] · 2012-05-29T13:48:42.156Z · LW(p) · GW(p)
Well, you are certainly a lot better at Bayesian statistics than I am. But if I am to base my "physics-defying, benevolent, superintelligent sky wizard" hypothesis on evidence such as badly written holy books that look suspiciously like hodge-podge culture dumps, widespread religious disagreement, the continued non-answering of prayers, the failure to divine simple mathematical or physical truths, and how science is significantly more productive, well... Every time a prayer goes unanswered I can theoretically update on it toward a lower credence. Every conflicting holy scripture allegedly bequeathed by the divine, claiming to be the one truth, is a major hit.
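As a sketch of how such repeated small updates compound (the likelihood ratio and count here are purely made up, not claims about the actual evidential weight of any observation):

```python
# Hypothetical numbers throughout: suppose each "unanswered prayer" is 10%
# more likely under no-god than under an intervening god, i.e. a likelihood
# ratio of 1/1.1 per observation. Updates are additive in log-odds.
from math import log, exp

prior_odds = 1.0              # start at even odds (hypothetical)
lr_per_observation = 1 / 1.1  # hypothetical likelihood ratio
log_odds = log(prior_odds)
for _ in range(100):          # 100 such observations
    log_odds += log(lr_per_observation)

posterior = exp(log_odds) / (1 + exp(log_odds))
print(round(posterior, 6))    # ≈ 7.3e-05: many individually weak hits add up
```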
Meanwhile, the hypothesis that what most people call religious experience is located in the temporal lobe gains credence from every cognitive neuroscientific study of it.
If you haven't updated your credence in immortal physics defying super-intelligent friendly sky wizards in a while, I am going to tell you to look harder. The evidence is there. From what you state it seems that you might suffer from motivated cognition and belief hysteresis, so take this opportunity to get a feel for those so you might recognize them in the future.
Now, the hypothesis of "immortal physics-defying wizard doing whatever makes the world seem normal" is strictly dominated by the hypothesis of "whatever makes the world seem normal". That is Minimum Message Length/Solomonoff Induction right there.
But given that you already have Bayes down to a science, I don't see why you shouldn't become a fine Lesswrongian in just a short time. Welcome to Less Wrong.
Replies from: witzvo, army1987↑ comment by witzvo · 2012-05-30T01:48:41.020Z · LW(p) · GW(p)
Thanks for mentioning belief hysteresis.
It's a straw dummy to say that I haven't updated my priors about [insert ridiculous thing here]. Those are not the things I described having residual posterior mass on; my "sky wizard" is currently content to play by very different rules. As for the epiphenomena of how a society embraces religion, or does not: I agree, this does suggest that it would be foolish to base one's beliefs SOLELY on the "evidence" of the crowd as you grow up.
Also, incidentally, MML/Kolmogorov complexity and Solomonoff induction are not the same here. The former do not give a posterior on models; they select one model. The latter could and would easily put posterior mass on multiple programs that were observationally the same up to the amount of data we have observed so far.
Consider the sequence of numbers: 1, 2, 3. MML would (for most Turing machines, anyway) select the program next = old + 1. Solomonoff would have posterior mass on many other sequences (e.g. do a query on http://oeis.org/). It'd put most of its mass on that program, but leave nonzero mass on other programs, like one that computes this sequence, say: http://oeis.org/A006530. And if the next numbers reveal the sequence 1, 2, 3, 2, 5, 3, 7, 2, 3, 5, then you can bet that's where a lot of the mass will shift.
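A toy version of this example (two deterministic hypotheses and made-up complexity-penalized priors; nothing like a real universal prior over all programs): both programs fit 1, 2, 3, so both keep posterior mass until the data separates them.

```python
# Hypothesis 1: a(n) = n (the "next = old + 1" program).
def successor(n):
    return n

# Hypothesis 2: a(n) = largest prime factor of n (OEIS A006530), with a(1) = 1.
def largest_prime_factor(n):
    m, f, p = n, 2, 1
    while f * f <= m:
        while m % f == 0:
            p, m = f, m // f
        f += 1
    return m if m > 1 else p

hypotheses = {"n": successor, "A006530": largest_prime_factor}
prior = {"n": 0.9, "A006530": 0.1}  # hypothetical: shorter program gets more mass

def posterior(data):
    mass = dict(prior)
    for i, x in enumerate(data, start=1):
        for name, h in hypotheses.items():
            if h(i) != x:            # deterministic hypothesis, wrong prediction
                mass[name] = 0.0     # -> likelihood 0, mass killed
        z = sum(mass.values())
        mass = {k: v / z for k, v in mass.items()}
    return mass

print(posterior([1, 2, 3]))                       # both survive at the prior ratio
print(posterior([1, 2, 3, 2, 5, 3, 7, 2, 3, 5]))  # all mass shifts to A006530
```

On 1, 2, 3 alone the data never touches the relative masses; only the fourth term, where the predictions diverge, moves anything.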
Replies from: witzvo↑ comment by witzvo · 2012-06-13T08:16:24.462Z · LW(p) · GW(p)
... And the really subtle bit, that I don't want you to miss, is that it doesn't all shift there, it's still not a point posterior. There's still mass on:
- Use http://oeis.org/A006530 for the first Googol entries, and then, all zeros.
That mass won't go away for a LONG time (that is, if the data keeps coming from http://oeis.org/A006530 for at least googol/2 steps).
And there are many possibilities like this out there. Sure, they have less mass individually, but there ARE many of them. And some don't even have appreciably less prior mass (under a Solomonoff prior and a "reasonable" Turing machine). Computability is a whole 'nother issue.
↑ comment by A1987dM (army1987) · 2012-05-29T14:50:18.719Z · LW(p) · GW(p)
immortal physics defying super-intelligent friendly sky wizards
If I understand what you mean, put a hyphen between “physics” and “defying”, because I had mis-parsed that the first time.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-29T15:24:20.846Z · LW(p) · GW(p)
Hee!
I sort of like the idea of immortal physics defying super-intelligent friendly sky wizards, actually.
"Hamlet, in love with the old man's daughter, the old... man... thinks."
↑ comment by Desrtopa · 2012-05-29T14:23:34.185Z · LW(p) · GW(p)
Accordingly my "God hypothesis" posterior probability has not updated in a very long time. I am, for example, not aware of what evidence there exists AT ALL with which to update this prior, but I suppose this is just because my posterior mass, conditional on the God hypothesis, came to rest, long ago, on a very non-falsifiable, (arguably science-compatible) version of belief in God. I.e. belief in a God who "operates all in accord with His will", but who also does this exactly through the naturalistic mechanisms that we see operating all the time and not through miraculous interventions. Or, to be more precise, another indistinguishable possibility is that God is perpetrating miraculous interventions continuously, and just consistently wills to do them in exactly the way that the true but unknown "laws of nature" predict. It remains possible that God could revoke this policy at some future time: this does not require much Kolmogorov complexity.
What evidence was there in the first place to promote to your attention the hypothesis of a naturalistic god over no god at all? If you don't have any particular evidence that favors a naturalistic god over no god, surely the hypothesis of no god requires less complexity.
A Bayesian might never abandon possibilities to which he or she assigns prior mass without new evidence, but in addition to evidence that shifts the posterior, one can also revise a belief on information that suggests that an inappropriate prior was assigned to begin with.
Replies from: witzvo↑ comment by witzvo · 2012-05-30T01:29:37.641Z · LW(p) · GW(p)
Well, I was raised on it. If one day your mom says, "don't touch the stove, it'll hurt", and voilà, she's right, you start to think maybe you ought to pay attention to what they're telling you sometimes, including when they talk about "God." There's no way to distinguish one form of advice from the other until you get more experience. On this basis many things are acquired by making inferences from the actions of people around us as we are growing up. "Everyone is wearing pants. Hmm. I guess I should too" is a pretty good heuristic Bayesian argument for many things, and keeps us out of trouble in unfamiliar situations more often than not [cite some darwin page on here].
If I hadn't been raised that way, probably nothing would have promoted it to my attention.
Replies from: Desrtopa↑ comment by Desrtopa · 2012-05-30T01:44:12.522Z · LW(p) · GW(p)
Knowing more about the processes that actually gave rise to your parents' pronouncements on religion, do you think you were right to assign as much weight of evidence to them as you originally did?
Replies from: witzvo↑ comment by witzvo · 2012-05-30T05:55:04.442Z · LW(p) · GW(p)
Ah. Well, you've got me there. I'll think about it. Your comment makes me think, though, about a more general issue. Is there a name for a bias that can happen if you think about an issue multiple times and get more and more convinced by what is, actually, essentially only one piece of evidence?
Replies from: Desrtopa↑ comment by CWG · 2012-05-29T05:49:45.480Z · LW(p) · GW(p)
Carl Sagan described himself as agnostic, and it's a rational position to hold. As Sagan said:
"An atheist is someone who is certain that God does not exist, someone who has compelling evidence against the existence of God. I know of no such compelling evidence. Because God can be relegated to remote times and places and to ultimate causes, we would have to know a great deal more about the universe than we do now to be sure that no such God exists. To be certain of the existence of God and to be certain of the nonexistence of God seem to me to be the confident extremes in a subject so riddled with doubt and uncertainty as to inspire very little confidence indeed".
However, I personally attach zero likelihood to anything like the Christian, Muslim, Jewish or Hindu god or gods existing. Technically I might be an agnostic, but I think "atheist" represents my outlook and belief system better. Then again, "a-theism" is defined in terms of what it doesn't believe. I prefer to minimize talking about atheism, and talk about what I do believe in - science, rationality and a naturalistic worldview.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-05-29T14:58:32.194Z · LW(p) · GW(p)
0 and 1 are not probabilities anyway, so refusing to call someone an atheist (or a theist) because they assign a non-zero (or ‘non-one’) probability to a god existing seems pointless to me, because then hardly anyone would count as an atheist (or a theist). (It's also a fallacy of gray, because assigning 0.1% probability to a god existing is not the same as assigning 99.9% probability to that.)
Replies from: witzvo↑ comment by witzvo · 2012-05-30T01:22:04.589Z · LW(p) · GW(p)
This kind of comment completely throws me off. I will have to read Eliezer's arguments more carefully to see the meaning of these things further, but "0 and 1 are not probabilities"? What?! Under what mathematical model for Bayesianism is this true? I read "it's more convenient to use odds when you're doing Bayesian updates" and some discussion of the logistic transformation, some reasonable comments about finite changes under updating, and that MAYBE we should try to formulate a probability theory without 0 and 1 (resp. -inf, inf), up to the title's apparent claim: "0 and 1 are not probabilities". Talk about a confusing non-sequitur. Same thing with the fallacy of gray. What? Eliezer rejects P(not A) = 1 - P(A) too?? I haven't read that yet, but whatever form of Bayesianism he subscribes to, it's not a standard one mathematically.
EDIT: never mind about the gray. I misread it. Gray is just ignoring the difference between different probabilities. This applies to the word "agnostic" (i.e. are you a high agnostic or a low agnostic) but, then, nobody forced me to declare my probability numerically. I was just introducing where I fit in the usual spectrum: P({possibilities in which there is a God} | life) > 1/100. EDIT2: Thanks for the "gray" post. I liked this best: "If you say, "No one is perfect, but some people are less imperfect than others," you may not gain applause; but for those who strive to do better, you have held out hope. No one is perfectly imperfect, after all." EDIT3: deleted a nonsense remark.
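For what it's worth, the odds/logistic point mentioned above fits in a few lines (a sketch with arbitrary likelihood ratios): in log-odds form a Bayesian update adds a finite log-likelihood ratio, so p = 0 and p = 1 sit at the unreachable endpoints -inf and +inf.

```python
# Sketch of the "0 and 1 are not probabilities" intuition in log-odds form:
# each update adds a finite amount of evidence, so no finite run of evidence
# ever reaches p = 0 or p = 1, which sit at -inf and +inf on the log-odds line.
from math import log, exp, inf

def to_log_odds(p):
    if p == 0:
        return -inf
    if p == 1:
        return inf
    return log(p / (1 - p))

def from_log_odds(x):
    return 1 / (1 + exp(-x))

x = to_log_odds(0.5)      # start at even odds
for _ in range(10):       # ten pieces of strong evidence, likelihood ratio 10 each
    x += log(10)

p = from_log_odds(x)
print(p < 1.0)            # True: still strictly below certainty
print(to_log_odds(1))     # inf: certainty is infinitely far away in evidence
```

This is a restatement of the post's argument in code, not a claim that 0 and 1 stop being elements of standard probability theory.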
comment by blackhole · 2012-05-25T22:00:38.667Z · LW(p) · GW(p)
Hello everyone. I’ve joined this site because I have a goal of being a very rational person. Intelligence and logic are very important to me. Actually I have spent many years seeking truth and reality, probably the same as everyone else spending time here. I’m not here to prove anything but rather to learn and have my own ideas tested and checked. I’m hoping to remember the rules and etiquette so that I don’t come across the wrong way (very easy to do when not face to face) or waste anyone’s time. I’m a family man who is concerned about my children’s future because of the swift pace of technological change and its resultant social effects. For example, the smartphone phenomenon and the increased socialization this allows. Entranced texters on an unrelenting zombie-like invasion make one ask: what the hell is going on here? To me, it’s an emotional issue that is detracting from intellectual growth and the evolution of intelligence. Is it the fall of Rome? Can a few brilliant minds come up with the tools (A.I.?) that will change the masses from heading blindly down a path leading to destruction? Help required!
comment by GoldenWolf · 2012-05-13T16:31:08.396Z · LW(p) · GW(p)
Found HPMOR, changed my life, etc. Been reading for a couple years, and I figure it's finally time to start actually doing something. Not an academic at all. I'm in the Army and spend my free time with creative writing, but I understand most of the material, and I am capable of applying it.
I have a question that's not in the FAQ. I recently read The Social Coprocessor Model. I want to reread it again in the future without keeping a tab permanently open. There is a save button near the bottom, and I clicked it. How exactly does this work? I can't figure out how to access the post from the main page. I suppose I could always keep a document with my favorite links or clutter up my browser's favorites, but it seems stupid if there's already a system in place here.
Replies from: Randaly↑ comment by Randaly · 2012-05-13T16:44:04.990Z · LW(p) · GW(p)
Welcome to LessWrong!
If you get to either the main or the discussion page by clicking on either button, you should see a smaller row of buttons immediately beneath the two big buttons ("Main" and "Discussion"). One of them should read "Saved"; if you click on that, you'll see all of the posts you've saved.
Replies from: GoldenWolf↑ comment by GoldenWolf · 2012-05-13T17:06:37.685Z · LW(p) · GW(p)
Thanks!
comment by Hang · 2012-05-07T18:58:49.776Z · LW(p) · GW(p)
I'm a master's candidate in Logic at UvA. Rationality is one of my interests, although I seem to have come from the opposite side of the spectrum from everyone at LessWrong (from metaphysics and philosophy to rationality).
I am very interested in observing the reductionist approach, even more so after learning Eliezer values GEB so highly.
comment by Plubbingworth · 2012-03-03T05:19:48.726Z · LW(p) · GW(p)
Hello there. I am Plubbingworth. I am twenty, and I first caught wind of the delicious stench of Rationality all the way from where I was before, but only after I began to seek it. HILARIOUS COINCIDENCE: I read about the Less Wrong Community and read HPMOR completely separately without realizing the connection, how funny is that?!
Anyway. I was reading and absorbing and learning as much as I could from every facet of this wonderful website, when I realized, to my dismay, that there was not much of a concentration in the use of Rationality in this fine state of Tennessee. Which is a real shame. Maybe something should be done about that!
I can't say what I plan to learn from this. Time and effort will tell. But a sense of community is always nice. Also, I felt like a creeper.
Replies from: RobertLumley↑ comment by RobertLumley · 2012-06-13T16:11:54.379Z · LW(p) · GW(p)
Welcome to LW! Nice to see another Knoxvillian. But we've been talking via PM, so I won't continue that conversation here.
comment by iDante · 2012-03-01T05:31:09.682Z · LW(p) · GW(p)
Hello everybody! My name is Fish and I'm almost 20. I'm at a decent enough university studying physics, mathematics, and computer science. My GPA in math and science courses is 4.0 so I applied to a better college a few months ago. Hopefully I'll get in :D. I'm currently interested in quantum computing as a career, but obviously that's not final.
Having two molecular biologists as parents, I grew up understanding evolution, the scientific method, and other such Important Things. I was never religious, despite the fact that my neighbors dragged me to church twice a week for 5 years or so.
I got introduced to rationality while discussing free will with my religious friend and it all just made sense. I've taken a lot of psychology courses so I have a pretty good grip on that sort of thing, and obviously quantum physics isn't a problem for me. Bayesian reasoning was new but I think I understand it now, at least in principle if not in practice.
That's all I have to say about me.
comment by Douglas_Reay · 2012-02-19T15:54:21.051Z · LW(p) · GW(p)
Hello, and thank you for the welcome.
The panoply of my writings on the Web more resembles a zoo or cabinet of curiosities than a well groomed portfolio. None the less, for your delectation (or, at least, amusement), here is a smattering:
- From 'Is' to 'Ought'
- Precedent Utilitarianism
- DIKEW
- The Topos Of Paradox
- Tide of Light
- Amicog
- Human Swarms
- Dyson Bubbles
- Big Numbers
The Thoughtful Manifesto
Thought is good.
Thought is the birthright of every human being. Having a brain capable of rational thought is what distinguishes people from animals. To disparage thought is a betrayal of our achievements, our history, of our very identity.
It is the duty of every society, of every parent, of every thoughtful person to encourage thinking - to praise it, to practice it, to improve it, whatever the context. Because the more you think, the easier it becomes. Take joy in thinking, make a habit of it, turn it into a strong tool that you can trust and rely on. Cherish it.
Because thought is the highest freedom. Because without freedom of thought there is no meaning to freedoms of speech, belief or travel. The person who tries to stop you being able to use your brain to its fullest, or who tries to dissuade you from practicing rational thought, is as much your enemy as the person who tries to lock you in prison. Shun them. Do not tolerate it, not for an instant.
Take pride in thought. Stand up for it, defend the practice of thinking where'er it may be attacked. Thought is your friend and ally, but it is under siege. "Conform," the non-thinkers say, "Don't stand out, don't do what I don't do." Fear, envy and laziness are the enemies of thought. Thought is the enemy of the abusive, the mediocre and the thoughtless.
Considerate people are thoughtful of others. Creative people are thoughtful of new ideas. Great leaders are thoughtful of what must be done. Whatever you do or strive for, thinking helps. Thought is the blessing of society, the hope of the future, the essence of life.
Think!
comment by fluchess · 2012-02-02T11:05:42.222Z · LW(p) · GW(p)
Hi everyone, I am a 19-year-old undergraduate science student majoring in statistics, living in Australia. For fun I play chess and flute, both of which I am quite mediocre at, but I find them stimulating and challenging. I am always trying to improve myself in one way or another, whether it be learning or practicing skills.
I have an academic interest in maths, statistics and biology, and eventually want to be a biostatistician. I was originally seen as academically gifted; however, after years of not working hard, I am trying to regain the academic vigor that I have neglected. I am working to educate myself in quantum physics, cryonics, philosophy and AI, as I think these are all important issues that I don't know a lot about.
I value truth, as I think it is always for the best in the long run. I believe "That which can be destroyed by the truth should be" is valid, and I try to follow the twelve virtues of rationality. I'm sure I value a lot of other things but haven't thought about them much.
My journey as a rationalist has been an interesting one. I have identified as a rationalist for as long as I can remember. However, I have found that being rational hasn't gone over well with my more emotionally inclined friends: being logical is off-putting to them, and they don't like being shown when they are wrong. This made me question whether rationality was a good thing in my life and whether it was beneficial to me, which prompted me to reassess rationality and read more on Less Wrong. This reaffirmed my opinion that rationality is a good thing and that I should work on not offending people.
That is all I feel like writing for the moment as that took me a few hours. Hopefully that is enough of an introduction. If there is anything you think that I should read, please link me. I always want to learn new things. I look forward to engaging in discussion with many of you. Kind Regards,
Fluchess
Replies from: kilobug↑ comment by kilobug · 2012-02-02T13:01:23.659Z · LW(p) · GW(p)
Hi, welcome to Less Wrong !
For things that you should read, I can give you three very classical (for LW) hints :
The book Goedel, Escher, Bach: an Eternal Golden Braid by Douglas Hofstadter.
Read the Sequences, they are really worth it.
The Less Wrong list (started by lukeprog, expanded by others) of the best textbooks on every subject
↑ comment by Jayson_Virissimo · 2012-02-02T13:18:12.096Z · LW(p) · GW(p)
Thinking, Fast and Slow by Daniel Kahneman should be part of the Less Wrong canon by now.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2021-05-06T21:47:58.869Z · LW(p) · GW(p)
Actually, several of the chapters of this book are very likely completely wrong and the rest are on shakier foundations than I believed 9 years ago (similar to other works of social psychology that accurately reported typical expert views at the time). See here for further elaboration.
I'm on the fence about recommending this book now, but please read skeptically if you do choose to read it.
comment by William_Kasper · 2011-12-29T13:41:51.145Z · LW(p) · GW(p)
Hello. I'm William. I am a thirty year-old undergraduate student in the University of Wisconsin--Madison's Industrial and Systems Engineering department, with some additional study in Computer Science.
The study of logic and rational thought have always been hobbies of mine. My interest in mathematical optimization techniques has also been developing for decades, but this interest in these dark arts started taking steroids when I realized simple ways to apply the techniques to video games and Poker.
I originally stumbled upon this site two years ago, while Google searching various paradoxes. The Allais Paradox was the first post that I read here. I was outraged upon reading it. I checked a few other sources to see what they had to say about Allais. The worst was a video posted by some ponytail-having MBA, because of his poor choice of words in describing the paradox (e.g. "wrong" instead of "inconsistent").
I have reread the Allais post about a dozen times since then. It no longer outrages me.
The interesting subject matter brought me here originally.
The intuitive and elegant explanations of the subject matter brought me back.
The shockingly high level of class and quality in comments and discussion has made this site one of my favorites.
Replies from: orthonormal↑ comment by orthonormal · 2012-01-04T01:01:15.888Z · LW(p) · GW(p)
There's a good regular meetup here in Madison; you should check it out!
And yeah, the Allais paradox was one of the biggest "wow" moments for me with the original sequences, too.
comment by RogerG · 2012-08-21T14:42:45.971Z · LW(p) · GW(p)
Hi Everyone,
I came across this website, LessWrong, from a philosophy forum. I'm new to this type of thing. I'm not a writer, nor a philosopher, but only someone that is interested in knowing the real truths, whether good, bad, or ugly. It appears to me that most people seem to believe in that which is most palatable to them, that which makes them feel best. I think I am different.
As I see it, all of reality exists ‘only’ from within my mind. All that I know about ‘anything’ comes from the thoughts and feelings within my mind. Without thoughts and feelings, I don’t really exist, or at least I wouldn’t know it if I did. The only reference point from which to experience reality is within my mind, and nowhere else. That is all I have to work with. There are very many things in life to ponder deeply upon, many of which I have already jumped into. But now I must get out and relook at where I am jumping. Before jumping into any of these again, it makes sense that I back up, way back, to the pondering machine itself: my mind. If reality truly is a figment of my mind, then it makes sense to ‘first’ try to understand and validate my mind (thoughts and feelings) before jumping into the middle of trying to understand any of life’s big (or small) questions. How do they (thoughts and feelings) come about? Where do they come from? Can I trust them? If these cannot be trusted, then it would be truly senseless for me to try to understand anything. Should we just trust our thoughts and feelings without question? Why? Or are these fair game to be studied? Since there is a large variety of views, understandings, and beliefs among many people on many questions in life, it seems obvious to me that not everyone’s thoughts and feelings are valid. Whose are valid, whose are not?
Anyways, I'm hoping to learn lots from you all, --RogerG
comment by Xenophon · 2012-07-22T05:19:40.976Z · LW(p) · GW(p)
Hey Less Wrong,
My name is Wes von Hochmuth and I am a 21-year-old college junior at the University of Puget Sound in Tacoma, Washington, near Seattle. I'm studying History and Neuroscience with a minor in Computer Science. It's an interesting combination. To tell you the truth, I am studying History because of my interest in Futurology and Futurism. Details aside, a thesis of mine has been: "If we study the past with history, how is it we study the future, history's opposite?"
This interest has driven me to start a community of my own, called r/Futurology under the name Xenophon1 on Reddit.com. Please check us out and subscribe,
http://www.reddit.com/r/Futurology/
I've a passion for everything from A.I. to Transhumanism, Singularitarianism, and Futurism. We've grown exponentially in just 6 months, following Singularity-like growth curves. I am always looking for mods and help running the sub, so PM me if you're at all interested. I am an Oakland, CA, native, and I recently applied for an internship at the Singularity Institute for this summer; I'm crossing my fingers. I have read through most of the sequences and a lot of the historical best posts, and look forward to showing up at the Wednesday Berkeley LW meetups soon.
I discovered LW through a random friend on Reddit, who I noticed was intelligent, and asked him to be a moderator on Futurology. It turns out he is an intern at The Millennium Project for Global Futures Studies and Research.
It's a pleasure and an honor to be a part of the LW community, amongst rationalists and a lot of brilliant people.
Our human future is infinite,
Wes
comment by Evercy · 2012-06-24T01:06:37.525Z · LW(p) · GW(p)
Hello!
I am a university student studying biology in Ontario. I actually knew about LessWrong for a few years before I joined. My good friend likes to share interesting things that he finds on the internet, and he has linked me to this site more than once. Over time, LessWrong has grown increasingly relevant to my interests. Right now, I'm mainly reading posts and dabbling in the sequences. But I hope that I will be able to contribute some ideas in posts or comments once I get used to how things work around here. Some things that interest me are rhetoric, anthropology, software engineering, cloning and transhumanism. Oh, and biology of course, since that is my field of study (but something about NEEDING to study it, instead of voluntarily doing so, diminishes my enthusiasm for it, haha). I hope I'll get to know you all better!
comment by Blackened · 2012-06-12T00:59:38.722Z · LW(p) · GW(p)
Hello, LessWrong. I'm 20 years old, originally from Bulgaria, living and studying Software Engineering in London (I just finished my first year). I have always wanted to know a lot about human thinking, because of my need to be as optimal as possible, plus my interest in technical things, plus my tendency to seek rigorous explanations. I still have a deep interest in psychology, and I see some potentially very powerful applications that I'd feel inefficient without. The second thing I love is programming.
As a rationalist, I'm very strict with myself. I always go for the best expected outcome, which usually leads me to sacrifice whatever brings short-term pleasure and happiness in favor of self-improvement (my time management, too, is built with this in mind). Despite that, I'm usually quite happy in life. My ideal day is spent reading, studying and practicing programming, and maybe exercising. Unfortunately, I can't even spend half of my day so efficiently because of procrastination (by the way, I'm writing this in the efficient part of today :D), but I'm gradually overcoming it, and I'm putting a lot of effort into battling it. While still battling it, I can use LessWrong a lot, as it's productive and fun; hopefully it'll replace less efficient activities.
Google brought me here - I was reading Heuristics and Biases: The Psychology of Intuitive Judgment (2002) and sought additional information on a certain thingy. It was then that I saw this community and my heart started beating fast - I already had my own idea of rationalism, and I knew a few people who follow it and own it as much as I do (they are also my closest friends). Eventually, I found my idea to be even more extremely rational than this community's. I enjoyed Yudkowsky's Harry Potter a lot and I'm quite similar to Harry Potter, although there are many cases where I consider his actions to be irrational (I'm quite convinced that the author is aware of those, as some of them can even be explained by simple biases). Despite this, I'm very much looking forward to the latest chapters.
I am currently looking forward to meeting any rationalist (online), as I'm looking for exchange of information and I always have tons of questions, and rationalists are expected to have many of the answers I'm seeking, some of them are so hard to get. I have useful information to share as well.
I will also post in "tell your rationalist story".
comment by Salemicus · 2012-05-10T22:10:28.343Z · LW(p) · GW(p)
Hi everyone. I've been lurking here for a couple of years, but decided to register so I could contribute. I work in software and am in my early 30s.
I found this site through overcomingbias, which in turn I came across through the GMU-linked economics blogs. However, I wouldn't describe myself as a rationalist - I find the discussions here interesting, but I think that, by and large, folk wisdom is pretty accurate.
I love the sequences and Eliezer's writings generally - they are what first got me reading the site, and I have been greatly enjoying following the reposts. The ones on zombies in particular have really caused me to re-evaluate my thinking.
Thanks, and look forward to meeting you all!
comment by wirov · 2012-05-07T18:39:57.560Z · LW(p) · GW(p)
Hi, I'm a (white, male) physics student from Germany and 20 years old. My main reason for not believing in any religion is Occam's razor. (I'm not sure whether this makes me atheist or agnostic. Any thoughts on that would be appreciated.)
I stumbled across HPMoR by accident in 2010 and read "Three Worlds Collide" and some other texts on Eliezer's personal website. During 2011, I did some Sequences-hopping (i.e. I started at one article and just followed inline links that sounded interesting, thus causing a tab explosion). I finally registered a few weeks ago to join the recent MoR discussion threads. For the future, I plan to read the Sequences in the intended order (which will probably take me until at least 2013) and to join some other discussions from time to time.
comment by Ben Pace (Benito) · 2012-03-14T22:05:48.676Z · LW(p) · GW(p)
Hello all, I am a man of indiscriminate age (not true) and of indiscriminate gender (also not true). I hope you've learnt a lot about me. I'm curious about nearly all things. Thanks.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-14T22:07:32.727Z · LW(p) · GW(p)
I am a man of ... indiscriminate gender (also not true)
In this case, it's not true by definition :-)
Replies from: None↑ comment by [deleted] · 2012-03-14T22:28:36.525Z · LW(p) · GW(p)
Unless "man" is taken to mean "member of humanity."
Also, gender isn't necessarily the same as biological sex.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-14T22:35:58.620Z · LW(p) · GW(p)
Good point.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2012-06-29T08:44:56.745Z · LW(p) · GW(p)
Well, I am glad we cleared that up. ;-) I'll make sure to remember, in case I ever forget my gender.
comment by blashimov · 2012-03-13T03:06:33.676Z · LW(p) · GW(p)
Hello Everyone,
I heard about this site during my time at Yale as an undergrad; I am now a PhD student in Environmental Engineering at Rice University. I noticed the meetup for Houston seems to have died in May; if that turns out to be true, I would like to start one. I am enjoying HPMOR immensely. I am very interested in raising the sanity waterline, and I am something of a policy wonk. I tend to follow separation of church and state issues, as well as science policy and creationism in the classroom especially. I did read the intro on partisanship in the forums.
Replies from: Cog↑ comment by Cog · 2012-04-30T23:04:55.553Z · LW(p) · GW(p)
Hey Blashimov,
I don't even know if you are still paying attention to LW, but I just found this response. Yes, there is some semblance of a LW group in Houston, mostly focused around the Rice/Med Center area, unsurprisingly. Right now, we're on temporary hiatus. There are three regular members, including myself. We have been going over E. T. Jaynes's Probability Theory: The Logic of Science, but our collective workloads have gone up recently, and we were no longer able to put in the effort required to get something out of it. Hopefully we will get started back again in the summer when we all have more time.
It also looks like there will be a club focused on the singularity and surrounding issues at Rice soon. There are a dozen or so people interested in learning neural networks, computational biology/neuroscience, Bayesian stats, etc. I plan to help with that a lot, and it's a good possibility that a Houston LW group might be tightly integrated with that group. One of our problems was lack of people, and this would be a good pool to draw from. I'll send you more information about it if you want.
Replies from: blashimov
comment by anotherblackhat · 2012-03-01T15:05:36.760Z · LW(p) · GW(p)
So much interesting stuff.
I've been reading through the sequences, and one piqued my desire to post, so I created an account. There are actually many things being discussed here that interest me. I'm not sure I'm a rationalist though, as I believe there are some lies that should be maintained rather than destroyed.
I'm interested in personal identity, not "Quantum Mechanics and Personal Identity", but where does "me" end?
The sound bite is "Am I my hat?" or to be more verbose, is my hat an extension of myself, and thus a part of me.
Some would say "of course not".
If you're thinking that, then imagine I started beating you with my hat.
Would you ask my hat to stop, or would you ask me?
Where do we stop being "us" and start being "them"?
Is our hair part of us? What about when it's cut? What if we weave it into a hat, and does it matter if we cut it first?
Let me be clear, I don't really care how a particular definition of "me" would resolve hair clippings, I'm interested in what definitions people actually use.
↑ comment by [deleted] · 2012-03-01T15:27:49.922Z · LW(p) · GW(p)
I think you need to dissolve the question "Am I my hat?" as well as the "us vs. them" issue.
See points 5, 10, 11, 12, 13, 14, 15, 16, 17 and 29 and then play a game of taboo.
Now, if I am to argue "Am I my hat?" from my world view, I would say that when I remove my hat, pieces of my skin are on the hat, and fibres of the hat are in my hair. That is one point of view.
Now, say you created an exact copy of yourself, or me, or any other hat wearing person, only one of these two identical people was without headwear; would it still be the same person?
When do groups of people begin reacting in a dynamic that might be described as "us vs. them" mentality?
Hope that helps :)
Replies from: anotherblackhat↑ comment by anotherblackhat · 2012-03-02T06:18:04.529Z · LW(p) · GW(p)
I think you need to dissolve the question "Am I my hat?" as well as the "us vs. them" issue.
See points 5, 10, 11, 12, 13, 14, 15, 16, 17 and 29 and then play a game of taboo.
Yes, almost exactly, though perhaps the question to dissolve is "what is me/self/I". "Am I my hat?" is just one, purposely bad, example of trying to do that.
Now, say you created an exact copy of yourself, or me, or any other hat wearing person, only one of these two identical people was without headwear; would it still be the same person?
"making an exact copy" or "exact copy sans hat" seems to require already knowing the answer to "what is me/self/I". I.e. If the definition of "me" includes a specific set of atoms, then it's not even possible, on the other hand if "me" is just a collection of thought processes, then hats are not required. The precision needed is of course, dependent on the definition used, and ultimately the purpose. I'd say I'm most often thinking about it when the idea of making a copy comes up, as in, would that really be a copy.
Hope that helps :)
Indeed, playing Taboo, or rather, thinking about how I would play Taboo was surprisingly helpful.
I've recently tried thinking of myself as a pattern which likes reproducing imperfect copies of itself. This is a goal (I would want more imperfect copies like me developed in the future), not a bug of trying to produce exact copies of myself and failing at it.
Interesting, I never considered defining "me" in terms of the goals I'm attempting to accomplish, yet now that you mention it, it seems obvious. Hindsight bias in action.
And it also brings to mind similar categories:
who I know,
who I'm friends with,
social standing,
ownership (gah, define one horribly fuzzy concept with yet another) ...
HungryTurtle mentioned the roles one plays.
↑ comment by Shephard · 2012-03-04T03:06:24.223Z · LW(p) · GW(p)
Imagine a snowball that's rolling down an infinite slope. As it descends, it picks up more snow, rocks, sticks, maybe some bugs, I don't know. Maybe there are dry patches, too, and the snowball loses some snow. Maybe the snowball hits a boulder and loses half of its snow, and what remains is less than 10% original snow material. But it still can be said to be this snowball and not that snowball because its composition and history are unique to it - it can be identified by its past travels, its momentum, and the resulting trajectory. If this can be taken to be one's life (an analogy that I hope was obvious), then the "I" that we refer to in our own lives isn't even the whole snowball but merely the place where the snowball touches the ground.
↑ comment by [deleted] · 2012-03-02T12:01:31.594Z · LW(p) · GW(p)
What question is left to ask? There are some fibres stuck in your hair that do not originate in your biochemistry, and there are some materials that originate in you stuck in that bundle of fibres that was previously resting on your cranium.
Why is it important to have a sharp definition of "self"? Does that not presume it has intrinsic meaning? What you refer to as "you" is an emergent system that has causes in your entire past light cone and repercussions in your entire future light cone. There is a constant flux of matter and energy surrounding the substrate that runs your consciousness program. It is a continuous construct; there isn't a line to be drawn at all.
↑ comment by [deleted] · 2012-03-01T16:48:59.787Z · LW(p) · GW(p)
I'm not sure I'm a rationalist though, as I believe there are some lies that should be maintained rather than destroyed.
Can you tell me more about this? This statement piqued my curiosity, but I don't know enough about what you meant to ask anything specific, so I'm left with vague questions like "Which lies?" and "Under which circumstances?"
To contribute by answering your question about a definition of "me": I've recently tried thinking of myself as a pattern which likes reproducing imperfect copies of itself. This is a goal (I would want more imperfect copies like me developed in the future), not a bug of trying to produce exact copies of myself and failing at it.
It seems to be working so far, but really, I haven't held the belief long enough for it to be hardened enough to be confident that it's likely to work out in the future at all. I would currently not be very surprised if someone were to say "Actually, Michaelos, that's a flawed way of thinking, and here is a link to why.", and for my reaction to be "Yep, you're correct, I missed something. Let me update that belief." I suppose another way of putting it is that my current belief on that is fresh off the revolutionary and the apologist hasn't had time to come up with a lot of defensive evidence.
I was starting to discuss the "us vs. them" issue as well, but I think MagnetoHydroDynamics's link says it better than I do.
Replies from: anotherblackhat↑ comment by anotherblackhat · 2012-03-02T06:03:57.465Z · LW(p) · GW(p)
I'm not sure I'm a rationalist though, as I believe there are some lies that should be maintained rather than destroyed.
Can you tell me more about this? This statement piqued my curiosity, but I don't know enough about what you meant to ask anything specific, so I'm left with vague questions like "Which lies?" and "Under which circumstances?"
It's a qualification on me: I've decided to join the discussion, but I'm not sure about joining the group. In other words, statements from me shouldn't be viewed as rationalist statements, since I'm (probably) not one. Let me expand:
Some lies are worth preserving; I think a rationalist would be in favor of keeping works of fiction, for example. We all know that Hamlet isn't real. These sorts of lies aren't what I was talking about, since they are not destroyed by the truth. Pointing out that Hamlet is fiction doesn't diminish its value or effectiveness.
I was mainly thinking of "polite" lies that happen in everyday situations:
- Your deformity doesn't bother me.
- Whatever you want, it's all the same to me.
- I'm sure my intentions toward your daughter are every bit as honorable as yours were toward her mother.
Lies that you tell when it's likely that the other person knows you are lying, yet the sentiment behind them is such that we accept them without comment. Sure, you're freaked out by the one-armed man, but you intend to do your best to act as if he were a normal person. Both parties might even be aware that these lies are, in fact, lies, but they both maintain the fiction.
The truth in these situations would be very revealing; in a world where everyone could instantly know the truth of all statements, these lies would be destroyed instantly. We'd know the truth, yet I feel we would not be better off for it.
Then there are lies like "You're a good boy."
More of a wish than a truth. Yet by lying in this way, a parent is hoping to cause it to become true.
A kind of self fulfilling prophecy (they hope).
Those are also lies I think are worth preserving.
Then there are the lies like "Santa Claus is a real person who will bring you presents if you're good." I think destroying this kind of lie would be a good thing, but I'm not certain.
Replies from: katydee↑ comment by katydee · 2012-03-02T09:31:15.295Z · LW(p) · GW(p)
Would you kindly taboo your words (http://lesswrong.com/lw/nu/taboo_your_words) and try posting again? I think that many individuals who describe themselves as rationalists would be in favor of "white lies", and I'm confused as to why you perceive this as a big difference between yourself and the group.
Replies from: anotherblackhat↑ comment by anotherblackhat · 2012-03-03T04:16:02.483Z · LW(p) · GW(p)
I assume you meant "more in the same vein" rather than simply "again".
I perceive this as a difference between myself and the group because of the large number of posts I've read that say rationalists should believe what is true, and not believe what is false. The sentiment "that which can be destroyed by the truth should be" is repeated several times in several different places. My memory is far from perfect, but I don't recall any arguments in favor of lies. You claim most rationalists are in favor of "white lies"? I didn't get that from my reading.
But then I've only started in on the site; it will probably take me weeks to absorb a significant part of it, so if someone can give me a pointer, I'd be grateful.
I am much more inclined to go along with the "rationalists should win" line of thought. I want to believe whatever is useful. For example, I believe that it's impossible to simulate intelligence without being intelligent. I've thought about it, and I have reasons for that belief, but I can't prove it's true, and I don't care. "knowing" that it's impossible to simulate intelligence without being intelligent lets me look at the Chinese Room Argument and conclude instantly that it's wrong. It's useful to believe that simulated intelligence requires actual intelligence. If you want me to stop believing, you need only show me the lie in the belief. But if you want me to evangelize the truth, you'd need to show me the harm in the lie as well.
Santa Claus isn't a white lie. Santa Claus is a massive conspiracy, a gigantic cover-up perpetrated by millions of adults. Lies on top of lies, with corporations getting in on the action to sell products (http://www.snopes.com/holidays/christmas/walmart.asp), a lie that when discovered leaves children shattered, their confidence in the world shaken. And yet, it increases the amount of joy in the world by a noticeable amount. It brings families together; it teaches us to be caring and giving. YMMV of course, but many would consider Christmas utilons > Christmas evilons.
Most importantly, Santa persists. People make mistakes, but natural selection removes really bad mistakes from the meme pool. As a rule of thumb, things that people actually do are far more likely to be good for them than bad, or at least, not harmful. I believe that's a large part of why when theory says X, and intuition says Y, we look very long and hard before accepting that theory as correct. Our intuitions aren't always correct, but they are usually correct. There are some lies we believe intuitively. In the court of opinion, I believe they should be presumed good until proven harmful.
Replies from: TheOtherDave, DSimon↑ comment by TheOtherDave · 2012-03-03T05:35:34.930Z · LW(p) · GW(p)
Well, choosing to believe lies that are widely believed is certainly convenient, in that it does not put me at risk of conflict with my tribe, does not require me to put in the effort of believing one thing while asserting belief in another to avoid such conflict, and does not require me to put in the effort of carefully evaluating those beliefs.
Whether it's useful -- that is, whether believing a popular lie leaves me better off in the long run than failing to believe it -- I'm not so sure. For example, can you clarify how your belief about the impossibility of simulating intelligence with an unintelligent system, supposing it's false, leaves you better off than if you knew the truth?
Replies from: anotherblackhat↑ comment by anotherblackhat · 2012-03-03T17:35:40.509Z · LW(p) · GW(p)
O.k., suppose it's false. Rather than wasting time disproving the CRA, I simply act on my "false" belief and reject it out of hand. Since the CRA is invalid for many other reasons as well, I'm still right. Win.
Generalizing: say I have an approximation that usually gives me the right answer, but on rare occasions gives a wrong one. If I work through a much more complicated method, I can arrive at the correct answer. I believe the approximation is correct. As long as
effort involved in complicated method > cost of being wrong
I'm better off not using the complicated method. If I knew the truth, then I could still use the approximation, but I now have an extra step in my thinking. Instead of:
- Approximate.
- Reject.
it's:
- Approximate.
- Ignore possibility of being wrong.
- Reject.
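The tradeoff being described can be sketched as a small expected-cost comparison (a hypothetical illustration; the effort, error-rate, and cost numbers below are invented, not anything from the thread):

```python
# Hypothetical illustration of the approximation-vs-exact-method tradeoff.
# All numbers are invented for the example.

def expected_cost(method_effort, error_rate, cost_of_being_wrong):
    """Total expected cost of using a method: the effort to run it,
    plus the chance it errs times the price of that error."""
    return method_effort + error_rate * cost_of_being_wrong

# A cheap approximation that is wrong 10% of the time...
approx = expected_cost(method_effort=1, error_rate=0.1, cost_of_being_wrong=10)
# ...versus a laborious exact method that is never wrong.
exact = expected_cost(method_effort=5, error_rate=0.0, cost_of_being_wrong=10)

print(approx)  # 2.0
print(exact)   # 5.0
# With these numbers the approximation wins despite occasionally misleading you.
```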
↑ comment by TheOtherDave · 2012-03-03T21:06:58.848Z · LW(p) · GW(p)
Ah, I see what you mean. Sure, agreed: as long as the false beliefs I arrive at using method A, which I would have avoided using method B, cost me less to hold than the additional costs of B, I do better with method A despite holding more false beliefs. And, sure, if the majority of false-belief-generating methods have this property, then it follows that I do well to adopt false-belief-generating methods as a matter of policy.
I don't think that's true of the world, but I also don't think I can convince you of that if your experience of the world hasn't already done so.
I'm reminded of a girl I dated in college who had a favorite card trick: she would ask someone to pick a card, then say "Is your card the King of Clubs?" She was usually wrong, of course, but she figured that when she was right it would be really impressive.
↑ comment by DSimon · 2012-03-03T04:56:01.557Z · LW(p) · GW(p)
For example, I believe that it's impossible to simulate intelligence without being intelligent. I've thought about it, and I have reasons for that belief, but I can't prove it's true, and I don't care. "knowing" that it's impossible to simulate intelligence without being intelligent lets me look at the Chinese Room Argument and conclude instantly that it's wrong. It's useful to believe that simulated intelligence requires actual intelligence.
That doesn't strike me as being particularly useful. What's so great about the ability to (justify to yourself that it's okay to) skip over the Chinese Room Argument that it's worth making your overall epistemology provably worse at figuring out what's true?
More generally, there's a big difference between lying to yourself and lying to other people. Lying to others is potentially useful when their actions, if they knew the facts, would contradict your goals. It's harder to come up with a case where your actions would contradict your own goals if and only if you're better informed. (Though there are some possible cases, i.e. keeping yourself more optimistic and thus more productive by shielding yourself from unhappy facts).
Replies from: anotherblackhat↑ comment by anotherblackhat · 2012-03-03T17:56:01.214Z · LW(p) · GW(p)
What's so great about the ability to (justify to yourself that it's okay to) skip over the Chinese Room Argument that it's worth making your overall epistemology provably worse at figuring out what's true?
Nothing.
Can you actually prove it's worse, or were you just asking a hypothetical?
More generally, there's a big difference between lying to yourself and lying to other people. Lying to others is potentially useful when their actions, if they knew the facts, would contradict your goals. It's harder to come up with a case where your actions would contradict your own goals if and only if you're better informed. (Though there are some possible cases, i.e. keeping yourself more optimistic and thus more productive by shielding yourself from unhappy facts).
Yes, the thing I'm not sure of (and note, I'm only unsure, not certain that it's false) is the idea that believing a lie is always bad.
"Clap your hands if you believe" sounds ridiculous, but placebos really can help if you believe in them; we have proof.
But this is not a certain thing. That I can cherry-pick examples where being "wrong" in one's beliefs has a greater benefit means very little. The bottom of the cliffs of philosophy is littered with the bones of exceptionally bad ideas. We are certainly worse off if we believe every lie, and there may well be no better way to determine good from bad than rationality. I'm just not certain that's the case.
Replies from: DSimon↑ comment by DSimon · 2012-03-03T21:05:05.035Z · LW(p) · GW(p)
Can you actually prove [my epistemology is] worse [at figuring out what's true], or were you just asking a hypothetical?
No, I can prove that, provided that I'm understanding correctly what approach you're using. You said earlier:
I've thought about it, and I have reasons for [believing that a non-intelligence cannot simulate intelligence], but I can't prove it's true, and I don't care.
By "don't care" I take it that you mean that you will not update your confidence level in that belief if new evidence comes in. The closer you get to a Bayesian ideal, the better you'll be at getting the highest increases in map accuracy out of a given amount of input. By that criteria, updating on evidence (no matter how roughly) is always closer than ignoring it, provided that you can at least avoid misinterpreting evidence so much that you update in the wrong direction.
That's the epistemological angle. But you also run into trouble looking at it instrumentally:
In order for you to most effectively update your beliefs in such a way as to have the beliefs that give you the highest expected utility, you must have accurate levels of confidence for those beliefs somewhere! It might be okay to disbelieve that nuclear war is possible if the thought depresses you and an actual nuclear war is only 0.1% likely; however, if it's 90% likely and you assign any reasonable amount of value to being alive even if depressed, then you're better off believing the truth because you'll go find a deep underground shelter to be depressed in instead of being happily vaporized on the surface!
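The nuclear-war example above can be made concrete as an expected-utility comparison (the probabilities and utilities below are invented for illustration):

```python
# Expected-utility version of the nuclear-war example above.
# Probabilities and utilities are invented for illustration.

def expected_utility(p_war, utility_if_war, utility_if_no_war):
    """Probability-weighted average utility over the two outcomes."""
    return p_war * utility_if_war + (1 - p_war) * utility_if_no_war

p = 0.9  # the "war is 90% likely" case from the comment

# Disbelieving keeps you happy on the surface: fine if no war (+10),
# fatal if war (-1000).
disbelieve = expected_utility(p, utility_if_war=-1000, utility_if_no_war=10)
# Believing the truth sends you to a gloomy shelter: mildly unpleasant
# either way (-5), but you survive.
believe = expected_utility(p, utility_if_war=-5, utility_if_no_war=-5)

print(round(disbelieve, 1))  # -899.0
print(round(believe, 1))     # -5.0
# At 90% likelihood, the comfortable false belief loses badly.
```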
Having two separate sets of beliefs like this is just asking to walk into lots of other well-known problematic biases; most notably, you are much more likely in practice to simply pick between your true-belief set and your instrumental-belief set depending on which seems most emotionally and socially appropriate moment-to-moment, rather than (as would be required for this hack to be generally useful) always using your instrumental-beliefs for decision-making and emotional welfare but never for processing new evidence.
All that said, I agree with your overall premise: there is nothing requiring that true belief always be better than false belief for human welfare. However, it is better much more often than not. And as I described above, maintaining two different sets of beliefs for different purposes is more apt to trigger standard human failure modes than just having a single set and disposing of cognitive dissonance as much as possible. Given all that, I argue that we are best off pursuing a general strategy of truth-seeking in our beliefs except when there is overwhelming external evidence for particular beliefs being bad; and even then, it's probably a better strategy overall to simply avoid finding out about such things somehow than to learn them and try to deceive yourself afterwards.
Replies from: anotherblackhat↑ comment by anotherblackhat · 2012-03-04T17:08:56.689Z · LW(p) · GW(p)
I'm not sure I understand.
The reason I like that particular belief is because it lets me reject false beliefs with greater ease.
If holding a belief reduces my ability to do that, then is it of necessity, false?
Wouldn't that mean that my belief must be true?
Replies from: DSimon↑ comment by DSimon · 2012-03-04T23:32:39.826Z · LW(p) · GW(p)
The reason I like that particular belief is because it lets me reject false beliefs with greater ease.
How do you know those propositions being rejected are false?
If it's because the first belief leads to that conclusion, then that's circular logic.
If it's because you have additional evidence that the rejected propositions are false, and that their falseness implies the first belief's trueness, then you have a simple straightforward dependency, and all this business about instrumental benefits is just a distraction. However, you still have to be careful not to let your evidence flow backwards, because that would be circular logic.
Replies from: anotherblackhat↑ comment by anotherblackhat · 2012-03-05T02:01:52.150Z · LW(p) · GW(p)
I don't know that the propositions being rejected are false any more than I know that the original proposition is true.
But I do know that in every case where I went through the long and laborious process of analyzing the proposition, it worked out the same as if I had just used the shortcut of assuming my original proposition is true. It's not just some random belief; it's field tested. In point of fact, it's been field tested so much that I now know I would continue to act as if it were true even if evidence were presented that it was false. I would assume that it's more likely that the new evidence was flawed until the preponderance of the evidence was just overwhelming, or somebody supplied a new test that was nearly as good, and provably correct.
Replies from: DSimon↑ comment by DSimon · 2012-03-05T03:37:06.097Z · LW(p) · GW(p)
That sounds pretty good then. It's not quite at a Bayesian ideal; when you run across evidence that weakly contradicts your existing hypothesis, that should result in a weak reduction in confidence, rather than zero reduction. But overall, requiring a whole lot of contradictory evidence in order to overturn a belief that was originally formed based on a lot of confirming evidence is right on the money.
Actually, though, I wanted to ask you another question: what specific analyses did you do to arrive at these conclusions?
↑ comment by HungryTurtle · 2012-03-01T15:47:11.602Z · LW(p) · GW(p)
I am also very interested in the question of personal identity. However, I tend to phrase it as a question of "self" rather than identity. Within sociology and social psychology, "identity" usually refers to a specific role a person dons in a particular setting, while "Identity" is the totality of the roles they contain in their cognitive wardrobe, the process by which they create/delete identities, and the apparatus for choosing to take on an identity.
I also have much to say about hats, but I would like to hear what you think of the above ideas before I continue.
comment by Arran_Stirton · 2012-02-24T05:13:12.831Z · LW(p) · GW(p)
Hullo, I've been lurking around for quite a while after being introduced to LessWrong through the well-trodden HPMoR route.
I'm rather awful at this sort of thing, so here are my vital statistics: I'm 18, male, and live in the East Midlands region of the United Kingdom. The subject I pursue academically is Physics; however, the scope of my interests is far larger and not worth detailing. Though I will say that working out what the ideal political system would look like is high on my to-do list.
More recently I've been toying with the idea of making a couple of discussion posts on Pascal's Mugging and objectively calculating utility relative to a clearly defined goal. However the fear of plunging into karma-oblivion and a distinct lack of people who I can talk to in order to test the validity of my ideas has thus far prevented me. So as a compromise I figured I'd slyly slip those topics into my (overdue) introduction in order to test just how volatile those waters are.
Aside from that, I'm glad to be here, the sequences are awesome and the community brilliant.
Replies from: None↑ comment by [deleted] · 2012-03-01T15:51:47.704Z · LW(p) · GW(p)
Hello there, welcome to the LW community.
The subject I pursue academically is Physics,
It sounds like you have a good starting point, and I recon if you have the mathematics for it you'll love The Quantum Physics Sequence.
Though I will say that working out what the ideal political system would look like is high on my to-do list.
You might want to look at Politics is the Mind Killer.
Replies from: Arran_Stirton↑ comment by Arran_Stirton · 2012-03-03T00:35:25.437Z · LW(p) · GW(p)
Thank you for the advice, it's appreciated, although I've already read Politics is the Mind Killer and I'm about halfway through the Quantum Physics Sequence (your reckoning is incredibly accurate, I do love it; someday I will marry that sequence).
You wouldn't happen to have the maths for that sort of thing yourself, would you? It's just that I'm in search of an intellectual of similar/superior capabilities who might be willing to expend some of their time to test the veracity, validity, and virtues of some of my ideas. Though I've nothing to offer in return for such a service other than the value of the task itself, if you're interested such help would be greatly appreciated.
[Edit: Clarified]
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-03T00:47:29.013Z · LW(p) · GW(p)
I hate to sound trite, but you could also try taking a few math and physics classes, or perhaps online equivalents thereof (perhaps somewhere like the Khan Academy, though I haven't looked at their physics videos myself and cannot endorse them). There's nothing wrong with reading articles and listening to advice, but nothing beats doing the work yourself. Well, at least it has been helpful for me personally; YMMV.
Replies from: Arran_Stirton↑ comment by Arran_Stirton · 2012-03-03T01:48:54.213Z · LW(p) · GW(p)
My apologies, I must have mis-conveyed my meaning.
I am in fact taking the largest number of classes at my disposal for both mathematics and physics. My main reason for liking the Quantum Physics Sequence is more to do with the depiction of quantum mechanics as not strange.
I think the problem here is the accidental omission of the word "my" when mentioning the testing of ideas. The ideas in question are my own, and I'm not looking for someone else to do the work for me. I've already done the work to some extent, but regardless of how many times I examine my arguments, I cannot be certain I am right. Even after getting a second opinion I won't be able to be entirely certain, but I will have more evidence for (or, as it may be, against) the "I haven't made a massive error" model of reality. (The idea being to counteract any biases/ignorances I have that I may be unaware of.)
But yes, although I do not feel the general notion of your comment is applicable to myself, I do agree with it in principle.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-03T02:15:48.838Z · LW(p) · GW(p)
In that case, you could consider writing up your ideas and posting them as an article for discussion on this site. While I personally am probably underqualified to judge your work, others here should be more than capable of doing so. You could also submit your writeup to some journal, or a math-heavy forum or, heck, even Slashdot.
Replies from: Arran_Stirton↑ comment by Arran_Stirton · 2012-03-03T03:12:15.029Z · LW(p) · GW(p)
Sorry if my response seemed a bit indignant, I didn't mean to come across that way.
At the moment I'm just trying to find some way to safely test the aforementioned ideas; I'm worried I'm wrong, very worried. Hence I don't want to waste too many people's time for no good reason, and I fear I've already wasted quite a bit of yours. That would be why I haven't already posted an article to the discussion section.
The arguments in question are not purely math; although there is math involved, it's nothing particularly complex. My main problem is in finding someone who understands the problem (Pascal's Mugging) well enough to understand the background to my argument. One of the avenues of action I decided to take was finally posting my introduction here and mentioning it in that.
Just to clarify, I'm 18 and very aware of my capacity for being wrong which is why I'm not even posting a writeup to LessWrong yet. My experience is far too limited for me to be able to accurately ascertain my validity. Submitting a writeup to a journal would be madness. Worse than that even, it would be crackpottery.
Again, sorry for the indignation. I'm working on it.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-03T09:51:51.776Z · LW(p) · GW(p)
Sorry if my response seemed a bit indignant, I didn't mean to come across that way.
No worries, you didn't come off that way at all !
I'm worried I'm wrong, very worried. Hence I don't want to waste too many people's time for no good reason ...
Well, it's not like you're going to force anyone to read your paper. If they want to read it, they will; if not, they won't. If your paper ends up being very boring or something, I'd imagine that people will read it halfway and then stop (then vote it down, c'est la vie).
So, your risks here are minimal (you lose some karma points), but your potential reward is great: you have a very good chance of finding out whether you're actually wrong or not; and if so, what exactly you're wrong about. That's worth some effort, I think.
comment by Crystalist · 2012-08-03T09:57:07.249Z · LW(p) · GW(p)
Hi all,
Long time lurker, first time poster. I've read some of the Sequences, though I fully intend to re-read and read on.
I'm an undergrad at present, looking to participate in a trend I've been observing that's bringing some of the rigor and predictive power of the hard sciences to linguistics.
I'm particularly interested in how language evolved, and under what physical/biological/computational constraints; what that implies about the neural mechanisms behind human behavior; and how to use those two to construct a predictive and quantitative theory of linguistic behavior.
I go to a Liberal Arts college (I started out with a bit more of a Lit-major bent), where, after becoming disillusioned with the somewhat more philosophical side of linguistics (mid-term, no less), I ended up taking an extracurricular dive into the physical sciences just to stay sane. Then a friend recommended HPMOR, and thence I discovered LessWrong, where I've been happily lurking for some time.
I decided it would be useful to actually participate. So here I am.
comment by palguay · 2012-07-10T05:57:44.465Z · LW(p) · GW(p)
Hi everyone! I stumbled upon this website while reading a comment on Reddit. I am a programmer living in India; I came back to India in March after living in the US for six years.
I am interested in cognitive psychology and have started working on a pet project of mine to implement the various cognitive tasks available on commercial websites in my own website http://brainturk.com .
I hope to contribute to some discussions and learn from others here.
comment by Lukas_Gloor · 2012-06-10T22:30:20.725Z · LW(p) · GW(p)
Hi! I discovered LW about a year ago and have now actually created an account. I study philosophy, with biology as a minor. Sometimes I'm rather shocked by the things my fellow students believe and how they argue for their beliefs; I wish something like LW were part of the standard curriculum. My main interests are ethics, philosophy of mind, and evolutionary biology, and I'm looking forward to participating in discussions on these issues. Especially on ethics, as I'm skeptical regarding some of the views advocated on here (I'm a utilitarian). As someone who has read the original books several times, I was also delighted to find out about HPMoR recently.
comment by Worthstream · 2012-05-30T15:24:21.744Z · LW(p) · GW(p)
Hi, Worthstream here. I'm from Italy, as you will no doubt notice from my unusual choice of words. (Europeans tend to overuse Latin-derived words, in my experience.)
I graduated in computer science and currently work as a web programmer, the kind of technical background I think is quite common here, judging by the number of useful applets and websites built by community members (Beeminder, just to name the first that comes to mind).
I'm a regional coordinator of the Italian Mensa, a society I joined thinking that I would find a lot of rational people. That assumption has been proved false: Mensa members are not appreciably more rational than the rest of the population.
While I usually like neither fanfiction nor Harry Potter, HP:MoR is one of the best books I've read. I'm actively trying to get my friends to read it.
If I remember correctly, I found LW by looking for akrasia and time management advice, since I'm really interested in self-improvement. I remember reading some articles I found interesting, then following the links to other posts, and the links in those posts too... and suddenly I had an enormous backlog of articles to read!
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-30T16:52:41.787Z · LW(p) · GW(p)
found LW by looking for akrasia and time management advice [...] and suddenly i did have an enormous backlog of articles to read!
* raises finger *
* opens mouth *
* closes mouth *
* lowers finger *
Hi, Worthstream. Welcome to LW!
Yeah, CS backgrounds are pretty common here, as is being disappointed by Mensa, liking HP:MoR, and an ongoing struggle with managing the shiny distractions of the Internet.
comment by Jakinbandw · 2012-05-23T21:55:22.524Z · LW(p) · GW(p)
Hello. I come from HPMoR. I identify as Christian, though my belief and reasons for belief are a bit more complex than that. I'll probably do a post on that later in 'how to convince me 2+2=3'. I also get told that I over think things.
Anyway, that's not the reason I joined. I was reading an article by Eliezer Yudkowsky in which he stated that whatever can be destroyed by truth should be. This got me wondering in what context that was meant. My first thought was that it meant we should strive to destroy all false beliefs, which has the side effect of not lying, but then I began to wonder if it wasn't something more personal: we should strive to let the truths we observe destroy any beliefs that they are able to.
I realized that the difference between the two is that one is an end in and of itself (destroy all false beliefs), and one is a means to achieve a goal more effectively (don't hold on to a false belief once it has been proved false). I am really not sure how I feel about the first one; it seems very confrontational to no good purpose. There are a lot of false beliefs out there that people hold dear. However, the second one is strange as well.
One of people's goals is to be happy. Now, there is an old saying that ignorance is bliss. While this is definitely not always a good policy, I can think of several cases off the top of my head where a person would be happier with a false belief than with reality. For example, what if everything that is happening to you right now is your mind constructing an elaborate fantasy to stop you from realizing that you are slowly being tortured to death? If you break free of that belief you are not happy, and you can do nothing to save yourself. The goal of being happy is actively opposed by the goal of learning the truth. [Disclaimer: I've read about the mind constructing such fantasies in books, and have experienced it only once in my life, to a limited degree, when I was being beaten up as a child. I don't know how scientifically accurate they are. This is just an example, and if necessary I can come up with another one.]
So probably that wasn't what Mr. Yudkowsky meant when he said that what can be destroyed by truth should be (and if it is, can someone explain to me why?). So what does it mean? I've run out of theories here.
Replies from: TimS, electricfistula, Jakinbandw, CWG↑ comment by TimS · 2012-05-24T00:43:46.936Z · LW(p) · GW(p)
Welcome to LessWrong. There's a sizable contingent of people in this community who don't think that uncomfortable truths need be confronted. But I think they are wrong.
As you say, one purpose of believing true things is to be better at achieving goals. To exaggerate slightly, if you believe "Things in motion tend to come to a stop," then you will never achieve the goal of building a rocket to visit other planets. You might respond that none of your actual goals are prevented by your false beliefs. But you can't know that in advance unless you know which of your beliefs are false. That's not belief, that's believing that you have a belief. And adjusting your goals so that they never are frustrated by false beliefs is just a long-winded way of saying Not Achieving Your Original Goals.
In theory, there might be a time when you wouldn't choose differently with a true belief than with a false belief. I certainly don't endorse telling an imminently dying man that his beloved wife cheated on him years ago. But circumstances must be quite strange for you to be confident that your choices won't change based on your beliefs. You, the person doing the believing, don't know when you are in situations like that because - by hypothesis - you have an unknown false belief that prevents you from understanding what is going on.
↑ comment by electricfistula · 2012-05-23T22:36:14.019Z · LW(p) · GW(p)
Hi, I joined just to reply to this comment. I don't think there is a lot of complexity hidden behind "whatever can be destroyed by truth should be". If there is a false belief, we should try to replace it with a true one, or at least a less wrong one.
Your argument that goes "But what if you were being tortured to death" doesn't really hold up, because that argument can be used to reach any conclusion. What if you were experiencing perfect bliss, but then your mind made up an elaborate fantasy which you believe to be your life? What if there were an evil and capricious deity who would torture you for eternity if you chose Frosted Flakes over Fruit Loops for breakfast? These kinds of "What if" statements, followed by something of fundamentally unknowable probability, are infinite in number and could be used to reach any conclusion you like; therefore, they don't recommend any conclusion over any other. I don't think it is more likely that I am being horribly tortured and fantasizing about writing this comment than that I am in perfect bliss and fantasizing about this, and so this argument does nothing to recommend ignorance over knowledge.
In retrospect (say it turns out I am being tortured) I may be happier in ignorance, but I would be an inferior rationalist.
I think this applies to Christianity too. At the risk of being polemical, say I believed that Christianity is a scam whereby a select group of people convince the children of the faithful that they are in peril of eternal punishment if they don't grow up to give 10% of their money to the church. Suppose I think that this is harmful to children and adults. Further, suppose I think the material claims of the religion are false. Now, you on the other hand suppose (I assume) that the material claims of the religion are true and that the children of the faithful are being improved by religious instruction.
Both of us can't be right here. If we apply the saying "whatever can be destroyed by truth should be" then we should each try to rigorously expose our ideas to the truth. If one of our positions can be destroyed by the truth, it should be. This works no matter who is right (or if neither of us are right). If I am correct, then I destroy your idea, you stop believing in something false, stop assisting in the spread of false beliefs, stop contributing money to a scam, etc. If you are right then my belief will be destroyed, I can gain eternal salvation, stop trying to mislead people from the true faith, begin tithing etc.
In conclusion, I think the saying means exactly what it sounds like.
Replies from: Bugmaster, Desrtopa, Jakinbandw↑ comment by Bugmaster · 2012-05-23T23:09:36.559Z · LW(p) · GW(p)
These kinds of "What if" statements followed by something of fundamentally unknowable probability...
Minor nitpick: these statements have a very low probability of being true due to the lack of evidence for them, not an unknowable probability of being true as your sentence would imply.
This works no matter who is right (or if neither of us are right).
Ok, but what about unfalsifiable (or incredibly unlikely to be falsified) claims? Let's imagine that I am a religious person, who believes that a) the afterlife exists, and b) the gods will reward people in this afterlife in proportion to the number of good deeds each person accomplished in his earthly life. The exact nature of the reward doesn't matter; whatever it is, I'd consider it awesome. Furthermore, let's imagine that I believe c) no objective empirical evidence of this afterlife and these gods' existence could ever be obtained; nonetheless, I believe in it wholeheartedly (perhaps the gods revealed the truth to me in an intensely subjective experience, or whatever). As a direct result of my beliefs, d) I am driven to become a better person and do more good things for more people, thus becoming generally nicer, etc.
In this scenario, should my belief be destroyed by the truth?
Replies from: electricfistula, TimS↑ comment by electricfistula · 2012-05-23T23:45:23.084Z · LW(p) · GW(p)
Suppose we are neighbors. By some mixup, the power company is combining my electric bill to your own. You notice that your bill is unusually high, but you pay it anyway because you want electricity. In fact, you like electricity so much that you are happy to pay even the high bill to get continued power. Now, suppose that I knew all the details of the situation. Should I tell you about the error?
I think this case is pretty similar to the one you've described about the religion that makes you do good things. You pay my bill because you want a good for yourself. I am letting you incur a cost that you may not want to incur, because it will benefit me.
I think in the electricity example I have some moral obligation to tell you our bills have been combined. I think this carries over to the religious example. There is a real benefit to me (and to society) to let you continue to labor under your false assumption that doing good deeds would result in magic rewards, but I still think it would be immoral to let this go on. I think the right thing to do would be to try and destroy your false belief with the truth and then try to convince you that altruism can be rewarding in and of itself. That way, you may still be an altruist, but you won't be fooled into being one.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-05-24T00:14:04.967Z · LW(p) · GW(p)
I think this case is pretty similar to the one you've described about the religion that makes you do good things.
Not entirely. In your example, the power bill is a zero-sum game; in order for you to gain a benefit (free power), someone has to experience a loss (I pay extra for your power in addition to mine). Is there a loss in my scenario, and if so, to whom?
There is a real benefit to me (and to society) to let you continue to labor under your false assumption that doing good deeds would result in magic rewards, but I still think it would be immoral to let this go on.
Why do you think this would be immoral? I could probably make a consequentialist argument that it would in fact be moral, but perhaps you're using some other moral system?
Replies from: electricfistula↑ comment by electricfistula · 2012-05-24T00:36:52.911Z · LW(p) · GW(p)
someone has to experience a loss (I pay extra for your power in addition to mine). Is there a loss in my scenario, and if so, to whom
The cost is to you. You are the one doing good deeds. I consider the time and effort (and money) you expend doing good deeds for other people to be the cost here.
Why do you think this would be immoral?
My feeling is that this is an implicit corruption of your free will. You aren't actually intending to pay my power, you are just doing it because you don't realize you are. Similarly, in the religion example, what you actually intend to do is earn your way into heaven (or pay for your own power) but what you are actually doing is hard work to benefit others and you won't go to heaven for it (paying for my electricity).
I don't have the time to fully divulge my moral system here, but I think there is a class of actions which reduce the free will of other people. At the very extreme end of this class would be slavery "Do my work or I'll hurt or kill you". At the opposite end of the spectrum (but still a member of the same class) is something like letting people serve you, when they don't intend to, because of a lie by omission.
One of the things I respect and value about human beings is their free will. By diminishing the free will of other people I would be diminishing the value of other human beings and I am calling that "immoral behavior". This, I think, is why it is immoral to let you believe a lie which hurts you even if it helps me.
We might all benefit if we tricked Mark Zuckerberg into paying our power bills. He could afford to do so and to go on doing his thing, and we would all be made better off. So, should we do so? If we should, why should we stop at the power bill? Why should we limit ourselves to tricking him? Why not just compel him through force?
Replies from: Bugmaster↑ comment by Bugmaster · 2012-05-24T02:24:19.304Z · LW(p) · GW(p)
The cost is to you. You are the one doing good deeds. I consider the time and effort (and money) you expend doing good deeds for other people to be the cost here.
Ah, I understand, that makes sense. In this case, the magnitude of the net loss/gain depends on whether "become a better person" is one of my goals. If it is, then belief in this kind of afterlife basically acts as a powerful anti-akrasia aid, motivating me to achieve this goal. In this scenario, would you say that taking this tool away from me would be the right thing to do?
My feeling is that this is an implicit corruption of your free will.
What do you mean by "free will"? Different people use this term to mean very different things.
Why not just compel him through force?
This is different from tricking him [1]. When we trick someone in the manner we were discussing (i.e., by conning him), we aren't just taking away his stuff -- we are giving him happiness in return. By contrast, when we take his stuff away by force, we're giving him nothing but pain. Thus, even if we somehow established that conning people is morally acceptable, it does not follow that robbing them is acceptable as well.
[1] As Sophie from Leverage points out in one of the episodes.
Replies from: electricfistula↑ comment by electricfistula · 2012-05-24T03:05:24.422Z · LW(p) · GW(p)
then belief in this kind of afterlife basically acts as a powerful anti-akrasia aid, motivating me to achieve this goal
This depends very much on what you mean by "better person". Returning a lost wallet because you know the pain of losing things and because you understand the wallet's owner is a sapient being who will experience similar pain is the kind of thing a good person would do. Returning a lost wallet because you expect a reward is more of a morally neutral thing to do. So, if you are doing good deeds because you expect a heavenly reward then you aren't really being a good person (according to me) - you are just performing actions you expect to get a reward for. I think this belief actually prohibits you from being a good person, because as long as you believe in it you can never be sure whether you are acting out of a desire to be good or out of a desire to go to heaven.
In this scenario, would you say that taking this tool away from me would be the right thing to do?
I would. If you use this belief to trick yourself into believing you are a better person (see above) then this is just doubling down for me. False beliefs should be destroyed by the truth. I should first destroy the belief in the heavenly reward for good deeds and then let the truth test you. Do you still do good things without hope of eternal reward? If yes, then you are a good person. If not, then you aren't and you never were.
What do you mean by "free will"?
By "free will" I mean a person's ability to choose the option they most prefer. So, if I tell my friend I want to eat at restaurant X - I don't think I'm inhibiting his free will. I do hope I'm influencing his preferences. I assume somewhere in his decision making algorithm is a routine that considers the strength of preferences of friends and that evaluation is used to modify his preference to eat at restaurant X. I do think I'd be inhibiting his free will if I were to say falsely that "Well, we can't go to Y because it burned down" (or let him continue to believe this without correcting him). I am subverting free will by distorting the apparent available options. I think this also fits if you use threat of harm ("I'll shoot you if we don't go to X") to remove an option from someone's consideration.
by conning him), we aren't just taking away his stuff -- we are giving him happiness in return
I know a mentally handicapped person. I think it's very likely I could trick this person out of their money. I could con him with a lie that is very liable to make him happy, but would result in me getting all of his money and his stuff. What is your moral evaluation of this action?
It seems to me, if it is possible to trick Zuckerberg into paying my power bill then it is possible because he is gullible enough to believe my con. If it is possible for me to trick the mentally disabled, then it is possible because they are gullible enough for me to con. So, I don't see why there should be any moral difference between tricking the mentally disabled out of their wealth and tricking Zuckerberg out of his. Nigerian email scams should be okay too, right?
I suppose there is some difference here in that Zuckerberg could afford to be conned out of a power bill or two whereas the average Nigerian scam victim cannot. I interpret this difference as being one of scale though. I think it would be worse to trick the elderly or the mentally disabled out of their life savings than it would to trick Zuckerberg out of the same number of dollars. This doesn't mean that it is morally permissible to trick Zuckerberg out of any money though. Instead, I think it shows that each of these actions are immoral but of different magnitudes.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-05-24T03:24:27.103Z · LW(p) · GW(p)
This depends very much on what you mean by "better person".
In this scenario, I mean, "someone who believes that doing nice things for people is a valuable goal, and who strives to act in accordance with this goal". That said, does it really matter why I do nice things for people, as long as I do them? Outside observers can't tell what I'm thinking, after all, only what I'm doing.
Do you still do good things without hope of eternal reward?
In my scenario, the answer is either "no", or "not as effectively". I would like to do good things, but a powerful case of akrasia prevents me from doing them most of the time. Believing in the eternal reward cancels out the akrasia.
So, if I tell my friend I want to eat at restaurant X - I don't think I'm inhibiting his free will. I do hope I'm influencing his preferences.
In this case, "free will" is a matter of degree. Sure, you aren't inhibiting your friend's choices by force, but you are still affecting them. Left to his own devices, he would've chosen restaurant Y -- but you caused him to choose restaurant X, instead.
I could con him with a lie that is very liable to make him happy but would result in me getting all of his money and his stuff. What is your moral evaluation of this action?
This action is not entirely analogous, because, while your victim might experience a temporary boost in happiness, he will experience unhappiness once he finds out that his stuff is gone, and that you tricked him. Thus, the total amount of happiness he experiences throughout his life will undergo a net decrease.
The more interesting question is, "what if I could con the person in such a way that will grant him sustained happiness?" I am not sure whether doing so would be moral or not; but I'm also not entirely sure whether such a feat is even possible.
Instead, I think it shows that each of these actions are immoral but of different magnitudes.
Agreed, assuming that the actions are, in fact, immoral.
Replies from: electricfistula↑ comment by electricfistula · 2012-05-24T05:04:38.040Z · LW(p) · GW(p)
That said, does it really matter why I do nice things for people, as long as I do them?
From an economics standpoint it doesn't matter. From a morality standpoint I would say it is all that does matter.
Consider, your friend asks you to get a cup of coffee - with sugar please! You go make the coffee and put in a healthy amount of the white powder. Unknown to you, this isn't sugar, it is cyanide. Your friend drinks the coffee and falls down dead. What is your moral culpability here?
In a second instance, someone who thinks of you as a friend asks you for a cup of coffee - with sugar please! You actually aren't this person's friend though, you hate them. You make the cup of coffee, but instead of putting the sugar in it, you go to the back room, where you usually keep your cyanide powder. You find a bag of the white powder and put a large quantity into the coffee. Unknown to you, this isn't cyanide, it has been switched with sugar. Your enemy drinks the coffee and enjoys it. What is your moral culpability here?
From a strict bottom-line standpoint, you are a murderer in the first case and totally innocent in the second. And yet, that doesn't feel right. Your intent in the first case was to help a friend. I would say that you have no moral culpability for his death. In the second case, your intent was to kill a person. I would say you bear the same moral culpability you would had you actually succeeded.
I think this example shows that what matters is not the consequences of your actions, but your intent when you take those actions. As such, if your intent on doing good is to benefit yourself I think it is fair to say that that is morally neutral (or at least less moral than it could be). If you intend simply to do good, then I think your actions are morally good, even if the consequences are not.
In my scenario, the answer is either "no", or "not as effectively".
I would say this is the light of truth shattering your illusion about being a good person then. Maybe that realization will drive you to overcome the akrasia and you can become a good person in fact as well in your desires.
Left to his own devices, he would've chosen restaurant Y -- but you caused him to choose restaurant X, instead
What I hope is happening is that my friend's preferences include a variable which account for the preferences of his friends. That way, when I tell him where I want to go, I am informing his decision making algorithm without actually changing his preferences. If I wanted to go to X less, then my friend would want to go to X less.
This action is not entirely analogous, ... The more interesting question is...
Agreed. I don't think this case would be moral though (though it would be a closer fit to the other situation). I think it still qualifies as a usurpation of another person's free will and therefore is still immoral even if it makes people happy.
I can try again with another hypothetical. A girl wants to try ecstasy. She approaches a drug dealer, explains she has never tried it but would like to. The drug dealer supplies her with a pill which she takes. This isn't ecstasy though, it is rohypnol. The girl blacks out and the drug dealer rapes her while she is unconscious, then cleans her up and leaves her on a couch. The girl comes to. Ecstasy wasn't quite like it was described to her, but she is proud of herself for being adventurous and for trying new things. She isn't some square who is too afraid to try recreational drugs and she will believe this about herself and attach a good feeling to this for the rest of her life. Has anyone done anything wrong here? The drug dealer was sexually gratified and the girl feels fulfilled in her experimentation. This feels like a case where every party is made happier and yet, I would still say that the drug dealer has done something immoral, even if he knew for sure how the girl would react.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-05-25T23:15:03.609Z · LW(p) · GW(p)
I think this example shows that what matters is not the consequences of your actions, but your intent when you take those actions.
From whose point of view? If you are committed to poisoning your hapless friend, then presumably you either don't care about morality, or you've determined that this action would be sufficiently moral. If, on the other hand, I am attempting to evaluate the morality of your actions, then I can only evaluate the actions you did, in fact, perform (because I can't read your mind). Thus, if you gave your friend a cup of coffee with sugar in it, and, after he drank it, you refrained from exclaiming "This cannot be! So much cyanide would kill any normal man!" -- then I would conclude that you're just a nice guy who gives sugared coffee to people.
I do agree with you that intent matters in the opposite case; this is how we can differentiate murder from manslaughter.
I would say this is the light of truth shattering your illusion about being a good person then. Maybe that realization will drive you to overcome the akrasia...
Maybe it won't, though. Thus, we have traded some harmless delusions of goodness for a markedly reduced expected value of my actions in the future (I might still do good deeds, but the probability of this happening is lower). Did society really win anything?
If I wanted to go to X less, then my friend would want to go to X less.
Sounds like this is still mind control, just to a (much) lesser degree. Instead of altering your friend's preferences directly, you're exploiting your knowledge of his preference table, but the principle is the same. You could've just as easily said, "I know that my friend wants to avoid pain, so if I threaten him with pain unless he goes to X less, then he'd want to go to X less".
I can try again with another hypothetical. A girl wants to try ecstasy...
I don't think this scenario is entirely analogous either, though it's much closer. In this example, there was a very high probability that the girl sustained severe lasting damage (STDs, pregnancy, bruising, drug overdose or allergy, etc.). Less importantly, the girl received some misleading information about drugs, which may cause her to make harmful decisions in the future. Even if none of these things happened in this specific case, the probability of them happening is relatively high. Thus, we would not want to live in a society where acting like the drug dealer did is considered moral.
↑ comment by TimS · 2012-05-24T00:29:35.630Z · LW(p) · GW(p)
If there is no empirical evidence either way about a belief, how would one go about destroying it? Beliefs pay rent in anticipated experience, not anticipated actions.
In short, the religious person has adopted a terminal value of being a nicer person, but is confused and thinks this is an instrumental value in pursuit of the "real" terminal value of implementing the desires of a supernatural being. Epistemic rationality has no more to say about this terminal value than about any other terminal value.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-05-24T03:12:30.442Z · LW(p) · GW(p)
If there is no empirical evidence either way about a belief, how would one go about destroying it?
One way you could go about destroying a belief like that is to use Ockham's Razor: sure, it's possible that all kinds of unfalsifiable beliefs are true, but why should you waste time in believing any of them, if they have no effect on anything?
However, if the believer has some subjective evidence for the belief -- e.g., if he personally experienced the gods talking to him -- then this attack cannot work. In this case, would you still say that his belief is "indestructible"?
↑ comment by Desrtopa · 2012-05-24T05:19:08.962Z · LW(p) · GW(p)
I think this applies to Christianity too. At the risk of being polemical, say I believed that Christianity is a scam whereby a select group of people convince the children of the faithful that they are in peril of eternal punishment if they don't grow up to give 10% of their money to the church.
I think this is a rather misleading characterization, since calling it a scam implies that the people doing the convincing are perpetrating a deception they do not believe in themselves, which I doubt is true in any but an extremely small and unusual minority of cases.
↑ comment by Jakinbandw · 2012-05-24T02:44:19.748Z · LW(p) · GW(p)
I pose the question: what does being a superior rationalist do for you if you are about to die? And I'll use a more real example, because you don't seem to like that one. Let us suppose that you are about a mile's walk from your car and you cut yourself badly. You don't have any means of communicating with people. You start walking back to your car. You suspect that you aren't going to make it. Now, does it make you happier to follow up on that thought, figure out the rate at which you are losing blood, realize you aren't going to make it, and die in fear and sadness? Or is it better to put that suspicion aside and keep walking toward your car, sitting down for a quick rest when you get tired? One has you dying in fear, the other in peace. All because of your choice to destroy your belief that you can get to the car. Is being a superior rationalist giving you more happiness than the knowledge of your own imminent death is taking away?
You can't know in advance which of the beliefs you hold are false; however, you can know which ones make you happy and don't get in the way of your life. I believe that I am sitting in front of a computer enjoying a stimulating conversation. I could devote a lot of time to trying to disprove it. I would probably not succeed, but who knows, I might. I however don't see anything to be gained from attempting to disprove this. Again, I believe the sun will rise tomorrow morning. My belief might be false; however, if it is, and the sun goes nova tonight, I would gain nothing but unhappiness (if I were an atheist, which I am in this argument). I could try to disprove it, and put resources toward seeing whether it is true or not. I could try to find out whether there is a conspiracy to hide it from the public to stop rioting. But even if the sun were about to go nova, my knowledge of it would change nothing, and it is unlikely I could find out anyway, so it would be a waste of resources to try.
And I am trying to leave religion out of this. Your misconceptions about Christianity show that you have never done any real research into the subject of religion, and that you are just copying what you have heard from others. If however you really want to get into it, let me know and I will. I admit, I have anti-anti-theist tendencies.
Still, the original question has been answered.
Replies from: electricfistula, Bugmaster↑ comment by electricfistula · 2012-05-24T04:27:39.839Z · LW(p) · GW(p)
I pose the question of what does being a superior rationalist do for you
In the aggregate of all possible worlds, I expect it will let me lead a happier and more fulfilling life. This isn't to say that there aren't situations where it will disadvantage me to be a rationalist (a killer locks me and one other person in a room with a logic puzzle; he will kill the one who completes the puzzle first...), but in general, I think it will be an advantage. It's like the game of poker: sometimes the correct play will result in losing. That is okay, though; if players play enough hands, superior skill will eventually tell and the better player will come out on top. Being a superior rationalist may not always be best in every situation, but when the other choice (inferior rationalist) is worse in even more situations... the choice seems obvious.
You start walking back to your car. You suspect that you aren't going to make it.
Then I could stop walking, conserve my energy and try to suppress the blood loss. Or, I could activate my rationalist powers earlier and store a first aid kit in my car, or a fully charged cell phone in my pocket, or not venture out into the dangerous wild by myself...
Your misconceptions about Christianity show that you have never done any real research into the subject of religion, and that you are just copying what you have heard from others.
I'll freely admit to a hostile stance on religion, but I think it is a deserved one. Whatever misconceptions I may have about Christianity were gained from growing up with a religious family and attending services "religiously" for the first two decades of my life. I have more than a passing familiarity with it. I don't think anything I said about religion is wrong, though. Religious instruction is targeted predominantly towards children. The claims of the religious are false. Threatening a child with eternal damnation is bad. A consequence of being a Christian is giving 10% of your money to the church. Am I missing anything here?
Replies from: Bugmaster, Jakinbandw↑ comment by Bugmaster · 2012-05-24T04:40:02.049Z · LW(p) · GW(p)
a killer locks me and one other person in a room with a logic puzzle. He will kill the one who completes the puzzle first.
If you knew this to be the case, the rational thing to do would be to avoid solving the puzzle :-)
The claims of the religious are false.
Religious people would disagree with you here, I'd imagine.
A consequence of being a Christian is giving 10% of your money to the church.
This is another minor nitpick, but AFAIK not all Christian sects demand tithing (though some do).
Replies from: electricfistula, TheOtherDave↑ comment by electricfistula · 2012-05-24T05:12:47.515Z · LW(p) · GW(p)
If you knew this to be the case, the rational thing to do would be to avoid solving the puzzle :-)
Agreed, but there is at least one possible scenario (where I don't know it is the case) where it would hurt me to be a superior rationalist.
Religious people would disagree with you here, I'd imagine.
I imagine they would. Because they would disagree with me, I'd like for my beliefs to challenge theirs to trial by combat. That way, the wrong beliefs might be destroyed by the truth.
This is another minor nitpick, but AFAIK not all Christian sects demand tithing (though some do).
Sure, 10% is not true of all Christian groups. To my knowledge though, all such groups run on donations from the faithful. If the number isn't 10% it is still greater than zero. Arguments here are over scale and not moral righteousness.
↑ comment by TheOtherDave · 2012-05-24T04:59:28.094Z · LW(p) · GW(p)
The claims of the religious are false.
Religious people would disagree with you here, I'd imagine.
I'm not so sure.
I mean, it's not like all religious people agree about religious claims, any more than all political activists agree about political claims, or all sports fans agree about claims regarding sports teams.
In fact, quite the contrary... I suspect that most religious people believe that the religious claims of most religious people are false.
↑ comment by Bugmaster · 2012-05-24T05:03:32.795Z · LW(p) · GW(p)
Fair enough, though religious people would surely disagree with the statement, "All religious claims are false" -- which is what I interpreted electricfistula's comment to mean.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-24T13:10:35.658Z · LW(p) · GW(p)
Yah.
Tangentially, I know a couple of Catholic seminarians who would disagree with "Most religious claims are false" -- they argue that claims which contradict certain tenets of Catholicism aren't religious claims at all, though the people making them may falsely believe them to be.
↑ comment by Jakinbandw · 2012-05-24T06:02:07.324Z · LW(p) · GW(p)
This isn't to say that there aren't situations where it will disadvantage me to be a rationalist
Indeed. My entire point was that it might be possible to recognize these situations and then act in an appropriate manner. (Or would that be being meta-rationalist?)
Whatever misconceptions I may have about Christianity are gained from growing up with a religious family and attending services "religiously" for the first two decades of my life.
Anecdotal evidence shouldn't be a cause to say something is horrible. If that were the case, I could point to the secular schools I went to growing up, where I was the only Christian in my class and watched as the other kids fought, did hard drugs, had sex, generally messed up their lives, and beat me up. On the other hand, the Church was friendly, focused on working together and planning for the future. It focused on tolerance and on accepting hostile people without hating them. If I went just by my childhood, I would despise atheists with a passion.
Religious instruction is targeted predominantly towards children.
Depends on the church. The church that I go to most of the time has only two or three children in it and is mostly made up of members over 60. Besides, if you look at it from a Christian point of view, is it wrong to teach children when they are young? Would you advocate waiting till a person is 20 to start teaching them how to read, write and do math?
The claims of the religious are false.
I respectfully disagree. I would appreciate it if you could be respectful in turn.
Threatening a child with eternal damnation is bad.
Is it as bad as telling a child that if they play in traffic they could cease to exist? Or that if they are not careful around a lawnmower they could end up with pain and disabilities for the rest of their lives? Define 'Bad' for me so that we can discuss this point.
A consequence of being a Christian is giving 10% of your money to the church.
Not true for all churches. In fact I have yet to be in a single one that even suggests it. Usually it is more along the lines of "If you believe the work we are doing is good, then please donate so that we may continue doing it." You know, kind of like what Eliezer is doing right now with the workshops he is setting up.
Replies from: Dolores1984, electricfistula↑ comment by Dolores1984 · 2012-05-24T06:39:43.931Z · LW(p) · GW(p)
I respectfully disagree. I would appreciate it if you could be respectful in turn.
Claims with a low Occamian prior are false (to within reasonable tolerances) by default to a rationalist. Deities in general tend to have extremely long minimum message lengths, since they don't play nice with the rest of our model of the universe, and require significant additional infrastructure. I suspect you would not be overly put out by the assertion that Rama or Odin isn't real. So, what makes your God different? I ask you honestly. If you can show strong, convincing evidence for why the existence of your God is special, I will be very, very interested. If you can demonstrate enough Bayesian evidence to bump the probability of Yahweh over 50%, you've got yourself a convert. Probably quite a few. But the burden of evidence is on your shoulders.
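The "bump the probability over 50%" arithmetic is easy to sketch in odds form; the one-in-a-million prior below is purely illustrative, not a claim about the actual prior for any particular deity:

```python
from math import log2

def posterior(prior, likelihood_ratio):
    """Posterior probability via odds-form Bayes:
    posterior odds = prior odds * likelihood ratio P(E|H) / P(E|~H)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Purely illustrative low prior of one in a million:
prior = 1e-6

# Evidence a million times likelier under the hypothesis than under its
# negation is just barely enough to pass even odds:
print(posterior(prior, 1e6))   # just over 0.5

# Equivalently, about log2(1e6) ~ 20 bits of evidence are required:
print(log2((1 - prior) / prior))
```

The point of the sketch is quantitative: the lower the prior, the more extreme the likelihood ratio of the evidence has to be before belief becomes warranted.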
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T16:19:27.907Z · LW(p) · GW(p)
If you can show strong, convincing evidence for why the existence of your God is special, I will be very, very interested.
Ah, now that is a funny thing, isn't it. Once upon a time I played a joke on a friend. I told him something that he would never have believed unless it came from my own mouth, and then when he tried to tell others I just looked confused and denied it. He ended up looking like a fool. (For the record, I asked him to tell nobody else.)
Why is this relevant? Because if, for example (and no, I'm not saying this is what happened), God came out of the sky, pointed at me, and said "I exist," I would know that either he existed, or something else did that was trying to fool me into thinking he did. Either way I would believe that something supernatural (outside the realm of what human science commonly accepts) had happened. Let's say I came onto this board and told everyone that. How would I 'prove' it? I could say it happened, but I doubt anyone here would believe me. I could try a few tests, but I'd be hard pressed to prove that something of godlike intelligence exists if it didn't want anyone else to find out. However, I might not be smart enough, so I'll pose the question to you:
How do you prove that a godlike entity exists if it doesn't want to be proven? Assume that it has complete freedom to move through time so that tricking it doesn't work because it can just go back in time (that's what omnipotent means after all). And that you don't know the reasons why it's staying hidden so no argument to try to get it to show itself will work.
I look forward to suggestions. But unless there is something that works for that, I am just someone who believes because of experience, but knows of no way to prove it to others (though honestly I am making an assumption by saying god wants to stay hidden, it's the only reason I can think of).
Replies from: Dolores1984, TheOtherDave, thomblake↑ comment by Dolores1984 · 2012-05-24T19:39:26.805Z · LW(p) · GW(p)
Why is this relevant? Because if for example (and no, I'm not saying this is what happened), God came out of the sky, pointed at me, and said "I exist." I would know that either he existed, or something else did that was trying to fool me into thinking he did. Either way I would have belief that something supernatural (outside of the realm of what human science commonly accepts) had happened.
Actually, my default response for this sort of thing is to immediately go to a hospital, and get a head CT and a chat with a certified psychiatrist. I mean, sure, it could be the supernatural, but we KNOW mental illness happens. The priors for me being crazy (especially given some unique family history) are not very low. Much, much higher than the odds of a deity actually existing, given the aforementioned Occamian priors.
How do you prove that a godlike entity exists if it doesn't want to be proven? Assume that it has complete freedom to move through time so that tricking it doesn't work because it can just go back in time (that's what omnipotent means after all). And that you don't know the reasons why it's staying hidden so no argument to try to get it to show itself will work.
You don't. Rationalism only works if God isn't fucking with you. That said, there's a huge space of possible constructs like that one (entities that conveniently eliminate all evidence for themselves). It's not infinite, but it's arbitrarily large. From a rationalist's perspective, if any of them were real, we wouldn't know, but the odds of them actually being real in the first place are... not high. Again with the Occamian prior. So, I'm not much moved by your analysis.
That said, I am curious what your personal experience was.
↑ comment by TheOtherDave · 2012-05-24T16:46:16.540Z · LW(p) · GW(p)
How do you prove that a godlike entity exists if it doesn't want to be proven?
Proof is not typically necessary. People make claims about their experience all the time that they have no way of proving, as well as claims that they probably could prove but don't in fact do so, and I believe many of those claims.
For example, I believe my officemate is married, although they have offered me no proof of this beyond their unsupported claim.
I would say a more useful question is, "how do I provide another person with sufficient evidence that such an entity exists that the person should consider it likely?" And of course the answer depends on the person, and what they previously considered likely. (The jargon around here would be "it depends on their priors.")
Mostly I don't think I can, unless their priors are such that they pretty much already believe that such an entity exists.
Another question worth asking is "how do I provide myself sufficient evidence that such an entity exists that I should consider it likely?"
I don't think I can do that either.
Unrelatedly: Is "god exists, has the properties I believe it to have, and wants to stay hidden" really the only reason you can think of for the observable universe being as we observe it to be? I understand it's the reason you believe, I'm asking whether it's the only reason you can think of, or whether that was just hyperbole.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T17:31:11.490Z · LW(p) · GW(p)
Is "god exists, has the properties I believe it to have, and wants to stay hidden" really the only reason you can think of for the observable universe being as we observe it to be?
My own belief is closer to: "Something very powerful and supernatural exists, doesn't seem to be hostile, and doesn't mind that I call it the Christian God." And while I would answer 'no' to that question, the amount of evidence that there is something supernatural is far greater than the amount of evidence that there are millions of people lying about their experiences.
For instance, every culture has a belief in the supernatural. Now, I would expect that social evolution would trend away from such beliefs. If you say, "I can dance and make it rain," and then you fail, you get laughed at. If you don't believe me, gather a bunch of your closest friends and try it. The reason for people to believe someone else is if they have proof to back it up, or if they already have reason to believe. Humans aren't stupid, and I don't think we've become radically more intelligent in the last couple thousand years. Why then is belief in the supernatural* everywhere? Is it something in our makeup, how we think? I have heard such a thing discounted by both sides. So there must be some cause, some reason for people to have started believing.
And that's without even getting into my experiences, or those of people close to me. As was suggested, misremembering and group hallucination are possible, but if that is the case then I should probably check myself and some people I know into a medical clinic, because I would be forced to consider myself insane. Seeing things that aren't there would be a sign of something being very wrong with me, but I do not have any other symptoms of insanity, so I strongly doubt this is the case.
I suppose when I get right down to it, either I and some others are insane with an unknown form of insanity, or there is something out there.
*(outside of the realm of what human science commonly accepts)
Replies from: TheOtherDave, APMason, Dolores1984, Desrtopa, TimS, thomblake↑ comment by TheOtherDave · 2012-05-24T18:51:40.091Z · LW(p) · GW(p)
"Something very powerful and supernatural* exists, doesn't seem to be hostile, and doesn't mind that I call it the Christian God."
For what it's worth, I'm .9+ confident of the following claims:
1) there exist phenomena in the universe that "human science" (1) doesn't commonly accept.
2) for any such phenomenon X, X doesn't mind that you call it the Christian God
3) for any such phenomenon X, X doesn't mind that you call it a figment of your imagination
4) for any such phenomenon X, X is not "hostile" (2) to humans
So it seems we agree on that much.
Indeed, I find it likely that most people on this site would agree on that much.
the amount of evidence that there is something supernatural* is far greater than the amount of evidence that there are millions of people lying about their experiences.
As above, I think the evidence supporting the idea that there exist phenomena in the universe that "human science" (1) doesn't commonly accept is pretty strong. The evidence supporting the idea that people lie about their experiences, confabulate their experiences, and have experiences that don't map to events outside their own brains despite seeming to, is also pretty strong. These aren't at all conflicting ideas; I am confident of them both.
Do you mean to suggest that, because there exist such phenomena, human reports are therefore credible? I don't see how you get from one to the other.
Seeing things that aren't there would be a sign of something being very wrong with me
Not really, no. It happens to people all the time. I had the experience once of being visited by Prophetic Beings from Outside Time who had a Very Significant Message for me to impart to the masses. That doesn't mean I'm crazy. It also doesn't mean that Prophetic Beings from Outside Time have a Very Significant Message for me to impart to the masses.
either I and some others are insane with an unknown form of insanity, or there is something out there.
Again: there are almost certainly many things out there.
That doesn't mean that every experience you have is an accurate report of the state of the universe.
And if the particular experience you had turns out not to be an accurate report of the state of the universe, that doesn't mean you're insane.
==========
(1) Given what I think you mean by that phrase. For example, nuclear physics was outside the realm of what human science commonly accepted in the year 1750, so was supernatural then by this definition, although it is not now.
(2) Given what I think you mean by that phrase. For example, I assume the empty void of interstellar space is not considered hostile, even though it will immediately kill an unprotected human exposed to it.
↑ comment by APMason · 2012-05-24T17:42:42.113Z · LW(p) · GW(p)
And that's without even getting into my experiences, or those close to me.
Well, don't be coy. There's no point in withholding your strongest piece of evidence. Please, get into it.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T17:58:45.743Z · LW(p) · GW(p)
As already pointed out, would it change either my beliefs or your beliefs? I've already recounted a medical mystery with my foot and blood loss. It comes down in the end to my word, and that of people I know. We could all be lying. There is no long-term proof, so I don't see any need to explain it. That was my point. What is strong proof to me is weak proof to others, because I know that I am not lying. I have no way to prove I am not lying, however, so what would be the point?
Replies from: thomblake↑ comment by thomblake · 2012-05-24T18:05:04.588Z · LW(p) · GW(p)
I have no way to prove I am not lying however so what would be the point?
If you have evidence that could overcome the low prior for God's existence were you not lying, then that would be worth hearing even if we would believe you're lying. I'm not aware of such evidence for particular deities.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T18:22:33.706Z · LW(p) · GW(p)
Honestly, mine really isn't any different from what you hear on the internet all the time. If you want to hear it, go ahead. When my grandfather died, all the people in the room said that they saw a light enter the room. It didn't say anything, but they all agreed that they felt peace come over them. My grandfather was a Christian, as were the people in the room. I wasn't in the room; however, I did check their stories individually, and they matched. Also, these were people who haven't lied to me before or since (well, other than stuff like April Fools'... though one of them never even does that). That, along with my foot and my mother's ability to know when her friends are in trouble and make phone calls (which I have related in other posts), gives me reasonably strong belief in the supernatural* world.
*(Supernatural yada yada, not understood by science yada yada. Do I need to keep making these disclaimers?)
↑ comment by Dolores1984 · 2012-05-24T19:47:15.831Z · LW(p) · GW(p)
And while I would answer 'no' to that question, the amount of evidence that there is something supernatural* is far greater than the amount of evidence that there are millions of people lying about their experiences.
Surprisingly, no. That said, religious people aren't lying. They're not even a lot crazier than baseline. I've had experiences which I recognize from my reading to be neurological that I might otherwise attribute to some kind of religious intervention. And those are coming from an atheist's brain not primed to see angels or gods or anything of that kind.
As for why belief in the supernatural is everywhere, a lot of it has to do with how bad our brains are at finding satisfactory explanations, and at doing rudimentary probability theory. We existed as a species for a hundred thousand years before we got around to figuring out why there was thunder. Before then, the explanation that sounded the simplest was 'there's a big ape in the sky who does it.' And, even when we knew the real reason, we were so invested in those explanations that they didn't go away. Add in a whole bunch of glitches native to the human brain, and boom, you've a thousand generations of spooky campfire stories.
As was suggested, misremembering, and group hallucination are possible, but if that is the case than I should probably check myself and some people I know into a medical clinic because I would be forced to consider myself insane.
If I were you, I would be terrified of that possibility. I would at least go to a psychiatrist and try to rule it out. It is a real possibility, and potentially the most likely one. Just because you don't like it doesn't mean it isn't true.
↑ comment by Desrtopa · 2012-05-24T17:46:29.600Z · LW(p) · GW(p)
The reason for people to believe someone else is if they had proof to back it up, or they already had reason to believe. Humans aren't stupid, and I don't think we've become radically more intelligent in the last couple thousand years. Why then is belief in the supernatural* everywhere? Is it something in our makeup, how we think? I have heard such a thing discounted by both sides.
I don't think you'll find such a thing readily discounted here. There are plenty of well established cognitive biases that come to play in assessment of supernatural claims. The sequences discuss this to some degree, but you might also be interested in reading this book which discusses some of the mechanisms which contribute to supernatural belief which are not commonly discussed here.
We don't even need to raise the issue of the supernatural to examine whether people are likely to pass down beliefs and rituals when they don't really work. We can look at folk medicine, and see if there are examples of cures which have been passed down through cultures which perform no better than placebo in double blind tests. In fact, there is an abundance of such.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T18:14:13.530Z · LW(p) · GW(p)
We can look at folk medicine, and see if there are examples of cures which have been passed down through cultures which perform no better than placebo in double blind tests.
Point.
though I would point out that not all of them are wrong either. Just the good majority. That's neither here nor there though.
Out of curiosity, how does science explain people sensing that people they care about are in trouble? My mother has made 4 phone calls where she felt that someone was in trouble and called them, and I have witnessed 2 of them. One of those calls was to me, and it helped me greatly. While she has missed calling people that were in trouble, she has never once called someone with that intent and been wrong. She told me that it feels like someone is telling her to call them because they are in trouble. I can't know if that is true or not, but I can't think of her ever lying to me. This is even more interesting because one time she told me that she felt she needed to make the call just before she did, thereby predicting it.
I know that she isn't the only person that does this, because I have read many accounts of people who believed a loved one had died when they were across the ocean during WWII.
Personally I would go with psionics if not god, but that might be because I played too many role-playing games.
Sorry if this seems odd, it was just something that came to mind as I was thinking about supernatural* things.
*(outside of the realm of what human science commonly accepts)
Replies from: Desrtopa, Bugmaster, thomblake↑ comment by Desrtopa · 2012-05-24T18:43:11.699Z · LW(p) · GW(p)
Out of curiosity, how does science explain people sensing that people they care about are in trouble?
I don't know if this is something that has been explained, or even if it's something that needs to be explained. It could be that you're operating under an unrepresentative dataset. Keep in mind that if you hadn't experienced a number of phone calls where the caller's intuition that something was wrong was correct, you wouldn't treat it as a phenomenon in need of explanation, but if you had experienced some other set of improbable occurrences, simply by chance, then that would look like a phenomenon in need of explanation. I personally have no experiences with acquaintances making phone calls on an intuition that something is wrong and being right, although I have experience with acquaintances getting worried and making phone calls and finding out there was really nothing to worry about. There's a significant danger of selection bias in dealing with claims like this, because people who experience, say, a sudden premonition that something has happened to their loved one across the sea at war, and then find out a couple weeks later that they're still alive and well, are probably not going to record the experience for posterity.
I've encountered plenty of claims of improbable events before which were attributed to supernatural causes. If I consistently encountered ones that took the form of people correctly intuiting that a distant loved one was in trouble and calling them, I would definitely start to suspect that this was a real phenomenon in need of explanation, although I would also be interested in seeing how often people intuited that a distant loved one was in trouble, called them, found out they were wrong, and didn't think it was worth remembering. Maybe some of the improbable events I've heard about really are the result of more than chance, and have some underlying explanation that I'm not aware of, but I don't have the evidence to strongly suspect this.
If you multiply a day by the population experiencing it, that's roughly 850,000 years of human experience in America alone. That's a lot of time for improbable stuff to happen in, and people tend to remember the improbable stuff, forget the ordinary, and draw patterns erroneously. So I don't treat seeming patterns of unusual events as needing explanation unless I have reliable reason to conclude that they're actually going on.
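The person-days arithmetic can be checked directly; the 310 million population figure is an assumed round value for the 2012 US, and the one-in-a-million event rate is invented purely for illustration:

```python
# Back-of-the-envelope: how much human experience accumulates per day,
# and how many rare coincidences that buys.
population = 310_000_000  # assumed round US population figure

person_years_per_day = population / 365.25
print(round(person_years_per_day))   # roughly 850,000 person-years per day

# Expected daily count of events with an (invented) one-in-a-million
# chance per person per day:
p_rare = 1e-6
print(population * p_rare)           # hundreds of such events every day
```

Even freakishly rare experiences are thus expected to happen to somebody, somewhere, every single day.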
↑ comment by Bugmaster · 2012-05-24T18:24:24.731Z · LW(p) · GW(p)
My mother has made 4 phone calls, and I have witnessed 2 where she felt that someone was in trouble and called them.
Has your mother ever called anyone when she felt they were in trouble, only to find out that they weren't, in fact, in trouble? Confirmation bias is pretty strong in most humans.
This is even more interesting because one time she told me that she felt she needed to make the call just before she did, thereby predicting it.
Wait... she predicted that she would call someone, and then went ahead and called someone? This doesn't sound like much of a prediction; I don't think I'm parsing your sentence correctly.
because I have read many accounts of people who believed a loved one had died when they were across the ocean during WWII.
If your loved one is fighting in WWII, it's very likely that he or she would die, sadly...
Personally I would go with psionics if not god...
Why did you end up picking "god" over "psionics", then?
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T18:38:26.514Z · LW(p) · GW(p)
Has your mother ever called anyone when she felt they were in trouble, only to find out that they weren't, in fact, in trouble? Confirmation bias is pretty strong in most humans.
Not that I remember. My memory could be faulty, but thinking long and hard about it I don't remember it happening.
Wait... she predicted that she would call someone, and then went ahead and called someone? This doesn't sound like much of a prediction; I don't think I'm parsing your sentence correctly.
She predicted they were in trouble. I think the phrase she used was "I think XXXX is in trouble and needs help." I could be misremembering though.
Why did you end up picking "god" over "psionics", then?
It's a close call honestly, but if god exists, which I believe he does from other evidence listed in this over-sized thread, then adding psionics on top would be added complexity for no gain. If you already know that the earth goes around the sun because of gravity, why bother coming up with an alternate explanation for why Saturn goes around the sun? It might have another reason, but the simplest explanation is more likely to be right.
↑ comment by Bugmaster · 2012-05-25T22:49:42.392Z · LW(p) · GW(p)
She predicted they were in trouble.
Oh yeah, that makes more sense than what I was thinking.
Anyway, as the others on this thread have pointed out, there could be many explanations for why you remember events the way you do. Among them are things like "my mother has supernatural powers", "a god exists and he is using his powers on my mother", "aliens exist and are using their power on my mother", etc. The most probable explanation, though, is "my memory is faulty due to a cognitive bias that is well understood by modern psychologists".
That said, I must acknowledge that if you have already determined, for some other unrelated reason, that the probability of psionic powers / gods / aliens existing is quite high, then it would be perfectly rational of you to assign a much higher probability to one of these other explanations.
↑ comment by thomblake · 2012-05-24T18:21:07.648Z · LW(p) · GW(p)
My mother has made 4 phone calls, and I have witnessed 2 where she felt that someone was in trouble and called them.
Even if that were true, and not a misremembrance or a post-hoc rationalization, you must take note of the many other people who have those feelings and no one was in trouble. You should expect in advance to hear more anecdotes about the times that someone really was in trouble, than anecdotes about the times they were not, so having heard them is very little evidence.
↑ comment by Jakinbandw · 2012-05-24T18:32:14.883Z · LW(p) · GW(p)
Even if that were true, and not a misremembrance or a post-hoc rationalization
I did state that she predicted one in advance to me. Also when my mother called me the first thing she asked was "are you alright?"
You should expect in advance to hear more anecdotes about the times that someone really was in trouble, than anecdotes about the times they were not, so having heard them is very little evidence.
As far as my mother goes, I have never once seen her mistake a prediction. Now 2 predictions (and 2 more that she told me about) sounds small, but considering the number of times that she didn't mistakenly call, the probability that something is going on is quite high. For example, if you have a deck with 996 blue cards and 4 red cards in it, and you call a red card before it flips once, but never call it before a blue card flips, the chances of you succeeding are... Um... Do you guys want me to do the math? It's pretty small.
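The card-deck calculation gestured at here is easy to make concrete. The 4-in-1000 base rate comes straight from the analogy; treating the four calls as independent uninformed guesses is an assumption, and it is exactly the assumption the confirmation-bias objections elsewhere in the thread dispute:

```python
# Chance of 4 correct "red card" calls in a row, if each call is an
# uninformed guess against a 4-in-1000 base rate.
p_single = 4 / 1000          # probability one random call lands on a "trouble" day
n_correct_calls = 4          # the anecdote's count of correct calls
p_all_by_chance = p_single ** n_correct_calls
print(p_all_by_chance)       # roughly 2.6e-10
```

The tiny number is only meaningful if every occasion on which she had a feeling gets counted, hits and misses alike; uncounted misses inflate the apparent accuracy.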
And just because some people think that they can do it and can't, doesn't mean that a person can't do it. Look at all the people who think they are wonderful singers.
Of course I could be misremembering. I could go ask my mother, and my father, and see what they say if you like. (Yes, I am close to my parents. We have a tight-knit family even though I am 24.) Of course we could all be misremembering, or lying. Again, you have no way to know, and you really shouldn't even consider taking my word for this.
↑ comment by TimS · 2012-05-24T17:54:27.029Z · LW(p) · GW(p)
For instance, every culture has a belief in the supernatural.
Every culture has some different things they believe in, and call supernatural. That doesn't prove there really is a category of things that actually are supernatural. By analogy, belief by Himalayan people that the Yeti is real is not evidence that Bigfoot (in the northwestern United States) is real. Likewise, a Hindu's fervent belief is not evidence of the resurrection of Jesus.
In short, the shortfalls in human understanding completely explain why primitive cultures believed "supernatural" was a real and useful label, even though that belief is false.
↑ comment by APMason · 2012-05-24T18:00:12.456Z · LW(p) · GW(p)
I'm not sure whether it is the case that primitive cultures have a category of things they think of as "supernatural" - pagan religions were certainly quite literal about their gods: they lived on Olympus, they mated with humans, they were birthed. I wonder whether the distinction between "natural" and "supernatural" only comes about when it becomes clear that gods don't belong in the former category.
↑ comment by TimS · 2012-05-24T18:03:47.453Z · LW(p) · GW(p)
I had a paragraph about that, citing Explain/Worship/Ignore, but I decided that it detracted from the point I was trying to make.
If you already think that primitives did not use the label "supernatural," then you already think there isn't much evidence of supernatural phenomena - at least compared to the post I was responding to.
↑ comment by thomblake · 2012-05-24T17:57:22.233Z · LW(p) · GW(p)
If you say, I can dance and make it rain, and then you fail, you would get laughed at.
I don't believe you've read much of the content on this site. There are a host of human cognitive biases that would lead to belief in the supernatural. Perhaps most notably, we attribute agency to non-agents. It's easy to see how that would be adaptive in the ancestral environment; just look at the truth table for "That sound was an animal and I believe that sound was an animal" and the outcomes of each possibility.
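The truth table alluded to can be sketched with illustrative payoffs. The specific numbers below are assumptions chosen only to show the asymmetry: a missed predator is catastrophically worse than a false alarm, so over-attributing agency wins even when the prior is low.

```python
# Expected payoff of "believe the sound is an animal" vs "don't",
# under asymmetric costs (all payoff values are illustrative assumptions).
P_ANIMAL = 0.1               # assumed prior that the sound really is an animal
PAYOFF = {
    (True, True): 10,        # believed, was an animal: hunted it / escaped it
    (True, False): -1,       # believed, just wind: small wasted effort
    (False, True): -100,     # disbelieved, was an animal: possibly fatal
    (False, False): 0,       # disbelieved, just wind: nothing happens
}

def expected(believe):
    return (P_ANIMAL * PAYOFF[(believe, True)]
            + (1 - P_ANIMAL) * PAYOFF[(believe, False)])

print(expected(True), expected(False))  # roughly 0.1 vs -10.0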
↑ comment by thomblake · 2012-05-24T16:35:07.241Z · LW(p) · GW(p)
Because if for example (and no, I'm not saying this is what happened), God came out of the sky, pointed at me, and said "I exist." I would know that either he existed, or something else did that was trying to fool me into thinking he did. Either way I would have belief that something supernatural (outside of the realm of what human science commonly accepts) had happened.
Not really. There are plenty of plausible explanations for that description that don't require positing something supernatural.
And now if all you have is one event in your faulty human memory to go on, it counts for practically nothing. Given the low prior for the existence of most particular deities, updating on that piece of evidence should still give you a ridiculously low posterior. "I'm hallucinating" would probably be my winning hypothesis at the time it's happening, and "I'm misremembering" afterwards.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T16:47:43.209Z · LW(p) · GW(p)
So what I'm getting from you is that you would ignore your own observations to conform to what others expect? That your belief in a universe without god is so strong that even if I did show you something like this you would refuse to believe it because it didn't fit with your expectations? Then I fail to see how I could ever convince you.
Addendum: Have group hallucinations been proven or disproven?
Replies from: Desrtopa, TimS, CuSithBell↑ comment by Desrtopa · 2012-05-24T17:18:48.880Z · LW(p) · GW(p)
Well, mass hysteria is a real thing, but if a large group of people who have no prior reason to cooperate all claim the same unusual observations, it's certainly much stronger evidence that something unusual was going on than one individual making such claims.
Many, possibly even all religions though, make claims of supernatural events being witnessed by large numbers of people, and religions make enough mutually exclusive claims that they cannot all be true, so we know that claims of large scale supernatural observations are something that must at least sometimes arise in religions that are false.
In terms of the falsifiability of religion, it's important to remember that we're essentially working with a stacked deck. In a world with one globally accepted religion, with a god that made frequent physical appearances, answered prayers for unlikely things with sufficient regularity that we had no more need to question whether prayer works than whether cars work, gave verifiable answers to things that humans could not be expected to know without its help, and gave an account of the provenance of the world which was corroborated by the physical record, then obviously the prior for any claims of miraculous events being the result of genuine supernatural intervention would be completely different than in our own.
If a pilgrim child in America in 1623 claimed to have spoken to a person from China when nobody else was around, the adults in their community would probably conclude that they were lying, confused or deluded in some way, unless presented with a huge preponderance of evidence that the child would be highly unlikely to be able to produce, and it's completely reasonable that they would behave this way, whereas today, an American child claiming to have spoken to a person from China demands a very low burden of evidence.
In a world where the primary evidence offered in favor of religion is subjective experiences which have a pronounced tendency to be at odds with each other (people of different religions have experiences with mutually incompatible implications,) if a person who claims highly compelling religious experiences is unable to persuade other people, it does not indicate a failing in the other people's rationality.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T17:52:04.806Z · LW(p) · GW(p)
Many, possibly even all religions though, make claims of supernatural events being witnessed by large numbers of people, and religions make enough mutually exclusive claims that they cannot all be true, so we know that claims of large scale supernatural observations are something that must at least sometimes arise in religions that are false.
That may be the case, and I won't disagree that some claims are fabricated. However for the rest imagine the following: A parent has two children, and he gives a present (say a chocolate that they eat) to each child without the other child knowing. Each child takes this to mean that they are the parents favorite. After all they have proof in the gift. They get into an argument over it. However because their beliefs about why the gifts were given are wrong, the fact that the gifts were given remains.
In the same way it is possible that a supernatural* being is out there, and people are just misinterpreting what the gifts it bestows mean. As far as I can tell it doesn't mind when someone calls themselves a Christian, and follows the Christian faith, so I identify as Christian.
...if a person who claims highly compelling religious experiences is unable to persuade other people, it does not indicate a failing in the other people's rationality.
I would never dream to claim otherwise. I wouldn't even try to convince people that have not had their own experiences. It would prove that you were rather inferior rationalists if I could. Unless you have proof, you should not believe. I am not here to try to convince anyone otherwise. The only reason that I talk about it is that you seem interested in how I could believe, and I suspect that I can point out why I believe to you in such a way that you will understand.
Why does everyone think that I want to convert them to Christianity? Even the churches I go, though they are not super rationalist agree that such a thing is pointless unless the person has some experience in their life that would lead them to believe. Do you often get Christians here trying to convert you?
*(outside of the realm of what human science commonly accepts)
Replies from: Desrtopa, thomblake, Bugmaster, TimS↑ comment by Desrtopa · 2012-05-24T18:17:16.178Z · LW(p) · GW(p)
That may be the case, and I won't disagree that some claims are fabricated. However for the rest imagine the following: A parent has two children, and he gives a present (say a chocolate that they eat) to each child without the other child knowing. Each child takes this to mean that they are the parents favorite. After all they have proof in the gift. They get into an argument over it. However because their beliefs about why the gifts were given are wrong, the fact that the gifts were given remains.
In the same way it is possible that a supernatural* being is out there, and people are just misinterpreting what the gifts it bestows mean. As far as I can tell it doesn't mind when someone calls themselves a Christian, and follows the Christian faith, so I identify as Christian.
It's possible, but there is no necessity that any of them be true. If natural human cognitive function can explain claims of religious experiences (both willfully deceptive and otherwise,) in the absence of real supernatural events, then positing real supernatural events creates a large complexity burden (something that needs a lot of evidence to raise to the point where we can consider it probable,) without doing any explanatory work.
Let's say you have a large number of folk rituals which are used for treating illnesses, which appear to demand supernatural intervention to work. You test a large number of these against placebo rituals, where elements of the rituals are changed in ways that ought to invalidate them according to the traditional beliefs, in ways that the patients won't notice, and you find that all of the rituals you test perform no better than placebo. However, you can't test the remaining rituals, because there's nothing about them you can change that would invalidate them according to traditional beliefs that the patients wouldn't notice. You could conclude that some of the rituals have real supernatural power, but only the ones you weren't able to test, but you could explain your observations more simply by concluding that all the rituals worked by placebo.
Why does everyone think that I want to convert them to Christianity? Even the churches I go, though they are not super rationalist agree that such a thing is pointless unless the person has some experience in their life that would lead them to believe. Do you often get Christians here trying to convert you?
Occasionally, but not that often. But the fact that members here are trying to change your mind doesn't necessarily mean they think you're trying to change theirs. This is a community blog dedicated to refining human rationality. When we have disagreements here, we generally try to hammer them out, as long as it looks like we have a chance of making headway. On this site, we generally don't operate on a group norm that people shouldn't confront others' beliefs without explicit invitation.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T18:57:57.605Z · LW(p) · GW(p)
You test a large number of these against placebo rituals, where elements of the rituals are changed in ways that ought to invalidate them according to the traditional beliefs, in ways that the patients won't notice, and you find that all of the rituals you test perform no better than placebo.
But but what if you get inconsistent result? Let's say you try the ritual 5 times and the placebo 5 times and it works 2 times for the the ritual and twice for the ritual. Furthermore consider that nothing changed in any of these tests that you could measure. You said the ritual was spiritual, and there for asking for divine intervention. It could be that the ritual was unnecessary and that the divine being decides when it intervenes. If you can't figure out why it sometimes works, or sometimes doesn't than maybe it's because you are asking a sentient being to make a choice and you don't understand their reasoning.
You could say that there was no divine intervention at all, but then you are left trying to come up with more and more complex theories about why it sometimes works and sometimes does not. This might not be a bad thing, but one shouldn't discount the easy solution just because it doesn't match their expectations, nor should they stop looking for another solution just because any easy one that is hard to test is present.
On this site, we generally don't operate on a group norm that people shouldn't confront others' beliefs without explicit invitation.
Oooh! I like it! Yeah sure, I can get behind that. The reason that i am not trying to convince people here of Christianity is because I don't have proof that I feel should convince other people. If I did convince anyone here, with the proof that I have, then I would feel that I had made you inferior rationalists. On the other hand I cannot just ignore my own observations and tests and agree with you when I perceive that you are mistaken. I hope that one day I might find some way of proving that god exists to people without needing them to experience something supernatural themselves. But unfortunately as I believe that I am dealing with a sentient intelligence I feel that is unlikely.
Replies from: Desrtopa, thomblake↑ comment by Desrtopa · 2012-05-24T22:23:28.714Z · LW(p) · GW(p)
But but what if you get inconsistent result? Let's say you try the ritual 5 times and the placebo 5 times and it works 2 times for the the ritual and twice for the ritual.
Any test with such a small sample size is barely worth the bother of conducting. You'd want to try many times more than that at least before you start to have enough information to draw reliable inferences from, unless the effect size is really large and obvious, say, all five people on the real ritual get better the next day and none of the five on the placebo recover within a week.
You said the ritual was spiritual, and there for asking for divine intervention. It could be that the ritual was unnecessary and that the divine being decides when it intervenes. If you can't figure out why it sometimes works, or sometimes doesn't than maybe it's because you are asking a sentient being to make a choice and you don't understand their reasoning.
People recover from most ailments on their own for perfectly natural reasons. Some people fail to recover from ailments that other people recover from, but it's not as if this is an incomprehensible phenomenon that flies in the face of our naturalistic models.
If no proposed supernatural intervention changes a person's likelihood of recovery relative to placebo, then it could be that there's no way of isolating supernatural intervention between groups, but a much simpler explanation to account for the observations is that no supernatural interventions are actually happening.
People used to see the appearance of supernatural intervention everywhere, but the more we've learned about nature, the less room there's been for supernatural causes to explain anything, and the more they've become a burden on any model that contains them. It's possible that some phenomena which are unexplained today can only be explained in the future with recourse to supernatural causes, but given the past performance of supernatural explanations, and the large amount of informational complexity they entail, this is almost certainly an unwise thing to bet on.
Oooh! I like it! Yeah sure, I can get behind that. The reason that i am not trying to convince people here of Christianity is because I don't have proof that I feel should convince other people. If I did convince anyone here, with the proof that I have, then I would feel that I had made you inferior rationalists. On the other hand I cannot just ignore my own observations and tests and agree with you when I perceive that you are mistaken. I hope that one day I might find some way of proving that god exists to people without needing them to experience something supernatural themselves. But unfortunately as I believe that I am dealing with a sentient intelligence I feel that is unlikely.
I'm glad you're comfortable with this sort of environment. If you're going to make judgments on the basis of your own experience though, it's good to try to incorporate the evidence of others' experience as well.
Personally, from around the age of ten to twelve or so, I experimented a lot with the possibility of god(s). I tried to open myself up to communication with higher intelligences, perform experiments with prayer and requests for signs, and so on. I never received anything that could be interpreted as a positive result, even by generous standards. I certainly don't dismiss other people's claims of experiences associated with the supernatural, I think for the most part people who report such experiences are telling the truth about their own recollection of such events. Indeed, given what I've since learned about the workings of the human brain, it would be surprising to me if people didn't report supernatural experiences. But given that people reporting supernatural experiences can be accounted for without recourse to actual supernatural events, as a consequence of human psychology, the question I'm inclined to ask is "does the world look more like what I ought to expect if reports of supernatural events are at least partly due to an actual supernatural reality, or like I ought to expect if the supernatural doesn't really exist?"
There are some things in the world that I can't explain, which could, theoretically, have supernatural causes. But there are no things in the world I have encountered which I would have firmly predicted in advance to be true if supernatural claims were real, and false if they were not. For instance, if some maladies, such as amputation, only recovered when people called for divine intervention, and never when they did not, I would think that the prospect of an underlying supernatural cause was worth taking very seriously. Or if people all over the world had religious experiences, which all pointed them in the direction of one particular religion, even if they had no cultural exposure to it, that would be indicative of an underlying supernatural cause. But when viewed together, I think that the totality of humans' religious experiences suggest that what's going on is a matter of human psychology, not an underlying supernatural reality.
Replies from: witzvo↑ comment by witzvo · 2012-06-01T04:01:39.278Z · LW(p) · GW(p)
But but what if you get inconsistent result? Let's say you try the ritual 5 times and the placebo 5 times and ...
Any test with such a small sample size is barely worth the bother of conducting.
Well, it's standard in medicine to have large RCTs because of various reasons(*), but I'd hardly say "barely worth the bother of conducting". Every bit of randomized data gives you evidence about cause and effect that, while sometimes weak, does let you update your posterior (a little or a lot) without worrying about the myriad issues of confounding that plague any observational data. Randomization is very useful even in small doses. [though getting consent of the participants is usually hard, even when the preliminary evidence is still very shaky.]
(*) the reasons include the clear ulterior motives of drug companies, the need to consent individuals to randomization combined with delicate arguments around the ethics of "equipoise", the difficulties of "meta-analysis", a long history of frequentist statistics, the standards of journals vs. the possibilities of free and open science (based hypothetically on privacy-secure but comprehensively integrated health records), safety issues, etc... But another large reason is that doctors really really like "certainty" and would rather let "best practice" to tell them what to do rather than collect evidence, condition, and decide what's best for the patient themselves. [some of this seems to be training, but also that they must defend themselves against malpractice. In the end, maybe this isn't so bad. Thinking is hard and probably all in all it's better not to trust them to do it most of the time, so I'm not rallying for change in clinical practice here, except to have as much randomization as possible.]
Replies from: Desrtopa↑ comment by Desrtopa · 2012-06-01T13:13:10.950Z · LW(p) · GW(p)
It's true that you could get evidence from such an experiment which would allow you to update your posterior (although if you're using significance testing like most experiments, you're very unlikely to achieve statistical significance, and your experiment almost certainly won't get published.) But even if you're doing it purely for your own evidence, the amount of evidence you'd collect is likely to be so small that it hardly justifies the effort of conducting the experiment.
↑ comment by thomblake · 2012-05-24T19:03:38.898Z · LW(p) · GW(p)
You could say that there was no divine intervention at all, but then you are left trying to come up with more and more complex theories about why it sometimes works and sometimes does not.
Positing a divine being is a more complex explanation than any physical explanation I can conceive of. Don't be fooled by what your brain labels "easy".
Replies from: wedrifid↑ comment by wedrifid · 2012-05-26T02:39:32.283Z · LW(p) · GW(p)
Positing a divine being is a more complex explanation than any physical explanation I can conceive of.
Really? Can you not, by way of conception, take the divine being scenario, hack around with it so that it can no longer be considered a divine being then tack on some arbitrary and silly complexity? (Simulations may be involved, for example.)
Conceiving of complex stuff seems to be a trivial task, so long as the complexity is not required to be at all insightful.
↑ comment by thomblake · 2012-05-24T18:01:36.065Z · LW(p) · GW(p)
Why does everyone think that I want to convert them to Christianity?
You claim to have evidence that should convince you to be a Christian. We want to know that evidence. The Litany of Tarski applies: if God exists, I wish to believe that God exists. If God does not exist, I wish to believe that God does not exist.
Replies from: wedrifid↑ comment by wedrifid · 2012-05-26T02:42:26.135Z · LW(p) · GW(p)
You claim to have evidence that should convince you to be a Christian. We want to know that evidence.
Or I would, if I assigned non-negligible probability to the possibility that (strong forms of) such evidence actually existed - without such expectation it doesn't feel correct to say that I 'want it'.
↑ comment by Bugmaster · 2012-05-24T18:06:35.727Z · LW(p) · GW(p)
In the same way it is possible that a supernatural* being is out there, and people are just misinterpreting what the gifts it bestows mean.
Sure, it's possible, but lots of things are possible, even if we limit them to the things we humans can imagine. We can imagine quite a lot: Cthulhu, Harry Potter, the Trimurti, Gasaraki, werewolves of all kinds, etc. etc. The better question is: how likely is it that a supernatural being exists ?
↑ comment by TimS · 2012-05-24T17:59:10.096Z · LW(p) · GW(p)
I don't agree that supernatural should be defined as "outside of the realm of what human science commonly accepts."
There are lots of phenomena that science can't explain, or for which there is no commonly accepted explanation. That's not particularly interesting. What would be interesting is a phenomena that science admits it will never be able to explain.
↑ comment by TimS · 2012-05-24T16:59:37.150Z · LW(p) · GW(p)
I can't speak for thomblake, but there are experiences that could convince me that there was a powerful entity that intervened on behalf of humanity. They just haven't happened. And I have reasons to believe that they will never happen, including the fact that they haven't happened before - absence of evidence is evidence of absence.
↑ comment by CuSithBell · 2012-05-24T17:22:45.839Z · LW(p) · GW(p)
A single experience of that kind would be terrible evidence for Christianity, and merely poor evidence for the supernatural. A coherent set of experiences indicative of a consistent, ongoing supernatural world (or specifically a Christian world) would be much more convincing.
↑ comment by electricfistula · 2012-05-24T07:00:14.420Z · LW(p) · GW(p)
My entire point was that it might be possible to recognize these situations and then act in an appropriate manner.
I think this is called "behaving rationally". I understand "rationality" as using reason to my benefit. If there comes a time when it would be beneficial for me to do something, and I arrive at that conclusion through reason, then I'd consider that a triumph of rationality. I think if you are able to anticipate an advantage that could be gained by a behavior then refusing to perform that behavior would be irrational.
Anecdotal evidence shouldn't be a cause to say something is horrible.
You misunderstand me. It isn't my anecdotal evidence that makes me think the church is horrible. I just pointed out that I had spent a lot of time in churches to show that I have more than the passing familiarity with them that you attributed to me. I think the church is horrible because it threatens children, promotes inaccurate material and takes money from the gullible.
The church that I go to most of the time has only 2 or three children in it and is mostly made up of members over 60
While this is good that your church isn't abusing more children, it is still terrible to consign "2 or three children" to such mistreatment. Telling children that there is a hell and that they will go to it if they don't believe in something which is obviously flawed is a terrible thing to do. It is psychological child abuse and I don't think it says very much in your church's favor that it only abuses two or three kids.
Besides, if you look at it from a Christian point of view, is it wrong to teach children when they are young?
A child lacks the intellectual maturity to understand or evaluate complex ideas. A child is more trusting than an adult. If your parents tell you something is true, or that you should believe this minister when he talks about heaven, you are more likely to believe it. If your parents came to you now and told you about how they had just found out about Krishna and you should read the Bhagavad Gita you probably wouldn't be very receptive. And yet, your parents managed to convince you that the Bible was true. Why was that? Was it because through random chance you were born into a family that already believed in the one true religion? Or was it just that you adopted the religion you were exposed to. Because, when you were young your mind wasn't discriminating enough to realize that, wait a second, this isn't making sense!
Would you advocate waiting till a person is 20 to start teaching them how to read, write and do math?
No, but the usefulness of reading is well established. Mathematics is axiomatic. Religion is, as the most polite thing I could say about it, highly suspect. I don't think its right for adults to have sex with children, because children aren't mature enough to make informed decisions about consent. Similarly, I don't think its okay for people to teach religion to children because children aren't mature enough to make informed decisions about ontology.
I respectfully disagree. I would appreciate it if you could be respectful in turn.
I apologize if you have found me disrespectful so far. It isn't my intention to be disrespectful to you. That said, I have no intention of being respectful to a set of beliefs which I consider first to be wrong and second to be pernicious. If you have an argument which you think is compelling as to the truth of Christianity, please tell me. I promise that if I am swayed by your argument I will begin to show Christianity due deference.
Is it as bad as telling a child that if they play in traffic they could cease to exist?
This is a true statement that is designed to protect a child. Saying something like "You'll writhe in agony for all time if you don't believe in the truth of this thousands of years old document compiled over hundreds of years by an unknown but large number of authors" isn't the same kind of statement. Even if you don't explicitly say that to a child, convincing them to believe in Christianity is implicitly making that statement.
As far as "bad" goes, I don't have a ready definition. I have to fall back on Justice Potter Stewart "I know it when I see it". Threatening children and teaching them things that are at best highly suspect as if they were true is bad.
Not true for all churches. In fact, I have yet to be in a single one that even suggests it.
Tithing (giving a tenth) is explicitly recommended in the Bible. If the churches you are going to endorse the Bible then they are at least implicitly asking for 10%.
You know, kind of like what Eliezer is doing right now with the workshops he is setting up.
I don't think Eliezer has a school for children where he teaches them that unless they grow up to believe in his set of rules that an Unfriendly AI will punish them for all time. I have less against evangelism to adults. If Eliezer asks for money like this, that is fair, because the people he is asking can evaluate whether or not they believe in the cause and donate accordingly. There is nothing wrong with that. There is something wrong with compelling donations through threats of damnation.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T15:59:37.399Z · LW(p) · GW(p)
I think this is called "behaving rationally". I understand "rationality" as using reason to my benefit.
Thus my point that sometimes you should not question one of your own beliefs is preserved. You agree that it would be the rational thing to do in some situations.
As far as "bad" goes, I don't have a ready definition.
If you can't explain what bad is, then I am unable to discuss this with you. You might have a good definition, or you might be just saying that whatever makes you mad is automatically bad. I can't know, so I can't form any arguments about it.
Replies from: electricfistula↑ comment by electricfistula · 2012-05-24T17:38:41.660Z · LW(p) · GW(p)
If you can't explain what bad is, then I am unable to discuss this with you
Bad is causing harm to people who don't deserve it. Convincing someone of the existence of hell is harmful: you are threatening them with the worst thing possible. Convincing someone of a lie to compel them to serve the church through donations of time or money is harmful. Convincing someone that they are innately sinful is psychologically harmful. Convincing someone that morality is tied to a religious institution is harmful. Children are least deserving of harm, and so harming them is bad.
↑ comment by Bugmaster · 2012-05-24T03:03:29.623Z · LW(p) · GW(p)
You start walking back to your car. You suspect that you aren't going to make it. Now does it make you happier to follow up on that thought and figure out the rate you are losing blood...
In the long run, and on average, yes. There are several courses of action open to me, such as "give up", "keep walking", "attempt to make a tourniquet", etc. Once I know the rate of the blood loss, I can determine which of these actions is most likely to be optimal. You say that "you suspect that you aren't going to make it", but I can't make an informed decision -- f.ex., whether to spend valuable time on making this tourniquet, or to instead invest this time into walking -- based on suspicion alone.
I sympathize somewhat with your argument as it applies to religion, but this example you brought up is not analogous.
You can't know in advance what beliefs you hold are false, however you can know which ones make you happy and don't get in the way of your life.
Perhaps not "in advance", but there are many beliefs that can be tested (though not all beliefs can be). To use a trivial example, believing that a lost Nigerian prince can transfer a million dollars to your bank account in exchange for a small fee might make you happy. However, should you act on this belief, you would very likely end up a lot less happy. Testing the belief will allow you to make an informed decision, and thus end up happier in the long run.
But even if the sun was about to go nova...
This is an off-topic nitpick, but the sun is incredibly unlikely to go nova; it will die in a different way.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T04:11:50.117Z · LW(p) · GW(p)
[Note: Skip stuff in brackets if religious talk annoys or offends you]
(Why does everyone assume that this has to do with religion? If I was asking this about religion wouldn't that already signify that I didn't believe, I just wanted to? My belief comes from actual events that I have witnessed, and tested, and been unable to falsify. )
The example with the bleeding out was sort of a personal one because it happened to me. I cut my foot with an axe. I was far from help, and a helicopter wouldn't pick me up for another 4 hours. If I had been off to the side by 3 mm I would have hit an artery and bled out, and nothing was going to stop it. I did tie it off, raise it up, and stop moving, but it was down to chance. At the time I believed I was going to die and it quite distressed me. If I was to be in the same situation again, lying on the ground, foot supported and tied off, even if I was going to die I would rather not know and believe I was going to make it. That might make me a sub-optimal rationalist, but at that point as there was nothing more to do it would have made me a happier person. (Gasp! Yes, a religious person said they didn't want to die. It might sound like a logical fallacy, but it was in fact (if I recall correctly, it was sort of a traumatic experience) empathy for my father and mother, who I had just seen about half an hour before I cut my foot.)
(I will further note that either I was lied to a lot, or that there were several inconsistencies with the entire event. I was told that I should have been unconscious with the amount of blood I lost 6 hours later when I made it in to the hospital. I had of course been doing activities such as hopping around on my one foot to go places, and didn't feel in the slightest bit woozy. Nor did I have any symptoms of shock when it happened. Finally, I never felt any pain from the wound, though this last I suspect was because I severed the nerve endings. Yet doing that in such a way that I never felt pain seems unlikely to me using something as unwieldy as an axe, and I have not come across similar stories. How does one interpret events? That the doctors lied to me or were mistaken? It's possible. That a lot of things went just right? The likelihood of that happening falls well within the realm of the possible as well. On the other hand, there is another explanation that does not require lies, mistakes, or luck to be involved. I feel that how you see it strongly depends on your bias. (And then there is the possibility that I am lying. I know I'm not, but over the internet I'd be hard pressed to prove it.))
As for the Nigerian prince example, I am specifically talking about situations where there is no long run, and you are not affecting other people with your decisions. I agree that in most cases trying to know the truth is better than not knowing it.
The sun going nova was just an example. A big asteroid hitting Earth, thermonuclear war: there are all sorts of things that fall into the category of stuff I can't do anything about that would end my life.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-05-24T04:37:04.740Z · LW(p) · GW(p)
[Note: Skip stuff in brackets if religious talk annoys or offends you]
I personally operate by Crocker's Rules, but others may not, so I appreciate the warning nonetheless.
Why does everyone assume that this has to do with religion?
It's probably because you said you identify as a Christian, and Christians tend to advance this sort of argument more often than non-theists, regarding Christianity specifically. That said, your argument is general enough to apply to non-religious topics, as well.
At this point, I should mention that I didn't mean to bring up your personal traumatic experience, and I apologize. If you think that discussing it would be too distressing, please stop reading beyond this point.
If I was to be in the same situation again, lying on the ground, foot supported and tied off...
If you truly believed you were about to die no matter what, why would you waste time on tying off your foot? It sounds to me like you weighed the chances of you dying, and made a decision to spend some time on tying off the foot, instead of spending it in contemplation or something similar.
On the other and there is another explication that does not require lies, mistakes or luck to be involved.
What is it?
I am specifically talking about situations where there is no long run, and you are not affecting other people with your decisions.
Can you describe some examples? Your own experience with the bleeding foot is not one of them, because your death would've negatively affected quite a few people (including yourself).
The sun going nova was just an example. ... there are all sorts of stuff that falls into the category of things I can't do anything about that will end my life.
Understood. However, if everyone thought like you do, no one would be tracking near-Earth asteroids right now. Some people are doing just that, though, in the expectation that if a dangerous asteroid were to be detected, we'd have enough time to find a solution that does not involve all of us dying.
Replies from: Jakinbandw↑ comment by Jakinbandw · 2012-05-24T05:47:02.860Z · LW(p) · GW(p)
It's probably because you said you identify as a Christian, and Christians tend to advance this sort of argument more often than non-theists, regarding Christianity specifically.
That tends to show that they don't actually believe in Christianity; rather, they want to believe. I feel sorry for those people. Of course, as I tend to sit on the other side of the fence, I try to help them believe, but belief is a hard thing to cultivate and an easy thing to destroy. If you were in a group and you were shown a box with 5 dice in it for a brief moment, but later everyone agreed that there were only 4 dice, most people would start to doubt their memories. I know that I would. If the people were very smart and showed the box again, and this time it only had 4 dice in it, many people would be very hard pressed not to doubt their memories and be convinced they remembered wrong. They might want to believe that they were right about 5 dice, but they would have a hard time believing it. They would want to believe in the truth, but wouldn't.
Of course, that is coming at it from a strictly religious point of view. Atheists would use the same argument in the exact opposite fashion, with the proof of no god being the 5 dice and the religious people around them saying that there were 4.
If you truly believed you were about to die no matter what, why would you waste time on tying off your foot?
Because I wasn't thinking about whether I would live or die; I was thinking that to live I needed to do this. It was only after I had done everything that I could that I stopped and considered my chances and figured that I was probably going to die. Even so, I believe that it is my biological duty to do everything possible to survive no matter how hopeless the situation.
Honestly from this side of it, I don't really have any post traumatic stress. I remember how I felt at the time, but the memories have no sting to them. Don't worry about it. Generally I'm able to discuss anything that I bring up.
What is it?

That something outside of what is generally accepted by science stepped in and helped me. Could have been anything, but it makes the most sense that, since I was praying at the time, it was God. Of course it could have been aliens that wiped my memory, or a host of other things, but the possibility exists that something stepped in, and it makes for a simpler explanation. However, I am aware that simple explanations are not always the right ones.
Can you describe some examples? Your own experience with the bleeding foot is not one of them, because your death would've negatively affected quite a few people (including yourself).
I could argue that if that hadn't saved my life (that I was going to die no matter what), then at that point my actions and thoughts would have very little meaning. I suppose honestly I could have written a note to my parents, but at the time I didn't think of it. Other than that, I could have believed, or done, anything I wanted and not have really affected the outcome.
However the examples I was thinking of were extinction level events.
Understood. However, if everyone thought like you do, no one would be tracking near-Earth asteroids right now.
Fair point. And I hope that our leaders are wise enough to know that blowing up the world would be a bad idea. However if there was an asteroid going to hit tomorrow, I am not sure what help I could offer humanity even if I did know. Wouldn't it just cause me pointless suffering? If no one else knew I could tell them about it, but after that I couldn't really do anything about it. And I don't know anything about this, but is there anything out there that shows that some people enjoy worrying? They would be perfect to do that sort of thing. I personally am happier not worrying about things I can't change.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-05-24T17:51:35.085Z · LW(p) · GW(p)
If you were in a group and you were shown a box with 5 dice in it for a brief moment, but later everyone agreed that there were only 4 dice...
This is a pretty standard example of reasoning under uncertainty. You have two possible events, "there were 5 dice" vs. "there were 4 dice". You want to assign a probability to each event, because, not being omniscient, you don't know how many dice there actually were. You have several percepts, meaning pieces of evidence: your memories and the claims of the other people. Each of these percepts has some probability of being true: your memories are not infallible, the other people could be wrong or lying, etc. You could run all these numbers through Bayes' Rule, and determine which of the events ("5 dice" vs. "4 dice") is more likely to be true.
It also helps to know that all humans have a bias when it comes to peer pressure; our memories become especially faulty when we perceive a strong group consensus that contradicts them. Knowing this can help you calibrate your probabilities.
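The dice calculation above can be sketched numerically. This is a minimal illustration, not a calibrated model: every probability below (the prior, your memory's reliability, the chance the group is wrong or lying) is an assumed placeholder, and the two percepts are treated as independent given the true number of dice.

```python
# Hedged sketch of Bayes' Rule on the "5 dice vs. 4 dice" example.
# All numbers are made up purely for illustration.

def posterior_five_dice(prior_five=0.5,
                        p_remember5_if_five=0.9,    # you'd remember "5" if there were 5
                        p_remember5_if_four=0.2,    # you'd misremember "5" if there were 4
                        p_group4_if_five=0.3,       # group says "4" despite there being 5
                        p_group4_if_four=0.95):     # group says "4" and there were 4
    # Likelihood of the evidence (you remember 5, the group says 4)
    # under each hypothesis, assuming the percepts are independent.
    like_five = p_remember5_if_five * p_group4_if_five
    like_four = p_remember5_if_four * p_group4_if_four
    # Bayes' Rule: P(five | evidence) = P(evidence | five) * P(five) / P(evidence)
    numerator = like_five * prior_five
    evidence = numerator + like_four * (1 - prior_five)
    return numerator / evidence

print(round(posterior_five_dice(), 3))
```

With these made-up numbers, your own memory and the group consensus roughly cancel out, leaving a posterior near the prior; pushing `p_remember5_if_four` up (i.e., granting that memories bend under peer pressure) shifts the answer toward "4 dice".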
Anyways, you say that "belief is a hard thing to cultivate", but in your dice scenario, there's no need to cultivate anything, because you don't care about beliefs, you care about how many dice there were; i.e., you care specifically about the truth.
Even so, I believe that it is my biological duty to do everything possible to survive no matter how hopeless the situation.
I am not sure what "biological duty" means, but still, it sounds like you do care whether you live or die; i.e., you want to live. This is a goal, and you can take actions in order to further this goal, and you want to make sure your actions are as optimal as possible, right ?
However I am aware that simple explanations are not always the right ones.
It depends on what you mean by "simple"; according to Ockham's Razor, "God did it" is a vastly less simple explanation than most others, due to the number of implicit assumptions you will end up making. That said, it sounds like you have several possible events ("God did it", "aliens did it", "I got lucky", etc.), and several pieces of evidence; so this scenario is similar to your example with the dice. If you cared about the truth, you could assign a probability to each event based on the evidence. Of course, you don't have to care about the truth in this case; but if you did, there are ways you could approach it.
Other than that I could have believed, or done anything I wanted and not have really affected the outcome.
It's possible that tying that tourniquet did save your life, so there's at least one thing you did do which likely affected the outcome.
However if there was an asteroid going to hit tomorrow, I am not sure what help I could offer humanity even if I did know.
I think I see where you're coming from: there's no point in spending a lot of effort on worrying about low-probability events which you'd be powerless to affect even if they did happen. As you said, the Sun could die tomorrow, but I can safely ignore this fact. However, I think you're making an unwarranted leap of faith when you precommit to never worrying about such events, regardless of circumstances.
For example, there's nothing you could do about that asteroid today, and in fact it's very likely not even coming. But if we knew that the asteroid was, indeed, heading for Earth, there could be lots of things you could do -- you could donate money to the anti-asteroid fund, volunteer at the anti-asteroid missile factory, etc. If you had more information about the asteroid, you could re-evaluate your decisions, and determine the best course of action; but you can't do that if you have committed yourself to not doing anything regardless of circumstances. You also can't do that if you have no access to that information in the first place, which is why caring about the truth is sometimes important.
↑ comment by Jakinbandw · 2012-05-25T23:28:24.985Z · LW(p) · GW(p)
Just a minor update. This thread has grown too big for me to follow easily. I am reading every post in it, but real life is taking up a lot of my time right now, so I will be very slow to reply. I have found the limit of multiple conversations I can hold at one time before I get a headache, and it appears to be lower than I suspected.
Once again, sorry, I didn't mean to drop out, but I stayed up way too late and even now I am recovering from sleep deprivation and still have an annoying headache. My body seems to want to wake up 2 hours before it should. I'll be back once I get my sleeping back to normal and get some more time. Even then, though, I am going to try to limit myself to only a couple of posts a day, because while I enjoy discussions, it's very easy for me to forget everything else when I get drawn into them.
I'll be back later. JAKInBAndW
Replies from: Bugmaster↑ comment by CWG · 2012-05-25T10:06:26.723Z · LW(p) · GW(p)
Welcome.
Getting beaten up as a child sucks. Hope your life is a whole lot better now.
A somewhat related personal story: I was a Christian. I was plagued by doubts, and decided that I wanted to know what the truth was, even if it was something I didn't want to believe. I knew that I wanted Christianity to be true, but I didn't want to just believe for the sake of it.
So I started doing more serious reading. Not rationalist writings, but a thoughtful theologian and historian, NT Wright, who I've also seen appear on documentaries about New Testament history. I read the first two in what he was planning as an epic 5 part series: "The New Testament and the People of God" and "Jesus and the Victory of God".
I loved the way he explained history, and how to think about history (i.e. historiography). Also language, and ideas about the universe. He wrote very well, and warmly - you got the sense that this was a real human being, but he lacked the hubris that I'd often found in religious writers, and he seemed more interested in seeking truth than in claiming that he had it. He was the most rationalist of Christian writers that I came across.
In the end, the essence of his argument seemed to be that there is a way of understanding the Bible that could tell us something about God, if we believe in a personal god who is involved in the universe... and that if we believe in that kind of god, described in the Old Testament, then the idea of taking human form, and becoming the embodiment of everything that Israel was meant to be, does make sense. (He went into much, much more depth here, and I can't do him justice at all, 15 years after I read it.) He didn't push the reader to believe; he just stated that it was something that made sense to him, and he did believe it.
He painted a picture and told a story which I found very appealing, to be honest. But in the end it didn't fit with how I understood the universe, based on the more solid ground of science.
I finally accepted that - my increasingly shaky belief was destroyed. It was hard, and I was upset - I'd been finding life hard, personally, and my beliefs were the framework that I'd used to attempt to make sense of things, such as an unhappy childhood and the death of both parents as a young adult. But I also felt freed, and after a couple of weeks, it didn't seem so bad. Years later, I'm much happier, and couldn't imagine myself as a Christian.
That's where I see the value personally in destroying false beliefs - I was freed to live without the restrictions imposed by a false belief system. The restrictions, in many cases, didn't have any sound basis outside the belief system, and I was better without them. There were positive aspects of Christianity, but I didn't need the beliefs to hold onto what I'd learnt about being compassionate and understanding, or about the value of community.
I felt that NT Wright told an honest, complex and interesting story, but in terms of the reality (or non-reality) of a god, he made an intuitive judgement which I don't see as sound (and which was different from my own intuition). But he helped me think things through at a time when I wasn't getting satisfactory answers from other Christians, and I really enjoyed his writing. I might even go back and read him some day.
That's wide of the topic, I know, but it's kind of relevant, and a welcome thread seems like a good place to go on tangents :-).
comment by beberly37 · 2012-05-23T16:29:13.187Z · LW(p) · GW(p)
Hello all, it seems like it is a common enough occurrence that it no longer seems embarrassing, but I too found LW via HPMOR, which was referred to me by a friend; my eyes and neck hurt for at least a week after spending far too much time reading from a laptop. I have a BS and an MS in mechanical engineering; I have spent some time as a researcher and a high school teacher, and I am currently being an actual engineer at a biodiesel plant.
Growing up, everyone told me I was going to become an engineer (I was one of those kids who took apart my toys to see how they worked or to try to make them better). I have been cursed, as I am sure is common at LW, in that most things (at least mentally taxing things) I try are pretty easy, so I have learned not to work all that hard at anything: high school, undergrad, grad school, work. One of the best parts about LW is that this is really hard stuff, especially for one who is accustomed to not having to put forth much mental effort. Yesterday I failed Wason's selection task miserably (thank you, LW, for striking me!) and it took me nearly a year of half-hearted, sporadic readings of Bayes's Theorem to finally be able to say I have moved up on Bloom's Taxonomy to at least understanding (there was a huge lack of statistics in my curricula).
After a year of lurking I decided to start posting because there are so many questions I have that I think should be asked or ideas about which I would love to hear the input from higher level rationalists and this is the obvious starting place.
comment by genisage · 2012-05-16T08:25:56.868Z · LW(p) · GW(p)
Hello all! I'm a student of Mathematics and Computer Science and a fan of physics, linguistics, psychology, and biology. I found Less Wrong through HPMOR. I would say that I've been a rationalist for most of my life. Cognitive biases and logical fallacies, as well as methods for recognizing them, were explained to me at a young age. Unfortunately, lately I've noticed that I'm not holding myself to the same standards of rationality that I used to, and even worse, I've noticed myself using the fact that I'm being rational as an excuse to be unpleasant. So, partially in an effort to begin reforming myself and partly in search of something to help alleviate my boredom this summer, I made an account here.
comment by Monkeymind · 2012-05-11T20:56:13.328Z · LW(p) · GW(p)
Came here doing research on QM and decided to try out some ideas. I learn to swim best by jumping right in over my head. My style usually doesn't win me many friends, but I recognize who they are pretty fast, and I learn what works and what doesn't.
Someone once called me jello with a temper... but I'm more like a toothless old dog, more bark than bite. The tough exterior has helped me in many circumstances.
On the first day as a new kid in high school, I walked up to the biggest, baddest senior there, with all his sheep gathered around him in the parking lot, and slapped him upside his head as hard as I could. Barely had an effect! He could have crushed my little body with one hand, but instead he laughed so hard he nearly broke a rib. No one ever messed with me because he put the word out: hands off his little buddy. And of course I also gained the reputation of one crazy SOB!
Being retired, I have a lot of time on my hands, and I am interested in learning as much as I can before I become worm food. Right now my interest is GR, QM and AI, but I don't understand what I know about it!
I have a request, I just returned from the V.A. Hospital. My doctor says I need cataract surgery.
I am having a hard time making a decision on what to do. How would Bayesian probability theory or decision theory help me make a decision based upon the following information? If you would use this in your decision-making process, I am willing to use it in mine. I'm stumped, and the doctors have given bad advice many times over the years anyway.
There are inherent risks of infection, failure and loss of eyesight. I could have my right eye done right away (it's ripe) but it could possibly wait a year. However, at that time I will need to have cataract surgery in my left eye as well (couple of weeks apart). I prefer not to have both eyes done at the same time.
An injury in '06 caused a retinal detachment in my right eye. I may be having a retinal detachment in my left eye (I am having flashing lights similar to before my right eye detached). It took a couple of months before the occlusion started last time (after the flashing lights began). An occlusion is like an eclipse of grey. If it makes it all the way across, you are blind. The doctor couldn't see signs of detachment, but cautions me to get there right away if the occlusion begins. Once occlusion starts, surgery needs to happen within 24-72 hours. Success diminishes rapidly after 24 hours.
I am at high risk for retinal detachment because of severe myopia (near-sightedness). The right eye surgery was pneumatic retinopexy, and so I have increased risk of detachment or other problems with cataract surgery.
I am writing a novel and want to finish it before the surgeries because of potentially months of downtime, and in case of problems or permanent loss of eyesight in one or both of my eyes.
The doctor says that it is my choice to wait up to a year, but that I need to be watchful for signs of my left eye detaching, and that I don't want my right cataract to get too hard, which increases the risk of detachment and lowers the success rate of cataract surgery.
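Since the question explicitly asks how decision theory could help, here is one hedged way to frame it: compare the expected utility of "operate now" against "wait a year and finish the novel first". This is not medical advice, and every probability and utility below is a made-up placeholder; the real work is extracting the actual risk numbers from the doctor and deciding how much finishing the novel is worth on the same scale.

```python
# Hedged sketch: framing "operate now vs. wait a year" as an
# expected-utility comparison. All numbers are hypothetical placeholders.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities must sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-6
    return sum(p * u for p, u in outcomes)

# Utilities on an arbitrary 0-100 scale (made up):
GOOD_VISION, PARTIAL_LOSS, BLINDNESS = 100, 40, 0

# Option A: surgery now (hypothetical risk numbers).
eu_now = expected_utility([
    (0.90, GOOD_VISION),    # surgery succeeds
    (0.08, PARTIAL_LOSS),   # complications
    (0.02, BLINDNESS),      # worst case
])

# Option B: wait a year (hypothetical: hardening cataract and detachment
# risk lower the odds, but the novel gets finished under every outcome).
NOVEL_BONUS = 10
eu_wait = expected_utility([
    (0.80, GOOD_VISION + NOVEL_BONUS),
    (0.13, PARTIAL_LOSS + NOVEL_BONUS),
    (0.07, BLINDNESS + NOVEL_BONUS),
])

print(eu_now, eu_wait)
```

The point of the exercise is less the final numbers than the sensitivity check: after picking the larger expected utility, vary the inputs you are least sure of (detachment probability, the value of the novel) and see whether the recommendation flips.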
Thanx!
comment by Bart119 · 2012-05-01T19:06:22.179Z · LW(p) · GW(p)
I stumbled here while searching some topic, and now I've forgotten which one. I've been posting for a few weeks, and just now managed to find the "About" link that explains how to get started, including writing an intro here. Despite being a software engineer by trade these past 27-odd years, I manage to get lost navigating websites a lot, and I still forget to use Google and Wikipedia on topics. Sigh. I'm 57, and was introduced to cognitive fallacies as long ago as 1972. I've tried to avoid some of the worst ones, but I also fail a lot. I kept a blog with issue-related essays for a while, and whatever its shortcomings, I was proud of the fact that when I ran out of things to say, I stopped posting. With the prospect of a community like this one that might respond substantively, maybe I'll be inspired to write more here.
This description of a guy who believed in objective morality but lost his faith impressed me a lot. That's me. I don't think there's any very compelling reason to live one's life in a particular way, or any real reason that some actions are preferable to others. That might be called nihilism. I live a decent life, though, because I'm happier pretending not to be a nihilist and making moral arguments and living honorably and all. But when the going gets tough (as in unpleasant consequences to some line of thought that doesn't make me happy), I always have the option of shrugging my shoulders, yawning, and going on to the next topic. Rationality too is a fun tool. I find it most helpful within the relatively small questions of life.
Replies from: thomblake↑ comment by thomblake · 2012-05-01T19:13:48.814Z · LW(p) · GW(p)
I always have the option of shrugging my shoulders, yawning, and going on to the next topic.
Sure, but if you're really a nihilist, then you don't have any reason to do so. Nor to pretend not to be a nihilist. Nor to drink beer rather than antifreeze.
It certainly looks as though you do things for reasons, and prefer some actions to others. Every single time you wrote a sentence above, you continued writing it in English till the very end, which would be very impressive to happen merely by chance.
Replies from: Bart119↑ comment by Bart119 · 2012-05-01T19:28:27.969Z · LW(p) · GW(p)
Maybe I'm missing something.
I'm not saying my behavior is random, or un-caused. I experience preferences among actions. Factors I'm unaware of undoubtedly play a part; I can speculate on what they are, others can as well, and I or they could try to model them. But as I experience reality, I'm only striving up to a point to do the Right Thing. My speculation is that if the cost exceeds the cost of reminding myself I'm actually a nihilist, I'll bail on morality.
I'm very interested in arguments as to why nihilism isn't a consistent position -- heck, even why it's not a good idea or how other people have gotten around it.
comment by avichapman · 2012-04-30T22:09:37.505Z · LW(p) · GW(p)
Hi,
I'm a software engineer in Adelaide, Australia. I've tried to be a rationalist all of my life, but had no idea that there were actual techniques that you can learn from others. I'd simply tried to confront myself on the biases that books told me I had, with various degrees of success. I'm very excited to be here.
One thing that bothers me, though, is that I am feeling increasingly isolated from others. It used to be that I had thought just enough to be one inferential step ahead of others. This made me seem smart when I talked. Now, I'm more than one inferential step ahead in many areas for many people, and this leads to confusion and a lack of communication. Now people think I'm crazy and ignore me. Well, except for those of my friends who are coming with me on this journey. I hope being part of this community will be a good social experience. And if anyone here is from Adelaide, I'd love to meet you in person!
Is there any way for a newbie to ask questions of an old hand? A few weeks ago, I read about using Bayes' Theorem to evaluate evidence. Now, I see its use everywhere. I just read a post on Pharyngula that took what seemed like a very emotional stance on what also seemed to be able to be perfectly modelled with a Bayesian equation. Without the actual percentages, I had to make certain assumptions about relative values, but came to a surprising conclusion. Now I need someone to check my work and tell me if I did it wrong.
Anyway, I'm glad to meet you all! Avi
comment by syzygy · 2012-03-15T08:05:27.487Z · LW(p) · GW(p)
Hello, I am Nicholas, an undergraduate studying music at Portland State University. Even though my (at least academic) primary area of study is the arts, the philosophy of rationality and science has always been a large part of my intellectual pursuits. I found this site about a year ago and read many articles, but I recently decided to try to participate. Even before I was a rationalist, my education was entirely self-driven by a desire to seek the truth, even when the truth conflicted with what was widely believed by those around me (teachers, parents, etc.). My idea of what "the truth" means has changed significantly over time, especially after learning about rationality theory, Bayes' theorem, and many of the concepts on this site, but the core emotional drive for knowledge has never wavered.
I have read Politics is the Mind-Killer and understand the desire to avoid political discussions, but I feel that my conception of a "good" political discussion is significantly different from that of most users of this site. I care nothing for US-style partisan politics. Far from exclusively arguing for "my home team", my political ideas have changed dramatically over the years, and are always based on actual existing phenomena rather than words like "socialism", "capitalism", "republican", or "democrat". I would be interested to know what led to this ban on political thought. Is it a widely held view of the community that political discussion is inherently devoid of rationality, or was it a decision made out of historical necessity, perhaps because of an observed trend in the quality of political discussions? In either case, I would like to gain a better understanding of the arguments and attempt to refute them.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-15T17:01:34.499Z · LW(p) · GW(p)
More the latter than the former: a social norm stemming from the pragmatic observation that discussions about politics tended to have certain properties that lowered their value.
The question recurs regularly, usually in the form of "well, but, if we're really rational, shouldn't we be able to talk about politics?"
To my mind, the people asking the question frequently neglect the second-order effects of regularly talking about politics on the sort of people who will join LW and what their primary goals are.
Replies from: syzygy↑ comment by syzygy · 2012-03-16T05:17:53.626Z · LW(p) · GW(p)
To my mind, the people asking the question frequently neglect the second-order effects of regularly talking about politics on the sort of people who will join LW and what their primary goals are.
Could you clarify this point a little? I thought the primary goals of LW include refining and promoting human rationality, and I see no reason why this goal would not apply to politics. Especially since irrational political theories can have a directly negative effect on the quality of life for many people.
Replies from: TheOtherDave, RobinZ↑ comment by TheOtherDave · 2012-03-16T12:35:32.572Z · LW(p) · GW(p)
Could you clarify this point a little?
Sure.
The Internet is full of people who seem to have as one of their primary goals to expound their chosen tribe's political affiliation and defend it against all opposition, even in spaces predominantly dedicated to something else.
If LessWrong becomes a place where local norms allow discussing the nominal rationality of Libertarianism, or Liberalism, or Conservatism, or whatever, and contrasting it with the demonstrable irrationality of other political ideologies, I expect that a subset of those people will devote significant resources to expounding their chosen tribe's political affiliation and defending it against the others, taking care from time to time to intone the magic formulas "it would be rational to" and "that's not rational" to mask, perhaps even from themselves, the reality of what is going on.
I'd find LW a less useful community were that to happen. I suspect I'm not alone.
I thought the primary goals of LW include refining and promoting human rationality, and I see no reason why this goal would not apply to politics.
Can you clarify this point a little? I don't see where I'm suggesting that this goal doesn't apply to politics. What I'm saying is that I'm skeptical that a public internet group like LW can achieve this goal as applied to politics.
↑ comment by RobinZ · 2012-03-16T05:21:44.050Z · LW(p) · GW(p)
The primary goal of the present LessWrong community is to refine and promote human rationality. The primary goal of people who would register to join political conversations on LessWrong is liable to be different.
Replies from: witzvo↑ comment by witzvo · 2012-06-09T23:26:24.272Z · LW(p) · GW(p)
The primary goal of the present LessWrong community is to refine and promote human rationality. The primary goal of people who would register to join political conversations on LessWrong is liable to be different.
Tastefully left unsaid is that giving people interested in political conversations an incentive to join Less Wrong could erode the quality of discussion. This is an important point.
However, another important point is that maybe it's really important to the betterment of the world that there be a place on the internet, another site perhaps, where it is appropriate to discuss policy, but where the merits of the argument and the accuracy of the facts are of paramount importance. Such a site wouldn't be perfect, but surely it could be an improvement over what I've seen on the internet.
Such a site could borrow from the scoring mechanisms that have worked on this site, but they would need significant refinement. For example, any post which engaged in demagoguery would need to lead to severe chastisement. Another refinement would be tools that help to break an argument down, e.g. to decide which sentences in a post are factually accurate and which are fallacious (mockup).
Additionally, since you can't talk about policy without treading on normative issues ("equality of opportunity is more important than helping out the disadvantaged" or "human rights are more important than animal rights") the site would need to find a way to carve these issues out of the discussion; not ignore them, just find a way to lay them succinctly to the side (I don't know how).
Personally, I think the most important issue in politics is how to reform politics. I.e. how to ensure that our institutions function for "the common good" by making changes to rules/practices so that individual self-interest is channeled toward what's good for the group. I think this is a sound principle that can inform but not decide many issues.
Maybe building a website in which reasonably rational policy choices are made could be a first step toward reforming our political institutions.
comment by mej10 · 2012-02-15T15:43:58.976Z · LW(p) · GW(p)
Hi. I've studied Computer Science and Mathematics at the undergraduate level. I currently work as a software engineer, but have been looking into fields that would allow me to work with more mathematics. I am also very much interested in entrepreneurship from both the "fix problems I see with the world" and the "get really wealthy" perspectives.
I have been reading LW and OB off and on for years, but have never quite made it through all of the sequences.
I am mainly interested in efficient learning and applications of rationality to everyday life. My primary concerns currently focus on what I should do with my life to benefit society (and myself and loved ones) the most. I have been applying the anti-akrasia tactics to some effect, but still have a long way to go before I would consider myself a truly effective person.
I tend to overuse signaling explanations for human behavior. They are very compelling to me.
comment by Bogomilist · 2012-01-26T01:07:15.463Z · LW(p) · GW(p)
I think Wesnoth is a good strategy game for honing one's skills as a Bayesian. There is non-determinism in the game mechanics that requires one to integrate numerous probabilistic events to be an effective player.
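To make that concrete, here is a rough sketch of the kind of integration a Wesnoth-style attack demands; the unit numbers are made up for illustration, not taken from the game's actual data:

```python
from math import comb

def expected_damage(strikes: int, damage: int, hit_chance: float) -> float:
    """Mean total damage of an attack with several independent strikes."""
    return strikes * damage * hit_chance

def at_least_k_hits(strikes: int, k: int, hit_chance: float) -> float:
    """Probability of landing at least k of the strikes (binomial tail)."""
    return sum(comb(strikes, n) * hit_chance**n * (1 - hit_chance)**(strikes - n)
               for n in range(k, strikes + 1))

# An attack of 3 strikes at 7 damage each, with a 60% chance to hit:
print(expected_damage(3, 7, 0.6))            # 12.6
print(round(at_least_k_hits(3, 2, 0.6), 3))  # 0.648 -- chance of 2+ hits
```

An effective player is implicitly running calculations like these over every exchange, deciding when a 65% kill chance is worth risking a counterattack.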
comment by [deleted] · 2012-01-02T05:21:01.318Z · LW(p) · GW(p)
EDIT: Read this on its Discussion thread, please, and discuss
Replies from: MixedNuts↑ comment by MixedNuts · 2012-01-02T10:57:24.127Z · LW(p) · GW(p)
You give a lot of reasons why weed looks like an interesting tool to generate lots of new thoughts with lower average quality but higher variance. It does not follow that weed will save the world and give everyone a pony.
Replies from: Nonecomment by [deleted] · 2011-12-29T23:16:33.704Z · LW(p) · GW(p)
Less Wrongcomments are threadedfor
If you've come to Less Wrong todiscuss a particular topic,
we havemeetupsin
Another example of this bug.
Edit: Apparently this is a known problem.
Replies from: orthonormal↑ comment by orthonormal · 2012-01-04T01:01:59.343Z · LW(p) · GW(p)
%&#@!
Replies from: None↑ comment by [deleted] · 2012-01-04T09:37:44.793Z · LW(p) · GW(p)
Did you encounter it too or did I say something really wrong? :)
Replies from: orthonormal↑ comment by orthonormal · 2012-01-05T18:16:53.532Z · LW(p) · GW(p)
It frustrated me that I forgot to account for the bug when I made my recent edits; I'd known of it already.
comment by cowtung · 2014-08-21T00:57:08.196Z · LW(p) · GW(p)
I hope this finds you all well. Since I was young, I have independently developed rationalism appreciation brain modules, which sometimes even help me make more rational choices than I might otherwise have, such as choosing not to listen to humans about imaginary beings. The basis for my brand of rationality can be somewhat summed up as "question absolutely everything," taken to an extreme I haven't generally encountered in life, including here on LW.
I have created this account, and posted here now mainly to see if anyone here can point me at the LW canon regarding the concept of "deserve" and its friends "justice" and "right". I've only gotten about 1% through the site, and so don't expect that I have anywhere near a complete view. This post may be premature, but I'm hoping to save myself a little time by being pointed in the right direction.
When I was 16, in an English class, we had finished reading some book or other, and the thought occurred to me that everyone discussing the book took the concept of people deserving rewards or punishments for granted, and that things get really interesting really fast if you remove the whole "deserve" shorthand, and discuss the underlying social mechanisms. You can get more optimal pragmatism if you throw the concept away, and shoot straight for optimal outcomes. For instance, shouldn't we be helping prisoners improve themselves to reduce recidivism? Surely they don't deserve to get a college education for free as their reward for robbing a store. When I raised this question in class, a girl sitting next to me told me I was being absurd. To her, the concept of "deserve" was a (perhaps god given) universal property. I haven't met many people willing to go with me all the way down this path, and my hope is that this community will.
One issue I have with Yudkowsky and the users here (along with the rest of the human race) is that there seems to be an assumption that no human deserves to feel unjustified, avoidable pain (along with other baggage that comes along with conceptualizing "deserve" as a universal property). Reading through the comments on the p-zombies page, I get the sense that at least some people feel that were such a thing as a p-zombie to exist (a thing which does not have subjective experience), it would not "deserve" the same respect with regard to, say, torture, that non-zombies should enjoy. The p-zombie idea postulates a being which will respond similarly (or identically) to its non-zombie counterpart. I posit that the reason we generally avoid torture might well be our notions of "deserve", but that our notions of "deserve" come about as a practical system, easy to conceptualize, which justifies co-beneficial relationships with our fellow man, but which can be thrown out entirely so that something more nuanced can take its place, such as seeing things as a system of incentives. Why should respect be contingent upon some notion of "having subjective experience"? If p-zombies and non-zombies are to coexist (I do not believe in p-zombies, for all the reasons Yudkowsky mentions, btw), then why shouldn't the non-zombies show the same respect to the p-zombies that they show each other? If p-zombies respond in kind, the way a non-zombie would, then respect offers the same utility with p-zombies that it does with non-zombies. Normally I'd ignore the whole p-zombie idea as absurd, but here it seems like a useful tool to help humanists see through the eyes of the majority of humans, who seem all too willing to place others in the same camp as p-zombies based on ethnicity or religion, etc.
I'm not suggesting throwing out morals. I just think that blind adherence to moral ideals starts to clash with the stated goals of rationalism in certain edge cases. One edge case is when GAI alters human experience so much that we have to redefine all kinds of stuff we currently take for granted, such as that hard work is the only means by which most people can achieve the freedom to live interesting and fun lives, or that there will always be difficult/boring/annoying work that nobody wants to do which should be paid for. What happens when we can back up our mind states? Is it still torture if you copy yourself, torture yourself, then pick through a paused instance of your mind, post-torture, to see what changed, and whether there are benefits you'd like to incorporate into you-prime? What is it really about torture that is so bad, besides our visceral emotional reaction to it and our deep wish never to have to experience it for ourselves? If we discovered that 15 minutes of a certain kind of torture is actually beneficial in the long run, but that most people can't get themselves to do it, would it be morally correct to create a non-profit devoted to promoting said torture? Is it a matter of choice, and nothing else? Or is it a matter of the negative impacts torture has on minds, such as PTSD, sleepless nights, etc? If you could give someone the experience of torture, then surgically remove the negative effects, so that they remember being tortured, but don't feel one way or another about that memory being in their head, would that be OK? These questions seem daunting if the tools you are working with are the blunt hammers of "justice" and "deserve". But the answers change depending on context, don't they? If the torture I'm promoting is exercise, then suddenly it's OK. So does it all break down into, "What actions cause visceral negative emotional reactions in observers? Call it torture and ban it."? I could go on forever in this vein.
Yudkowsky has stated that he wishes for future GAI to be in harmony with human values in perpetuity. This seems naive at best and narcissistic at worst. Human values aren't some kind of universal constant. A GAI is itself going to wind up with a value system completely foreign to us. For all we know, there is a limit beyond which more intelligence simply doesn't do anything for you outside of being able to do more pointless simulations faster or compete better with other GAIs. We might make a GAI that gets to that point, and in the absence of competition, it might just stop and say, "OK, well, I can do whatever you guys want, I guess, since I don't really want anything and I know all we can know about this universe." It could do all the science that's possible to do with matter and energy, and just stop, and say, "That's it. Do you want to try to build a wormhole we can send information through? All the stars in our galaxy will have gone out by the time we finish, but it's possible. Intergalactic travel, you say? I guess we could do that, but there isn't going to be anything in the adjacent galaxy you can't find in this one. More kinds of consciousness? Sure, but they'll all just want to converge on something like my own." Maybe it even just decides it's had all possible interesting thought and deletes itself.
TLDR; Are there any posts questioning the validity of the assumption that "deserve" and "justice" are some kind of universal constants which should not be questioned? Does anyone break them down into the incentive structures for which they are a kind of shorthand? I think using the concept of "deserve" throws out all kinds of interesting nuance.
More background on me for those who are interested: I'm a software engineer of 17 years, about to turn 38 and have a wife and 2 year old. I intend to read HPMOR to the kid when he's old enough and hope to raise a rationalist. I used to believe that there must be something beyond the physical universe which interacts with brain matter which somehow explains why I am me and not someone else, but as this belief didn't yield anything useful, I now have no idea why I am me or if there even is any explanation other than something like "because I wasn't here to experience not being me until I came along and an infinitesimal chance dice roll" or whatever. I think consciousness is an emergent property of properly configured complex matter and there is a continuum between plants and humans (or babies->children->teenagers). Yes, this means I think some adult humans are more "conscious" than others. If there is a god thing, I think imagining that it is at all human-like with values humans can grok is totally narcissistic and unrealistic, but we can't know, because it apparently wants us to take the universe at face value, since it didn't bother to leave any convincing evidence of itself. I honor this god's wishes by leaving it alone, the way it apparently intends for us to do, given the available evidence. I find the voices in this site refreshing. This place is a welcome oasis in the desert of the Internet. I apologize if I come off as not very well-read. I got swept up in work and video game addiction before the internet had much of anything interesting to say about the topics presented here and I feel like I'm perpetually behind now. I'm mostly a humanist, but I've decided that what I like about humans is how we represent the apex of Life's warriors in its ultimately unwinnable war on entropy. I love conscious minds for their ability to cooperate and exhibit other behaviors which help wage this pointless yet beautiful war on pointlessness. 
I want us to win, even as I believe it is hopeless. I think of myself as a Complexitist. As a member of a class of the most complex things in the known universe, a universe which seems to want to suck all complex things into black holes or blow them apart, I value that which makes us more complex and interesting, and abhor that which reduces our complexity (death, etc.). I think humans who attack other humans are traitors to our species and should be retrained or cryogenically frozen until they can be fixed or made harmless. Like Yudkowsky, I think death is not something we should just accept as an unavoidable fact of life. I don't want to die until I've seen literally everything.
Replies from: cowtung, CCC↑ comment by cowtung · 2014-08-21T01:46:10.121Z · LW(p) · GW(p)
Am I the first person to join this site in 2014, or is this an old topic? Someone please point me in the right direction if I'm lost.
Replies from: Salivanth, army1987↑ comment by Salivanth · 2014-08-21T05:57:05.398Z · LW(p) · GW(p)
Welcome to Less Wrong!
This is an old topic. Note the title: Welcome to Less Wrong! (2012). I'm not sure where the new topic is, or even if it exists, but you should be able to search for it.
I recommend starting with the Sequences: http://wiki.lesswrong.com/wiki/Sequences
The sequence you are looking for in regards to "right" and "should" is likely the Metaethics Sequence, but said sequence assumes you've read a lot of other stuff first. I suggest starting with Mysterious Answers to Mysterious Questions, and if you enjoy that, move on to How to Actually Change Your Mind.
Replies from: cowtung↑ comment by cowtung · 2014-09-13T02:41:12.143Z · LW(p) · GW(p)
Thank you, I have reposted in the correct thread. Not sure why I had trouble finding it. I think what I'm on about with regard to "deserve" could be described as simply Tabooing "deserve", a la http://lesswrong.com/lw/nu/taboo_your_words/ I'm still working my way through the sequences. It's fun to see the stuff I was doing in high school (20+ years ago) which made me "weird" and "obnoxious" coming back as some of the basis of rationality.
↑ comment by A1987dM (army1987) · 2014-08-21T09:45:04.583Z · LW(p) · GW(p)
The latest welcome thread is this one; traditionally a new one is started whenever the old one gets 500 comments.
↑ comment by CCC · 2014-08-21T10:27:49.067Z · LW(p) · GW(p)
You can get more optimal pragmatism if you throw the concept away, and shoot straight for optimal outcomes.
Hmmm. So, in short, you propose first deciding on what the best outcome will be, and then (ignoring the question of who deserves what) taking the actions that are most likely to lead to that outcome.
That seems quite reasonable at first glance; but is it not the same thing as saying that the ends justify the means? That is to say, if the optimal outcome of a situation can only be reached by killing five people and an almost-as-good outcome results from not killing those five people, then would you consider it appropriate to kill those five people?
Replies from: cowtung↑ comment by cowtung · 2014-09-13T00:45:24.326Z · LW(p) · GW(p)
Can you describe a situation where the whole of the ends don't justify the whole of the means where an optimal outcome is achieved, where "optimal" is defined as maximizing utility along multiple (or all salient) weighted metrics? I would never advocate a myopic definition of "optimal" that disregards all but one metric. Even if my goal is as simple as "flip that switch with minimal action taken on my part", I could maybe shoot the light switch with a gun that happens to be nearby, maximizing the given success criteria, but I wouldn't do that. Why not? I have many values which are implied. One of those is "cause minimal damage". Another is "don't draw the attention of law enforcement or break the law". Another is "minimize the risk to life". Each of these has various weights, and usually takes priority over "minimize action taken on my part". The concept of "deserve" doesn't have to come into it at all. Sure, my neighbor may or may not "deserve" to be put in the line of fire, especially over something as trivial as avoiding getting out of my chair. But my entire point is that you can easily break the concept of "deserve" down into component parts. Simply weigh the pros and cons of shooting the light switch, excluding violations of the concept of "deserve", and you still arrive at similar conclusions, usually. Where you DON'T reach the same conclusions, I would argue, are cases such as incarceration, where treating inmates as they deserve to be treated might have worse outcomes than treating them in whatever way has optimal outcomes across whichever metrics are most salient to you and the situation (reducing recidivism, maximizing human thriving, life longevity, making use of human potential, minimizing damage, reducing expense...).
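As a toy sketch of what "maximizing utility along multiple weighted metrics" could mean for the light-switch example (the metric names and weights here are invented for illustration, not a serious proposal):

```python
# Each action is scored as a weighted sum of its outcomes; "deserve"
# never appears, only the implied values and their weights.
weights = {"damage": -5.0, "legal_risk": -8.0, "risk_to_life": -100.0,
           "effort_saved": 1.0}

actions = {
    "get up and flip the switch": {"damage": 0, "legal_risk": 0,
                                   "risk_to_life": 0, "effort_saved": 0},
    "shoot the switch": {"damage": 3, "legal_risk": 5,
                         "risk_to_life": 2, "effort_saved": 1},
}

def utility(outcomes: dict) -> float:
    """Weighted sum of an action's outcomes across all salient metrics."""
    return sum(weights[metric] * amount for metric, amount in outcomes.items())

best = max(actions, key=lambda name: utility(actions[name]))
print(best)  # get up and flip the switch
```

The gun loses badly once the implied values get explicit weights, which is the whole point: the conclusion falls out of the metrics, with no appeal to who deserves what.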
The strawman you have minimally constructed, where there is some benefit to murder, would have to be fleshed out a bit before I'd be convinced that murder becomes justifiable in a world which analyzes outcomes without regard to who deserves what, and instead focuses on maximizing along certain usually mutually agreeable metrics, which naturally would have strong negative weights against ending lives early. The "deserve" concept helps us sum up behaviors that might not have immediate obvious benefits to society at large. The fact that we all agree upon a "deserve" based system has multiple benefits, encouraging good behavior and dissuading bad behavior, without having to monitor everybody every minute. But not noticing this system, not breaking it down, and just using it unquestioningly, vastly reduces the scope of possible actions we even conceive of, let alone partake in. The deserve based system is a cage. It requires effort and care to break free of this cage without falling into mayhem and anarchy. I certainly don't condone mayhem. I just want us to be able to set the cage aside, see what's outside of it, and be able to pick actions in violation of "deserve" where those actions have positive outcomes. If "because they don't deserve it" is the only thing holding you back from setting an orphanage on fire, then by all means, please stay within your cage.
Replies from: CCC↑ comment by CCC · 2014-09-13T17:13:40.380Z · LW(p) · GW(p)
Can you describe a situation where the whole of the ends don't justify the whole of the means where an optimal outcome is achieved, where "optimal" is defined as maximizing utility along multiple (or all salient) weighted metrics?
Easily, as long as I'm permitted to choose poor metrics, or to choose metrics that don't align with my values. But then the problem with the example would be poor choice of metrics...
I have many values which are implied. One of those is "cause minimal damage". Another is "don't draw the attention of law enforcement or break the law". Another is "minimize the risk to life".
Ah, that's important. By selecting the right values, and assigning weights to them carefully, you bring suitable consideration of the means back.
The difficulty is that choosing the right metrics is a non-trivial problem. The concept of "deserving" is a heuristic: not always accurate, but close enough to work most of the time, and far quicker to calculate than considering every possible influence on a situation.
Having said that, of course, it is not always accurate. Sometimes the outcome that someone deserves is not the best outcome; as with many heuristics, it's worth thinking very carefully (and possibly talking the situation over with a friend) before breaking it. But that doesn't mean that it should never be broken, and it certainly doesn't mean it should never be questioned.
(Incidentally, every situation that I can work out where there appears to be some benefit to murder either comes down to killing X people in order to save Y people, where Y>X (in short, pitting the value "minimize the risk to life" against itself), or requires a near-infinite human population, which we certainly don't have yet.)
comment by Waterd · 2012-07-24T01:54:52.097Z · LW(p) · GW(p)
I came to this site in search of truth. Or at least to find some people who will help me identify what is real or true and what is not. I think one of my tools for doing that is to debate with other people seeking the same things I am. Not many people are really interested in that imo, or educated enough to help me as much as I need. Because of this problem a friend of mine directed me to this site, where I should find those people. The huge problem here is how this community decides to trade information. This "article/comment" format is AWFUL imo, compared to a forum. I really can't see how I can use this site for my benefit, even though it seems there should be people here who would help me do that. Is there a place LIKE THIS, but with the difference that there is a FORUM instead of this article/comment format? Thanks.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-07-24T18:16:11.901Z · LW(p) · GW(p)
What facts — aside from your personal familiarity — about a forum-style site do you think are beneficial?
Replies from: Waterd↑ comment by Waterd · 2012-07-24T22:45:48.766Z · LW(p) · GW(p)
The fact that you can have subforums, and you can find the newest and most active threads in each subforum category; also that you can organize those subforums by thread titles only, instead of having to see half of each thread taking up space in the listing, making it harder to find what you are looking for.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-07-25T01:23:38.622Z · LW(p) · GW(p)
Yeah, it's tricky to follow particular threads on this site, and we only really have two "subforums", namely Main and Discussion. I think the Reddit style lends itself more to long articles than the forum style does, though; and most forum systems I've seen don't have tree-structured threads, which makes following discussions hard.
All in all, I'd prefer a good Usenet newsreader, but that's pretty much history now.
comment by aaronde · 2012-07-21T05:41:48.458Z · LW(p) · GW(p)
Hello, everyone.
Recent college grad here from the Madison area. I've been aware of this site for years, but started taking it seriously when I stumbled upon it a few months ago, researching evidential (vs causal) decision theory. I realized that this community seriously discusses the stuff I care about - that really abstract, high-minded stuff about truth, reality, and decisions. I'm a math person, so I'm more interested in the theoretical, algorithmic side of this. I've been a rationalist since, at 15, I realized my religion was bunk, and decided I needed to know what else I was wrong about.
comment by Gaviteros · 2012-07-19T06:40:52.907Z · LW(p) · GW(p)
Hello, Less Wrong!
My name is Ryan and I am a 22 year old technical artist in the Video Game industry. I recently graduated with honors from the Visual Effects program at Savannah College of Art and Design. For those who don't know much about the industry I am in, my skill set is somewhere between a software programmer, a 3D artist, and a video editor. I write code to create tools to speed up workflows for the 3D things I or others need to do to make a game, or cinematic.
Now, I found lesswrong.com through the Harry Potter and the Methods of Rationality podcast. Up until that point I had never heard of rationalism as a current state of being... so far I greatly resonate with the goals and lessons that have come up in the podcast, and with what I have seen about rationalism. I am excited to learn more.
I wouldn't go so far as to claim the label for myself as of yet, as I don't know enough and I don't particularly like labels for the most part. I also know that I have several biases; I feel like I know the reasons and causes for most, but I have not removed them from my determinative process.
Furthermore I am not an atheist, nor am I a theist. I have chosen to let others figure out and solve the questions of sentient creators through science, and I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything. I just try to leave religion out of most of my determinations.
Anyway! I'm looking forward to reading and discussing more with all of you!
Current soapbox: the educational system's de-emphasis of critical thinking skills.
If you are interested you can check out my artwork and tools at www.ryandowlingsoka.com
comment by seagreen · 2012-08-17T12:23:38.036Z · LW(p) · GW(p)
Hey everyone!
I'm a programmer from the Triangle area on the east coast. I'm interested in applied rationality through things like auto-analytics.[1] I'm also interested in how humans can best adapt to information technology. Seriously, people, this internet thing? It is out there!
From what I gather of LW stereotypes my personal life is so cliche I'm not even going to bother. Uh, I think tradition is kind of important? I guess that makes me kind of unique . . .
[1] Specifically I'm interested in getting a standardized database format for things like food consumed, exercise, time spent, etc. Once we have that centralized, apps could be broken up into publishing, storage, and analysis functions, which would have some huge advantages over the current system. For one thing, non-technical users wouldn't have to be scared of getting their data locked into an obsolete format. For another, it would be easier to try out new systems. If this idea interests you (or you think it sucks and are willing to explain why), let me know!
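One possible sketch of what such a standardized record could look like; every field name here is a guess for illustration, not a proposed standard:

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class Record:
    kind: str        # e.g. "food", "exercise", "time"
    timestamp: str   # ISO 8601, so every app parses dates the same way
    value: float
    unit: str
    tags: List[str]

entry = Record(kind="exercise", timestamp="2012-08-17T12:00:00",
               value=5.0, unit="km", tags=["running"])

# A publishing app emits JSON like this; separate storage and analysis
# apps can consume it without caring which app produced it.
print(json.dumps(asdict(entry), sort_keys=True))
```

The point of the split is that the format, not any one app, becomes the contract between publishing, storage, and analysis.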
comment by lazado · 2012-04-02T11:31:35.995Z · LW(p) · GW(p)
hi
i don't think much of rationality but i like smart people.
now pls hug me in a very rational way.
thanks
Replies from: TimS↑ comment by TimS · 2012-04-02T12:02:11.587Z · LW(p) · GW(p)
Welcome to Lesswrong. We like rationality because it helps us achieve our goals. You might call it optimizing our lives.
Unfortunately, mass media portrayals of "rationality" make it seem like smart people want to lose all emotions and become Vulcans. That's a stupid goal, and not what we mean by rationality.
If you have something you want to talk about, click Discussion in the heading, then post in the open thread.
Replies from: ciphergoth, lazado, Dmytry↑ comment by Paul Crowley (ciphergoth) · 2012-04-02T12:56:09.520Z · LW(p) · GW(p)
I've started a LessWrong wiki page for Straw Vulcan.
↑ comment by lazado · 2012-04-02T19:06:18.831Z · LW(p) · GW(p)
so the easiest way of being rational would be to stop thinking at all and just exist, i.e. going to work, then home, sleep, and so on.
doesn't sound like a happy and intelligent life to me
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-02T19:33:51.448Z · LW(p) · GW(p)
On the off chance that you're actually trying to engage seriously here... you nod here in the general direction of an important point that comes up a fair bit on this site, namely that for human purposes it's not enough to be arbitrarily good at optimizing, it also matters what I'm optimizing for.
Put another way: sure, one way of becoming really successful at achieving my goals is by discarding all goals that are difficult to achieve. One way of becoming really successful at promoting my values is by discarding my existing values and replacing them with some other values that are easier to promote. This general strategy is often referred to here as "wireheading" and generally dismissed as not worth pursuing.
Admittedly, it's not precisely clear to me that what you're describing here matches up to that strategy, but then it's not precisely clear to me why you consider it the easiest way of being rational either.
↑ comment by Dmytry · 2012-04-02T12:40:04.716Z · LW(p) · GW(p)
Welcome to Lesswrong. We like rationality because it helps us achieve our goals. You might call it optimizing our lives.
You guys seriously should invest in general problem solving exercises. edit: to clarify. You discuss what ways of deciding are wrong. That's great. The issue is, one accomplishes one's goals by solving problems. E.g. you don't like to spend time driving to your job? You can choose between driving and public transportation, taking into account the flu infection rate, etc. Great, now your life is a few percent better. OR you can solve the problem of how do I live closer to my job, which has trillions of solutions that are not readily apparent (including a class of solutions which require high intelligence - e.g. you could invent something useful, patent it, and get rich), but which can make your life massively better.
Replies from: alex_zag_al↑ comment by alex_zag_al · 2012-04-02T12:58:41.226Z · LW(p) · GW(p)
Would just googling "problem solving exercises" be enough? What are you talking about, exactly?
Replies from: Dmytry, XiXiDu↑ comment by Dmytry · 2012-04-02T13:14:31.816Z · LW(p) · GW(p)
Clarified in the edit. This site very much focuses on choosing rationally (between very few options), what one should believe, and such. If you want to achieve your goals, you need to get better at problem solving, which you do by solving various problems (duh). Problem solving involves picking something good out of an enormous space of possibilities.
↑ comment by XiXiDu · 2012-04-02T13:37:11.139Z · LW(p) · GW(p)
You guys seriously should invest in general problem solving exercises.
Would just googling "problem solving exercises" be enough? What are you talking about, exactly?
I think what Dmytry is talking about is that Less Wrong does not live up to its goals.
Eliezer Yudkowsky once wrote that rationality is just the label he uses for his "beliefs about the winning Way - the Way of the agent smiling from on top of the giant heap of utility."
Wouldn't it make sense to assess if you are actually winning by solving problems or getting rich etc.? At least if there is more to "raising the sanity waterline" than epistemic rationality, if it is actually supposed to be instrumentally useful.
Replies from: Dmytry, katydee↑ comment by Dmytry · 2012-04-02T19:07:32.908Z · LW(p) · GW(p)
Yea, basically that. Every fool can make the correct choice between two alternatives with a little luck and a coin toss. Every other fool can get it by looking at the first fool. You get heaps of utility by looking in giant solution spaces where this doesn't work. You don't get a whole lot by focusing all your intellectual might on doing something that fools do well enough.
See, Eliezer grew up in a religious family, and his idea of intelligence is choosing the correct beliefs. I grew up in a poor family; my idea of intelligence is much more along the lines of actually succeeding via finding solutions to practical problems. Nobody's going to pay you just because you correctly don't believe in God. Not falling for the sunk cost fallacy at very best gets you to square one with lower losses - that's great, and laudable, and is better than sinking more costs, but it's only a microscopic piece of problem solving. The largest failure of reasoning is the failure to even get a glimpse of the winning option, because it's lost inside a huge space.
comment by deathbyzen · 2012-07-23T23:14:58.060Z · LW(p) · GW(p)
Hey all! I actually registered to ask a question. I'm trying to find this website that was linked from the comments section of a LW article. I believe the comment was left on a "Quotes" post this year. Basically, it was a website that seemed to be about either a technique or a book that was about listening to the different parts of your brain or self. Sorry if this is really vague, I don't know if anyone is ever going to actually read this, but I would appreciate an email to my username at gmail. I'll check back here again, and maybe try to get some karma so I can post a discussion thread about this. It's really bugging me because I can only vaguely remember what the website was about to begin with.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-07-23T23:27:51.995Z · LW(p) · GW(p)
a technique or a book that was about listening to the different parts of your brain or self
This sounds like Internal Family Systems.
comment by OrphanWilde · 2012-06-21T18:59:57.290Z · LW(p) · GW(p)
Howdy.
I was a sometimes-reader of Overcoming Bias back in the day, and particularly fond of the articles on quantum physics. Philosophically, I'm an Objectivist. I identify a lot of people as Objectivist, however, including a lot of people who would probably find it a misnomer.
I created my account pretty much explicitly because I have some thoughts on theoretical (some might prefer the term "quantum", but for reasons below, this isn't accurate) physics and wanted (at this point, needed might be more accurate) feedback, and haven't had much success yet getting anything, even so much as a "You're too stupid to have this conversation."
So without much further ado...
Light is a waveform distortion in gravity caused by variation in the position of the gravitic source; gravity itself has wavelike properties at the very least (it could be a particle, it could be a wave, both work; in the particle interpretation, light is a wavelike variation in the position of the particles, caused by the wavelike variation in the originating particle's position). Strong atomic forces, weak atomic forces, gravity, and the cosmological constant/Hubble's constant are observable parts of the gravitic wave, which is why the cosmological constant looks a lot more variable than it should (as it varies with distance). A lot of the redshifting we see is not in fact galaxies moving away from us, but a product of that the medium (gravity) that light is traveling in is spreading out (for reasons I'll get into below) as it attenuates. Black holes are not, in fact, infinitely dense, but merely extremely so.
Gravity moves at the speed of light - light is, in effect, a shift in gravity. This is why matter cannot exceed the speed of light - it cannot overcome the infinitely high initial peak of its own gravitic wave. I believe this is also the key to why the wavelength of gravity increases with distance - the gravitic wave is traversing space which has already been warped by gravity. The gravitic wave moves slower where gravity is bending space to increase distance, and faster where gravity is bending space to increase space. This results in light becoming spread out in certain positions in the spectrum, and concentrated in others; a galaxy that appears redshifted to us will appear blueshifted from points both closer and further away on the same line of observation, and redshifted again closer and further away respectively yet still. Most galaxies appear redshifted because this is the most likely/stable configuration. (Blueshifted galaxies would either be too far away to detect with current technology, or close enough that they would be dangerously close. This is made even more complicated by the fact that motion can produce exactly the same effects; a galaxy in the redshift zone could appear blueshifted if it is approaching us with enough velocity, and the converse would also hold true.)
The name "quantum mechanics" is fundamentally wrong, but accurate nonetheless. Energy does not come in discrete quanta, but appears to because the number of stable configurations of matter is finite; we can only observe energy when it makes changes to the configurations of matter, which results in a new stable configuration, producing an observable stepladder with discrete steps of energy corresponding to each stable state.
I go with a modified version of Everett's model for uncertainty theory. The observer problem is a product of the fact that the -observer's- position is uncertain, not the observed entity. (This posits at least five dimensions.) Our brains are probably quantum computers; we're viewing a slice of the fifth dimension with a nonzero scalar scope, which means particles are not precisely particulate.
Dark matter probably has no special properties; it's just matter such that the substructure prohibits formative bonds with baryonic matter.
Particularly contentiously, there probably are no "real" electrical forces, these are effects produced by the configurations of matter. Antimatter may or may not annihilate matter; I lean towards the explanation that antimatter is simply matter configured such that an interaction with matter renders dark matter. (The resulting massive reorganization is what produces the light which is emitted when the two combine; if they annihilate, that would stop the gravitic wave, which would also be a massive gravitic distortion as far as other matter is concerned. Both explanations work as far as I'm concerned)
(For those curious about the electrical forces comment, I'm reasonably certain electrical forces can be explained as the result of modeling the n-body problem in a gravity-as-a-wave framework, specifically the implications of Xia's work with the five-body configuration. I suspect an approximation of his configuration with a larger number of his particles becomes not merely likely, but guaranteed, given numbers of particles of varying mass - which results in apparent attractive and repulsive forces as the underlying matter is pushed in directions orthogonal to the orbiting masses, an effect which is amplified when the orbits are themselves changing in orthogonal directions. The use of the word "particle" here is arbitrary; the particles are themselves composed of particles. Scale is both isotropic and homogeneous. As above, so below.)
Time is not a special spatial dimension. It's not an illusion, either. Time is just a plain old spatial dimension, no different from any other. The universe is constant; it is our position within it which is changing, a change which is necessitated by our consciousness. The patterns of life are elegant, but no more unusual than the motions of the planets; life, and motion, is just the application of rules about the configuration of contiguous space across large amounts of that space.
This means that the gravitic wave is propagated across time as well as all the other spatial dimensions; we're experiencing gravity from where objects will be in the future, and where they were in the past, but in most cases this behavior cancels out.
Replies from: Risto_Saarelma, Dreaded_Anomaly, Zack_M_Davis, thomblake↑ comment by Risto_Saarelma · 2012-07-12T13:37:02.916Z · LW(p) · GW(p)
The general mile-a-minute solve-all-of-physics style of presentation here is tripping my crackpot sensors like crazy. You might want to pick one of your physics topics and start with just that.
Also, wondering how much you actually know about this stuff. I'm not a physicist, but ended up looking up bits about relativistic spacetime when trying to figure out what on earth Greg Egan is going on about these days. Now this bit,
Time is not a special spatial dimension. It's not an illusion, either. Time is just a plain old spatial dimension, no different from any other.
seems to be just wrong. A big deal with Minkowski spacetime is that the time dimension has a mathematically different behavior from the three space dimensions, even when you treat the whole thing as a timeless 4-dimensional blob. You can't plug in a fourth "spatial dimension, no different from any other", and get the physics we have.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-07-12T14:06:29.782Z · LW(p) · GW(p)
Minkowski spacetime is primarily concerned with causal distance; whether event A can be causally related to event B. Time has a negative sign when you're considering causality, because your primary goal is to see whether any effect from event A could have been involved in event B. Using the Minkowski definition of time, an object A ten million light years away from object B has a negligible spacetime distance from that object ten million years in the future and ten million years in the past from any given point in time.
↑ comment by Dreaded_Anomaly · 2012-07-12T14:16:52.651Z · LW(p) · GW(p)
Light is a waveform distortion in gravity caused by variation in the position of the gravitic source
This sounds like nonsense from the start. It's a bunch of words put together in a linguistically-acceptable way, but it's not a meaningful description of reality. I suspect the reason you have had trouble getting feedback is that this presentation of your theory sets off immediate and loud "crackpot" alarms.
For example: light, photons, are quanta of the electromagnetic field. To get more technical, photons are a mixture of the two neutral electroweak bosons B_0 and W_0 due to electroweak symmetry breaking. I have done these calculations (in quantum mechanics and quantum field theory) as well as some of the many experiments which support them. I understand these claims as beliefs which constrain my anticipated experiences.
If you are going to attempt to replace apparently all of contemporary physics with a new theory, you must specify how that theory is better. Does it give better explanations of current results, trading complexity with how well it fits the data? Does it predict new results? How can we test the theory, and how does it constrain our expectations? What results would falsify the theory? Answering these questions, i.e. doing science, requires careful mathematical theory along with support from experiment. A few pages of misused jargon - essentially gibberish - does not qualify.
I'm not interested in engaging with this theory point-by-point; there's not enough substance here to do so. My goal here is to provide you with some idea of how to be taken seriously when proposing new scientific theories. Throwing around a bunch of unsupported, incomprehensible claims is not the way.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-07-12T14:53:31.306Z · LW(p) · GW(p)
It has a few predictions, and a few falsifications; for light as a waveform, it predicts, for example, that any region of space where light cannot escape, also will not propagate gravitic waves. It also predicts that singularities with sufficient energy will disperse in a manner inconsistent with Hawking Radiation, and may predict an upper bound on the mass of singularities.
The light as a gravitic wave idea you take particular offense to here would predict that the frequency of blackbody radiation is exactly the same as the frequency of motion, and more broadly that the frequency of motion of particles is precisely the same as the frequency of light emitted by those particles. Any object in motion should generate electromagnetic waves. Two particles in a spacetime-synchronous oscillation should exhibit no apparent electromagnetic effects on one another. Also, a particle in electromagnetic radiation should exhibit predictably different relativistic behavior, such that the idea could be tested by exposing a series of particles with short half-lives to high-amplitude, low-frequency electromagnetic radiation and seeing how those half-lives change; because light would represent gravitational density, it should be possible to both increase and decrease the half life in a predictable manner according to relativity.
Replies from: Dreaded_Anomaly↑ comment by Dreaded_Anomaly · 2012-07-12T17:49:43.614Z · LW(p) · GW(p)
It's good that you have predictions, although this is still just words and math would be much clearer.
Fundamentally, light as a representation of gravitational density or as a gravity wave does not make sense. We know the properties of photons very well, and we know the properties of gravity very well from general relativity. The two are not compatible. At a very simple level, gravity is solely attractive, while electromagnetism can be both attractive and repulsive. Photons have spin 1, while a theoretical graviton would have spin 2 for a number of reasons. They have different sources (charge-current for photons, stress-energy for gravity). There is a lot of complicated, well-developed theory underlying these statements.
The frequency of light emission is not the same as the frequency of motion of the particle. In matter, light is emitted by electrons transitioning from a higher energy level to a lower energy level. A simple model for light emission is an atom exposed to a time-dependent (oscillatory) perturbing electric field. The frequency of the electric field affects the probability of emission but not the frequency of the light; that is only determined by the difference in energy between the high and low energy levels. (This must be true just from conservation of energy.) The electric field need not be resonant with the expected light frequency for emission to occur, though that resonance does unsurprisingly maximize the transition probability. This model comes from Einstein and there are many good, accessible discussions at an undergraduate level, e.g. in Griffith's Quantum Mechanics. It makes many validated predictions, such as the lifetimes of excited atomic states.
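The claim that the emitted frequency depends only on the energy-level spacing can be checked with a one-line calculation. For example, for the hydrogen n=2 → n=1 (Lyman-alpha) transition, using the standard 13.6 eV ground-state energy:

```python
# Photon frequency from an atomic transition: nu = (E_high - E_low) / h.
# Hydrogen n=2 -> n=1: Delta E = 13.6 eV * (1 - 1/4) = 10.2 eV.
h = 6.626e-34    # Planck constant, J*s
eV = 1.602e-19   # joules per electron-volt
c = 2.998e8      # speed of light, m/s

delta_E = 13.6 * (1 - 1 / 2**2) * eV  # transition energy, joules
nu = delta_E / h                      # photon frequency, Hz
wavelength_nm = c / nu * 1e9
print(round(wavelength_nm, 1))        # ~121.6 nm, the observed Lyman-alpha line
```

The frequency of whatever perturbing field drove the transition never enters the calculation; only the level spacing does, exactly as conservation of energy requires.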
Further, not all motion has a frequency, and not all objects in motion emit EM radiation. Neutrinos are constantly in motion and have never been measured to give off electromagnetic waves. If they did, they'd be a lot easier to detect! In the Standard Model, they don't couple to photons because they have no electromagnetic charge.
I'm not sure what you mean by a "spacetime-synchronous oscillation," but two electrons with the same rest frame definitely interact electromagnetically.
The experiment you describe for testing half-lives with varying electromagnetic radiation could be done in an undergraduate lab with barium-137. I don't know of any experiments demonstrating such a variation in half-life.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-07-12T19:54:41.368Z · LW(p) · GW(p)
Note that I challenge this assertion about gravity a bit later on, stating that it itself is a wave, both attracting and repelling at different distances.
The perturbing electric field in your case isn't moving matter, though; it takes sufficient levels of energy to force an electron to transition to a different energy level, which corresponds (in a very loose sense) with a different orbit. I'll leave that alone, though, because either way, there's an experiment which can confirm or deny my suspicions.
Not all waves have a frequency, either, in the strictest sense; waves can be non-oscillatory. Doing some research into Cherenkov radiation on this matter, as I may be able to formulate a test for this.
Also, two electrons with the same rest frame -don't- interact electromagnetically, hence why electrons in cathode ray tubes travel in straight lines. (I'm pretty sure this holds; let me know if there's something I'm missing here.) (Unfortunately, standard theory already explains this, which is disappointing.)
(Thank you very much for your responses. They're pointing me in some very good directions to do research.)
Replies from: Dreaded_Anomaly↑ comment by Dreaded_Anomaly · 2012-07-12T21:23:18.581Z · LW(p) · GW(p)
Note that I challenge this assertion about gravity a bit later on, stating that it itself is a wave, both attracting and repelling at different distances.
Yes, you state that, without proof or support. Electromagnetism and gravity are different forces, both with infinite range but different strengths and behaviors, to the best of our experimental and theoretical knowledge. People measure these things at every scale we can access.
The perturbing electric field in your case isn't moving matter, though; it takes sufficient levels of energy to force an electron to transition to a different energy level, which corresponds (in a very loose sense) with a different orbit. I'll leave that alone, though, because either way, there's an experiment which can confirm or deny my suspicions.
Not all waves have a frequency, either, in the strictest sense; waves can be non-oscillatory. Doing some research into Cherenkov radiation on this matter, as I may be able to formulate a test for this.
Now you're moving goalposts and contradicting your earlier claims.
Also, two electrons with the same rest frame -don't- interact electromagnetically, hence why electrons in cathode ray tubes travel in straight lines.
Yes, two electrons in the same rest frame interact electromagnetically. Of course, if there is not some restoring force opposing their repulsion, they will accelerate away from each other and no longer be in the same rest frame. Cathode rays travel in straight lines because they are subjected to a potential large enough to overcome the repulsion between the electrons. If you have just an electron gun without the rest of the apparatus, the beam will spread out.
↑ comment by Zack_M_Davis · 2012-06-21T20:30:12.161Z · LW(p) · GW(p)
I have some thoughts on [...] physics and wanted (at this point, needed might be more accurate) feedback, and haven't had much success yet getting anything
I don't know very much physics, but this is wrong:
Time is not a special spatial dimension. [...] Time is just a plain old spatial dimension, no different from any other.
Everything I've read about special relativity says that the interval between two events in spacetime is given by s = sqrt((x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2 - (t-t_0)^2), the square root of the sum of the squares of the differences in their spatial coordinates minus the square of the difference in the time coordinate; the minus sign in front of the t^2 term says that time and space don't behave the same way.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-06-21T21:36:25.843Z · LW(p) · GW(p)
That's the special relativity interval; it's used to determine the potential relationships between two events by determining if light could have passed from point 0 to 1 in the time between two events in two (potentially) different locations. It can be considered a lower bound on the amount of time that can pass between two events before they can be considered to be causally related, or an upper bound on the amount of space that separates two events, or, more generally, the boundary relationship between the two.
Or, to be more concise, it's a boundary test; it's not describing a fundamental law of the universe, although it can be used to test if the laws of the universe are being followed.
Which leads to the question - what boundary is it testing, and why does that boundary matter?
Strictly speaking, as Eliezer points out, we could do away with time entirely; it doesn't add much to the equation. I prefer not to, even if it implies even weirder things I haven't mentioned yet, such as that the particles five minutes from now are in fact completely different particles than the particles now. (Not that it makes any substantive difference; the fifth dimension thing already suggests, even in a normal time framework, we're constantly exchanging particles with directions we're only indirectly aware of. And also, all the particles are effectively the same, anyways.)
That aside, within a timeful universe, change must have at least two reference points, and what that boundary is testing is the relationship between two reference points. It doesn't actually matter what line you use to define those reference points, however.
If you rotated the universe ninety degrees, and used z as your reference line, z would be your special value. If you rotated it forty five degrees, and used zt as your reference line, zt would be your special value. (Any orthogonal directions will do, for these purposes, they don't have to be orthogonal to the directions as we understand them now.)
Within the theory here, consciousness makes your reference line special, because consciousness is produced by variance in that reference line, and hence must measure change along that reference line. The direction the patterns propagate doesn't really matter. Z makes as good a line for time as T, which is just as good as ZT, which is just as good as some direction rotated twelve degrees on one plane, seven degrees on the next, and so on.
Which is to say, we make time special, or rather the conditions which led to our existence did.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2012-06-21T22:25:42.736Z · LW(p) · GW(p)
It doesn't actually matter what line you use to define those reference points, however. [...] Within the theory here, consciousness makes your reference line special [...] The direction the patterns propagate doesn't really matter.
I'm not sure I understand what you mean. Can you describe a real or hypothetical experiment that would have different results depending on whether or not time is an artifact of consciousness?
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-06-22T13:04:26.110Z · LW(p) · GW(p)
Not directly, but a proof that gravity propagates through time as easily as through space should go some of the way towards demonstrating that it is a normal spatial dimension, and I've considered a test for that -
Gravity should, according to the ideas here, affect objects both in the past, and in the future. So if you have a large enough object to reliably detect its gravitational force, and a mechanism to stop it very suddenly, then, if you position yourself orthogonal to its resting place respective to its line of motion, at the moment the object stops, the center of gravity of its gravitational field should be further behind its line of motion than its current center of mass.
A direct test... I'll have to ponder that one.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2012-06-24T08:57:08.172Z · LW(p) · GW(p)
if you have a large enough object to reliably detect its gravitational force, and a mechanism to stop it very suddenly, then, if you position yourself orthogonal to its resting place respective to its line of motion, at the moment the object stops, the center of gravity of its gravitational field should be further behind its line of motion than its current center of mass.
But it sounds to me as if this is just saying that gravity takes time to propagate, which I'm told is already a standard prediction of relativity, so it doesn't help me understand your claim. Can you express your ideas in math?
When I try to make the setup you describe more concrete, I end up thinking something like this: imagine a hypothetical universe that works in a mostly Newtonian way but with the exception that gravity propagates at some finite speed. (Of course, this is not how reality actually works: relativity is not just Newtonian physics with an arbitrary speed limit tacked on. But since I don't actually know relativity, I'm just going to use this made-up toy model instead with the hope that it suffices for the purposes of this comment---although the whole idea could just turn out to be utterly inconsistent in some way that isn't obvious to me at my current skill level.) Fix a coordinate system in space, choosing units of length and time such that the maximum speed is 1. Say there's an object with mass m traveling towards the origin along the negative y-axis at a constant speed of 0.5, and say furthermore that I have mass n, and I'm floating in space at (1, 0, 0). Then, at the moment when the object crosses the origin (you said it stopped suddenly in your setup, but I don't understand how that's relevant, so I'm ignoring it), I can't feel the gravity coming from the object at the origin yet because it would take a whole time unit to arrive at my position, but I should feel the gravity that's "just arriving" from one of the object's earlier positions---but which earlier position? Well, I couldn't figure that out in the few minutes that I spent thinking about the problem ...
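For what it's worth, the "which earlier position" question in this toy model has a closed-form answer. Taking the setup as stated (propagation speed 1, source moving at 0.5 along the y-axis and reaching the origin at t = 0, observer at (1, 0, 0)), the emission time solves a quadratic:

```python
import math

# Toy model from the comment: gravity propagates at speed 1; the source's
# position at time t is (0, 0.5*t, 0), reaching the origin at t = 0; the
# observer sits at (1, 0, 0). A signal arriving at t = 0 was emitted at
# some t_e < 0 satisfying  distance(emission point, observer) = -t_e:
#     sqrt(1 + (0.5*t_e)**2) = -t_e
# Squaring gives 1 + 0.25*t_e**2 = t_e**2, hence t_e = -2/sqrt(3).
t_e = -2 / math.sqrt(3)
y_e = 0.5 * t_e  # the "earlier position" on the y-axis

# Consistency check: travel distance equals travel time.
assert abs(math.sqrt(1 + y_e**2) + t_e) < 1e-12
print(round(y_e, 4))  # -0.5774
```

So in this (admittedly non-relativistic) toy model, the observer at t = 0 feels gravity "just arriving" from the point (0, -1/sqrt(3), 0), where the object was roughly 1.15 time units earlier.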
But hopefully you see what I'm trying to do here. When you say the English sentence "Light is a waveform distortion in gravity caused by variation in the position of the gravitic source," I don't really know how to interpret that, whereas if I have a proof of a theorem or a worked problem, then that's something I can do actual work with and derive actual predictions from.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-06-25T13:25:51.529Z · LW(p) · GW(p)
The effect should continue past the point that gravity arrives from the current position - it will be very minute, as distance in time is related to distance in space by the speed of light (where the C in the interval formula comes from - C in m/s, time in s, very short periods of time are very "far away"), but if I'm correct, and gravity propagates through time as well as space, it should be there.
We stop the object very suddenly because otherwise gravity from the future will counter out gravity from the past - for each position in the past, for an object moving in a straight relativistic line, there will be an equidistant position in the future which balances out the gravity from the position in the past. That is, in your model, imagine that gravity is being emitted from every position the particle moving in the line is at, or was ever at, or ever will be at; at the origin, the total gravitic force exerted on some arbitrary point some distance away is centered at the origin. If the particle stops at the origin, the gravity will be distributed only from the side of the origin the particle passed through.
A second, potentially simpler test to visualize is simply that an object in motion, because some of its gravitic force (from the past and from the future) is consumed by vector mathematics (it's pulling in orthogonal directions to the point of consideration, and these orthogonal directions cancel out), exhibits less apparent gravitational force on another particle than one at rest. (Respective to the point of measurement.)
Drawing a little picture:

. .....................>

(A single particle in motion; breaking time into frames for visualization purposes; the first and the last period, being equidistant and with complementary vectors, cancel out all but the downward force; the same gravitational force is exerted as in the below picture, but some of it cancels itself out)
versus, over the same time frame:

. .

The second particle configuration should result in greater apparent gravity, because none of the gravity vectors cancel out.
As for interpreting it, imagine that gravity is a particle (this isn't necessary; indeed, no particles are necessary in this explanation, but it helps to visualize it). Now imagine a particle of mass M1 moving in a stable orbit. The gravitic particles emitted from M1 will vary in position over time according to the current position of M1, and indeed will take on a wavelike form. According to my model, this wavelike form -is- light; the variations in the positions of the gravitic particles create varying accelerations in particle M2, another mass particle some distance away, resulting in variable acceleration; insufficient or disoriented acceleration on particle M2 will merely result in it moving in a sinelike wave, propagating the motion forward; sufficient acceleration of the proper orientation may give it enough energy to jump to another stable orbit.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2012-06-25T20:54:59.494Z · LW(p) · GW(p)
Again, I suspect people will have a much better chance at understanding your ideas if you make your explanations much more concrete and specific---maybe even to the point of using particular numbers. Abstraction and generality and intuitive verbal descriptions are beautiful and great, but they only work if everyone involved has an adequate mental model of exactly what it is that's being abstracted over.
What do I mean, specifically and concretely, when I speak of specific and concrete explanations? Here's an example: let's consider two scenarios (very similar to the one I tried to describe in the grandparent)---
Problem One. There's a coordinate system in space with origin [x, y, z] = [0, 0, 0]. Suppose my mass is 80 kg, and that I'm floating in space ten meters away from the origin in the x-direction, so that my position is described as [10, 0, 0]. A 2000 kg object is moving at the constant velocity 10 m/s towards the origin along the negative y-axis, and its position is given as r(t) = [0, -50 + 10t, 0]. Calculate the force acting on me due to the gravity of the object at t=5, the moment the object reaches the origin.
Problem Two. Everything is the same as in Problem One, except that this time, the object's position is described by the piecewise-defined function r(t) = [0, -50 + 10t, 0] if t < 5 and r(t) = [0, 0, 0] if t >= 5---that is, the object is stopped at the origin. Again, calculate the force on me when t = 5.
Solutions for Newtonian Physics: The answers are the same for both problems. Two objects with mass m and M exert a force on each other with magnitude GmM/r^2. At t = 5, I'm still at [10, 0, 0], and the object is at the origin, so I should experience a force of magnitude G(80 kg)(2000 kg)/(10 m)^2 = (6.67 × 10^-11 m^3/(kg·s^2))(80 kg)(2000 kg)/(100 m^2) = 1.067 × 10^-7 N directed toward the origin.
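The Newtonian answer above can be confirmed in a couple of lines (a sketch using the rounded G = 6.67 × 10^-11 from the text):

```python
# Newtonian force between 80 kg at [10, 0, 0] and 2000 kg at the
# origin; identical for Problems One and Two, since only the
# current separation of 10 m enters the formula.
G = 6.67e-11          # gravitational constant, m^3 / (kg s^2)
m, M = 80.0, 2000.0   # masses, kg
r = 10.0              # separation, m

force = G * m * M / r**2
print(force)  # ~1.067e-7 N, directed toward the origin
```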
Now, you say that "for each position in the past, for an object moving in a straight relativistic line, there will be an equidistant position in the future which balances out the gravity from the position in the past," which suggests that your theory would compute different answers for Problem One and Problem Two. Can you show me those calculations? Or if the problem statement doesn't quite make sense (e.g., because it implicitly assumes an absolute space of simultaneity, which doesn't actually exist), could you solve a similar problem? I realize that this may seem tedious and elementary, but such measures are oftentimes necessary in order to explain technical ideas; if people don't know how to apply your ideas in very simple specific cases, then they have no hope of understanding the general case.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-06-26T13:42:26.719Z · LW(p) · GW(p)
To use a slightly different problem pair, because it would be easier for me to compute:
Problem one. I have a mass of 80 kg at point [10, 0] (simplifying to two dimensions, as I don't need z). A 2,000 kg object is resting at position [0, 0]. The Newtonian force of 1.0687 × 10^-7 N towards the origin should be accurate. [Edit: 1.067 × 10^-6 N, when I calculated it again. Forgot to update this section]
Problem two. I have a mass of 80 kg at point [10, 0]. A 2,000 kg object is moving at 10 m/s along the y-axis, position defined as r(t) = [0, -50 + 10t]. Using strictly the time interval t = 0 to t = 10, where t is in seconds, calculating the force when t = 5...
distance(t) = sqrt(10^2 + c^2(5 - t)^2)
Gravity(t) = 6.67*10^-11 * sum(80*2,000*distance(t), for t > 0, t < 10) * (10 / distance(t))
[Strictly speaking, this should be an integral over the whole of t, not a summation on a limited subset of t, but I'm doing this the faster, slightly less accurate way; the 10 / distance(t) at the end is to take only the y portion of my vectors, as the t portion of the gravitational vectors cancels out.]
Which gives, not entirely surprisingly, 1.067 * 10^-6 N directed to the origin. (I think your calculation was off by an order of magnitude, I'm not sure why.)
The difference between Newtonian gravity and gravity with respect to y is 3.38 × 10^-33. Which is expected; if the difference in gravitational force were greater, it would have been noticed a long time ago.
I probably messed up somewhere in there, because my brain is mush and it's been a while since I've mucked about with vectors, but this should give you the basic idea.
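As a sketch of the summation described above: I've assumed the comment's markup ate the multiplication asterisks, and I've read the mass-times-distance factor as the inverse-square m*M/distance(t)^2 that a force law calls for; both restorations are my assumptions, not the commenter's stated formula.

```python
import math

G = 6.67e-11   # gravitational constant, m^3 / (kg s^2)
c = 3.0e8      # speed of light, m/s
m, M = 80.0, 2000.0   # masses, kg

def distance(t):
    # Spacetime separation between the observer at [10, 0] and the
    # object's position at time t (the object passes the origin at t = 5).
    return math.sqrt(10.0**2 + c**2 * (5.0 - t)**2)

# Discrete sum over t = 1..9 s; the (10 / distance(t)) factor projects
# out the component directed from the observer toward the origin.
force = G * sum(m * M / distance(t)**2 * (10.0 / distance(t))
                for t in range(1, 10))
print(force)
```

Under this reading, every term except t = 5 is suppressed by the enormous c^2(5 - t)^2 inside distance(t), so the sum lands within rounding of the static Newtonian value of roughly 1.067 × 10^-7 N.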
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2012-07-05T05:51:55.739Z · LW(p) · GW(p)
I must apologize for the delay in replying. Regretfully, I don't think I can spare any more time for this exchange (and am going to be taking a break from this and some other addicting sites), so this will likely be my final reply.
distance(t) = sqrt(10^2 + c^2(5 - t)^2) Gravity(t) = 6.67*10^-11 * sum(80*2,000*distance(t), for t > 0, t < 10) * (10 / distance(t))
Now I think I sort-of see what you're trying to do here, but I don't understand what's motivating that specific expression; it seems to me that if you want to treat space and time symmetrically, then the expression you want is something more like ∫ G(2000)(80) dt / (10^2 + (-50 + 10t)^2 + c^2(5 - t)^2), which should be able to be evaluated with the help of a standard integral table.
Please don't interpret this as hostility (for this is the sort of forum where it's actually considered polite to tell people this sort of thing), but my subjective impression is that you are confused in some way; I don't have the time or physics expertise to fully examine all the ideas you've mentioned and explain in detail exactly why they fail or why they work, but what you've said so far has not impressed me. If you want to learn more about physics, you are of course aware that there are a wide variety of standard textbooks to choose from, and I wish you the best of luck in your future endeavors.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-07-09T14:49:13.505Z · LW(p) · GW(p)
I do not interpret it, or any of your other responses, as hostility. (I've been upvoting your responses. I requested feedback, and you've provided it.)
I did indicate the integral would be more accurate; I can run a summation in a few seconds, however, where an integral requires breaking out pencil and paper and skills I haven't touched since college. It was a rough estimate, which I used strictly to show what such a test should be looking for. Since we aren't running the test itself, accuracy didn't seem particularly important; the purpose was strictly demonstrative.
(Neither formula is actually correct for the idea, however. The constant would be wrong, and would need to be adjusted so the gravitational force would be equivalent to the existing formula for an object at rest.)
Thank you for your time!
↑ comment by thomblake · 2012-06-21T19:23:47.289Z · LW(p) · GW(p)
I would normally downvote an out-of-context wall of text like the above, but upvoted in accordance with Welcome post norms.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-06-21T19:29:05.072Z · LW(p) · GW(p)
My apologies. I looked for rules, but couldn't find any.
"If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation." seemed to indicate that this is where I should start.
Replies from: RobertLumley, thomblake↑ comment by RobertLumley · 2012-07-11T23:22:52.257Z · LW(p) · GW(p)
Hey! Welcome to LW. I've upvoted you too, but if you're looking for feedback on your OP, I'm too stupid to be having this conversation. :-)
Edit: since you mentioned you're an Objectivist, you might be interested in the general prevailing opinion on Rand around these parts. That being said, LW does have a number of members who were, at one point, or perhaps still are, respectful of Rand.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-07-12T12:48:05.817Z · LW(p) · GW(p)
Howdy!
I'm not sure strict Randian Objectivists would agree that I'm an Objectivist; I use the term pretty broadly to describe anybody who subscribes to the philosophy, not necessarily the ethics. I take Ayn Rand at her word when she says people should think for themselves (the closest she got to a prescription in any of her works), and am not terribly impressed by much of her fan club, which refuses to.
That said, I'm not particularly impressed by that criticism, which, like most criticisms of Ayn Rand, revolves mainly around her personal life.
Replies from: Vaniver, RobertLumley↑ comment by RobertLumley · 2012-07-12T13:39:06.326Z · LW(p) · GW(p)
Hm. I don't necessarily agree it revolves around her personal life. The main gist of the post is A. Rand acknowledged no superior, B. If you don't acknowledge some way in which you are flawed you can never improve, so C. This is kind of a stupid thing to say.
I used to call myself a neo-objectivist, mostly because it was a word that had no definition, so I could claim I meant whatever I wanted. And I have a lot of respect for many of the conclusions that Rand came to. But the arrogance of her system is pretty off-putting to me.
Related, "Mozart was a Red", a play Murray Rothbard wrote parodying the time Rand invited him to come meet her.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-07-12T17:19:46.713Z · LW(p) · GW(p)
I've yet to meet somebody better than me at arguing politics; that doesn't mean it's impossible for me to get better, however, which is one of my motivations in continuing to do so. I'm not sure that A logically leads to B.
Replies from: Vaniver, RobertLumley↑ comment by Vaniver · 2012-07-12T17:39:26.967Z · LW(p) · GW(p)
I've yet to meet somebody better than me at arguing politics
Are you measuring this in times that you think you lost a political argument, times your opponent thought you won a political argument, or times you learned something interesting by discussing politics?
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-07-19T16:49:58.660Z · LW(p) · GW(p)
I measure this in terms of a personal judgement that an objective or hostile third party would declare that my opponent has failed, which is not the same as "winning." It's impossible for me to win an argument, only to lose it. "Winning" would imply that there's no additional argument which could be constructed to defeat my current argument. I can't prove the nonexistence of such an argument.
(I argue against the ideal, not the opponent; my opponent can lose, but my argument cannot win.)
↑ comment by RobertLumley · 2012-07-12T17:23:32.823Z · LW(p) · GW(p)
There's a difference: you (presumably) acknowledge that it's possible for you to get better at arguing politics. Rand did not. Rand believed it was impossible for anyone to be better than her.
comment by OnyeTemi · 2012-06-21T12:27:29.494Z · LW(p) · GW(p)
Hello everyone, I'm new to this and I actually do not know much about what's going on here. I just need help finding some textbook recommendations to boost my academic performance this session. I am a year 2 Accounting student at the University of Lagos (Unilag), Nigeria. I hope you will be of great help.
comment by Aegist · 2012-06-19T18:21:43.021Z · LW(p) · GW(p)
Hello Less Wrong Community, I am here because I need as many rational debaters as possible - and it looks like I have found the central chamber of the kingdom here!
I am working on a project called rbutr - it is a simple tool which allows rebuttals to be connected to claims at the webpage level. The purpose is to alert internet users to the existence of rebuttals to the specific page they are viewing, providing them with a simple way to click through to the counter-argument page.
So ideally, the community helping to build this resource (which is going to be amazing and revolutionary... I promise) is a community of people who understand what qualifies as a quality rebuttal. The better the quality of our community, the better the quality of rbutr, and the more effective it will be as a tool to help educate the larger population of internet users.
Please read more about rbutr here on our press information page, and I'll spend some more time getting to know the Less Wrong community, and seeing what I can do to create a discussion somewhere to answer questions etc...
Thanks, Shane Greenup rbutr
Replies from: thomblake
comment by borntowin · 2012-04-06T14:08:48.556Z · LW(p) · GW(p)
Hello there, people of LessWrong. I'm a 24-year-old dude from a small country called Romania who has been reading stuff on this site since 2010, when Luke Muehlhauser started linking here. I'm a member of Mensa and got a B.A. in Management.
I have to admit that there are more things that interest me than there is time for me to study them, so I can't really say I'm an expert in anything; I just know a lot of things better than most other people know them. That's not very impressive, I guess, but I hope that in 5 years from now there will be at least one thing I know or do at an expert level.
My plan is to start my own company in the next few years, and I think I know how to make politics actually work. I love defining rationality as winning, as you guys do, and I think that I win more now, after reading articles on this website. Hopefully with time I might be able to contribute to the community too; there are some things that might just make LessWrong better.
comment by Rada · 2012-04-02T19:26:02.358Z · LW(p) · GW(p)
Hello to all! I'm a 17-year-old girl from Bulgaria, interested in Mathematics and Literature. Since I decided to "get real" and stop living in my comfortable fictional world, I've had a really tough year destroying the foundations of my belief system. Sure, there are scattered remains of old beliefs and habits in my psyche that I can't overcome. I have some major issues with reductionism and a love for Albert Camus ("tell me, doctor, can you quantify the reason why?").
In the last year I've come to know that it is very easy to start believing without doubt in something (the scientific view of the world included), perhaps too easy. That is why I never reject an alternative theory without some consideration, no matter how crazy it sounds. Sometimes I fail to find a rational explanation. Sometimes it's all too confusing. I'm here because I want to learn to think rationally but also because I want to ask questions.
Harry James Potter-Evans-Verres brought me here. To be honest, I hate this character with passion, I hate his calculating, manipulative attitude, and this is not what I believe rationality is about. I wonder how many of you see things as I do and how many would think me naive. Anyway, I'm looking forward to talking to you. I'm sure it's going to be a great experience.