Open Thread, April 2011
post by ata · 2011-04-02T18:43:26.253Z · LW · GW · Legacy · 111 comments
It seems we have agreed that open threads will continue but that they will go in the Discussion section, so here's this month's thread.
Comments sorted by top scores.
comment by folkTheory · 2011-04-03T03:07:12.655Z · LW(p) · GW(p)
I propose we make a series of exercises to go along with the articles of the sequences. These exercises could help readers know how well they understood the material as well as help them internalize it better.
↑ comment by Normal_Anomaly · 2011-04-03T13:33:52.223Z · LW(p) · GW(p)
This looks like another of the good ideas people have on here that then doesn't get done. I'm sick of that happening.
If folkTheory creates one exercise as an example, I will make another. I hereby commit to this. If I don't follow through within 2 weeks of folkTheory posting his example, please downvote this comment to negative 10. FolkTheory, please PM me when you have yours so I make sure I see it. Thanks.
Everybody else who wants to see this succeed, feel free to post a similar comment.
EDIT: I'm doing an exercise for Words As Hidden Inferences, and will post the exercise as a discussion post no later than April 17, 2011. If it doesn't match what folkTheory was envisioning, I'll make edits but won't lose the karma.
EDIT2: It's up.
↑ comment by folkTheory · 2011-04-03T15:50:22.308Z · LW(p) · GW(p)
I'm glad this is actually happening, and at great speed.
I hereby commit to doing exercises for 'Belief in Belief' and 'Bayesian Judo' by what appears to be our standard commitment: Deliver by April 17, 2011 or downvote to -10
Note: It'll be a combined exercise for those two articles as they're very related.
↑ comment by RobinZ · 2011-04-03T14:18:57.226Z · LW(p) · GW(p)
I propose we create posts in /discussion/ for each post in the sequence containing exercises for that post. I will create a Wiki page now where people can indicate that they have taken charge of creating exercises for any specific post. If I do not edit this comment with a link to said Wiki page within two days, downvote this comment to -10.
Edit: Project page. If I do not take charge of creating exercises for at least one page within two days, downvote this comment to -5.
Edit: Claimed "Making Beliefs Pay Rent (in Anticipated Experiences)". If I do not submit a page of exercises within two weeks, downvote this comment to -10.
↑ comment by Normal_Anomaly · 2011-04-03T16:26:56.091Z · LW(p) · GW(p)
Would you be willing to beta read my exercise for "Words as Hidden Inferences"? If you say yes I'll email you a Word document.
↑ comment by folkTheory · 2011-04-03T16:40:58.654Z · LW(p) · GW(p)
Absolutely, I'd love to.
↑ comment by David_Gerard · 2011-04-03T21:38:12.026Z · LW(p) · GW(p)
Now that this is happening, I suggest a post (maybe discussion, maybe main) noting that it is happening and with progress and commitments so far.
↑ comment by RobinZ · 2011-04-04T00:50:31.237Z · LW(p) · GW(p)
Seconding. As the one who proposed it, I'd suggest folkTheory should make it.
↑ comment by folkTheory · 2011-04-04T01:45:08.552Z · LW(p) · GW(p)
Done.
↑ comment by David_Gerard · 2011-04-03T08:37:16.931Z · LW(p) · GW(p)
THIS. THIS. Read parent.
comment by atucker · 2011-04-03T14:55:42.472Z · LW(p) · GW(p)
I was thinking of starting a sequence of articles summarizing Heuristics and Biases by Kahneman and Tversky for people who don't want to buy or read the book.
I bought it, and it seems like something like this would help me actually stick with reading it long enough to finish it. And make it more memorable.
Would people want that?
Edit: I guess the answer is Yes. I should make time for this.
↑ comment by [deleted] · 2011-04-04T20:48:40.554Z · LW(p) · GW(p)
.
↑ comment by Richard_Kennaway · 2011-04-05T11:28:07.287Z · LW(p) · GW(p)
Wikipedia has the "Simple English" version, maybe there could be a similar parallel version of the LessWrong wiki? Although I find reading the Simple English Wikipedia a rather mind-numbing experience.
↑ comment by [deleted] · 2011-04-08T02:52:23.492Z · LW(p) · GW(p)
.
↑ comment by TheOtherDave · 2011-04-08T03:26:11.265Z · LW(p) · GW(p)
Are you familiar with youarenotsosmart.com? It might be more what you're looking for.
↑ comment by atucker · 2011-04-05T01:16:11.501Z · LW(p) · GW(p)
It would be fun, but I'm not sure how memorable it would be. Maybe do them as jokes?
Couldn't hurt to do as a recap though.
↑ comment by [deleted] · 2011-04-05T01:22:56.775Z · LW(p) · GW(p)
.
↑ comment by atucker · 2011-04-05T01:31:17.786Z · LW(p) · GW(p)
Like, weighty and burned into my brain in a way that makes it a part of my natural reaction to things.
I guess if they were short enough to memorize the list though, I could just memorize it and go through it when I was worried about a bias.
↑ comment by David_Gerard · 2011-04-03T20:36:45.305Z · LW(p) · GW(p)
By all means :-) Links to relevant Sequences articles should be achievable as well.
↑ comment by atucker · 2011-04-03T21:12:06.105Z · LW(p) · GW(p)
Yeah. I intend to use existing material whenever appropriate.
IIRC, there are quite a few articles on specific cognitive biases floating around here already, they're just not well indexed.
↑ comment by TheOtherDave · 2011-04-03T22:03:07.712Z · LW(p) · GW(p)
You may find this site interesting as well.
↑ comment by benelliott · 2011-04-03T16:14:13.281Z · LW(p) · GW(p)
Please do this.
comment by Kutta · 2011-04-02T21:30:13.967Z · LW(p) · GW(p)
I've just read "Hell is the Absence of God" by Ted Chiang, and upon finishing it I was blown away to such an extent that I was making small inarticulate sounds and distressed facial expressions for about a minute. An instant 10/10 (in spite of its great ability to cause discomfort in the reader, but hey, art =/= entertainment all the time).
I'm compelled to link to an HTML mirror, but I suppose it doesn't have the author's permission. Anyone who'd like to read the story now may look at the first page brought up by googling the title. This is the book in question.
I'm curious as to the opinions of those who have read it.
↑ comment by chris_elliott · 2011-04-03T06:04:43.812Z · LW(p) · GW(p)
I think people on Less Wrong might enjoy my personal favourite Ted Chiang story "Understand", about nootropics. It's also been made available in full on Infinity Plus with permission, here: http://www.infinityplus.co.uk/stories/under.htm
↑ comment by drethelin · 2011-04-03T02:02:12.023Z · LW(p) · GW(p)
Ted Chiang is a master. If you haven't, I recommend reading at least the rest of the stories in the collection that contains that one.
To me, it felt like an extrapolation of a lot of existing beliefs. IF you believe that god causes miracles and sends people to heaven or hell, and ALSO that god is unknowable to lesser beings, this is the kind of world that you get.
↑ comment by NancyLebovitz · 2011-04-02T22:41:28.045Z · LW(p) · GW(p)
Emotionally very intense, but essentially an argument against a point of view that I don't have a connection to-- the idea that God is substantially inimical to people, but wants worship.
I was raised Jewish (the ethnicity took, the religion didn't), so I fear malevolent versions of Christianity, but I don't exactly hate them in quite the way that people who expect Christianity to be good seem to.
ETA: It may not be a coincidence that Chiang's "Seventy-Two Letters" is one of my favorites among his stories.
James Morrow (another sf author who spends a lot of time poking at Christianity) doesn't do much for me, either.
I seem to be jumping to conclusions about your reaction. What do you think made the story so affecting for you?
↑ comment by gwern · 2011-04-03T01:52:40.710Z · LW(p) · GW(p)
I just read it because of this comment. I was pretty impressed by the few Chiang stories I've read before (Nancy mentions "Seventy-Two Letters" which I was amazed by). He has a very smooth prose style that reminds me of one of my favorite SF authors, Gene Wolfe, and seems to have an intellectual depth comparable to another favorite of mine, Jorge Luis Borges.
I have no idea what to make of this one. I'm baffled. I'm horrified, I think. The final lines twist the dagger. Do I take it as a reductio of divine command theories of morality? Of an investigation of true love? Or what?
↑ comment by Kutta · 2011-04-03T10:40:49.102Z · LW(p) · GW(p)
Do I take it as a reductio of divine command theories of morality? Of an investigation of true love? Or what?
There are small notes attached to each story in my book. The note to this one contains:
(…) For me one of the unsatisfying things about the Book of Job is that, in the end, God rewards Job. (…) One of the basic messages of the book is that virtue isn’t always rewarded; bad things happen to good people. Job ultimately accepts this, demonstrating virtue, and is subsequently rewarded. Doesn’t this undercut the message? It seems to me that the Book of Job lacks the courage of its convictions: if the author were really committed to the idea that virtue isn’t always rewarded, shouldn’t the book have ended with Job still bereft of everything?
The story immediately reminded me of the Book of Job, and the note subsequently confirmed my suspicion.
A primary role of the Book of Job in the Bible is the reconciliation of reality with a belief in God. It is a crucial point because the empirically experienced reality is that good and bad things happen to people without the apparent influence of some higher being. People may take (or historically have taken) the grandiose and fantastic biblical stories of God’s exploits at face value, but one’s conviction can endure only so much stress arising from faith’s incongruity with everyday reality. This makes the Book of Job exceedingly self-conscious by the Old Testament’s standards; it has to be precisely aware of how faith and reality each work, and of the differences between the two, which makes the book sound like it was written by an atheistic marketing expert. But because there is no God, the tension cannot be fully neutralized no matter how clever the moralizing is. Thus, the purpose of Job’s ultimate reward is to "bribe" the readership into accepting the moral of the story, even though the bribe itself contradicts that moral. The bribe cannot be left out of the Book of Job, because then faith would either turn into nothing -- since there is no morally meaningful influence from any agent -- or believers would have to believe in an explicitly malevolent deity. The illusory promise of divine reward can’t be fully disposed of. There actually is a limit to how morally repugnant your religion can be.
Back to Chiang’s story. Nancy Lebovitz thought that it expresses the idea that God is evil but wants worship. I think this is not the case; rather, it is actually a quite faithful reiteration of the Book of Job, with the difference that it tries to realize Job’s message to its full and horrific extent.
Now, the workings of Chiang’s mortal world are essentially the same as our world; the miracles and punishments seem to be just genuinely random, much like most real world accidents and flukes of probability. The angels are just another type of accident. The note also says:
Thinking about natural disasters led to thinking about the problem of innocent suffering. An enormous range of advice has been offered from a religious perspective to those who suffer, and it seems clear that no single response can satisfy everyone; what comforts one person inevitably strikes someone else as outrageous.
However, the epistemic situation of the inhabitants of Chiang’s world differs because they have strong evidence of God, Heaven and Hell. This, I think, is to illustrate the general situation of religious people: they live in a real world and believe in a world of God, Heaven and Hell. The blatant and almost parodic depiction of divine evidences in Chiang’s story serves to draw attention away from the usual boring atheism vs. theism debate; here a theistic epistemic situation is the premise. In a way, Chiang’s mortal world depicts the world of real-life theists, with Heaven and Hell representing the two ways they can go. Heaven is blind faith, Hell is atheism, and the middle world is the unstable world of doubts, rationalizations and constant inner conflicts. It is quite a masterful spin on the Christian universe, where the middle world is also an unstable stage, but with the conflicting forces of moral good and bad.
In the story Heaven is associated with the heavenly light that actually makes one blind. The blind faith scenario of Heaven is a total rejection of all individual sense of morality. The Hell scenario is the "decide for yourselves" one. Because of the aforementioned parallel between Chiang’s mortals and religious people, the main point of the story, I believe, is that if you believe in a God that doesn’t exist, you are going to be pushed around by a neutral universe anyway, and trying to reconcile faith with reality would only cause more mental anguish. If you want to permanently keep your faith, you have to make yourself completely and irreversibly blind, and be ready to accept an arbitrary amount of potential suffering. Just like Job did.
Aside from the above, there is also the subject of Sarah, the protagonist’s deceased wife, but I haven’t yet thought about that in detail. Plus there are the marvelous depictions of the many coping mechanisms around religion vs. innocent victims, etc.
↑ comment by Normal_Anomaly · 2011-04-03T18:58:02.755Z · LW(p) · GW(p)
A whole book of his is available on Google Books. I've read the first 2.5 stories so far and they are all good, but varying shades of unpleasant.
comment by Risto_Saarelma · 2011-04-03T07:24:14.551Z · LW(p) · GW(p)
There's a fresh Metafilter thread on John Baez's interview of Yudkowsky. It also mentions HP:MoR.
Noticed this comment:
I started reading Harry Potter and the Methods of Rationality once and it drove me crazy. The book's Harry Potter doesn't practice rationality, he practices empiricism.
So people actually do start thinking of the Enlightenment era school of philosophy, like some earlier commenters feared. I also remembered a couple of philosophy blog posts from a few years ago, The Remnants of Rationalism and A Lesson Forgotten, which seem to work from the assumption that 'rationalism' will be understood to mean an abandoned school of philosophy.
Redefining established terms is a crank indicator, so stuff like this might be worth paying attention to.
↑ comment by Wei Dai (Wei_Dai) · 2011-04-03T08:22:36.033Z · LW(p) · GW(p)
I think Eliezer can't be reasonably accused of trying to redefine "rationality" and the problem is on the part of the Metafilter commenter. It seems easy enough to fix though. Just point them to http://en.wikipedia.org/wiki/Rationality or http://books.google.com/books?id=PBftMFyTCR0C&lpg=PA3&dq=rationality&pg=PA3#v=onepage&q&f=false
↑ comment by Risto_Saarelma · 2011-04-03T08:55:16.255Z · LW(p) · GW(p)
Good call. There being an Oxford Handbook of Rationality with a chapter on Bayesianism seems to show that the term is acquiring new connotations on a somewhat wider scope than just on LW.
↑ comment by Sniffnoy · 2011-04-03T09:10:46.256Z · LW(p) · GW(p)
Tangentially, looking through this, I note that it appears to address the circularity of basing utility on probability and probability on utility. It claims there's a set of axioms that gets you both at once, and it's due to Leonard Savage, 1954. How has this gone unmentioned here? I'm going to have to look up the details of this.
↑ comment by David_Gerard · 2011-04-03T20:45:08.112Z · LW(p) · GW(p)
We need a decent "Bayesian epistemology" article on LW. The SEP one may suck. And EY's "Intuitive Explanation" is, IME, nothing of the sort.
↑ comment by arundelo · 2011-04-03T15:20:23.085Z · LW(p) · GW(p)
If the Metafilter commenter is saying that the book is mistitled because rationalism is the opposite of empiricism, his or her comment doesn't make sense considering that the book's title uses "rationality", not "rationalism". (Compare Google hits for rationality versus rationalism.)
comment by Bo102010 · 2011-04-05T06:43:23.879Z · LW(p) · GW(p)
I used to have a hobby of reading Christian apologetics to get a better understanding of how the other side lives. I got some useful insights from this, e.g. Donald Miller's Blue Like Jazz was eye-opening for me in that it helped me understand better the psychology of religious faith. However, most books were a slog and I eventually found more entertaining uses for my time.
Today I saw that a workmate of mine was reading Lee Strobel's The Case For Faith earlier. My policy is to not discuss politics or religion at work, so I didn't bring it up there.
I hadn't read that particular book before, so I was curious about its arguments. Reading over the summary, I remembered again why I quit reading Christian apologetics - they are really boring.
The subtitle of The Case for Faith is A Journalist Investigates the Toughest Objections to Christianity, which is quite untrue. I can almost dismiss each chapter in the time it takes to yawn. Even if Strobel had good answers to the Problem of Evil, or proved that religious people historically have been less violent than non-religious people, or somehow found a gap in current understanding of evolution, he would still be leagues away from providing evidence for a god, let alone his particular god.
I remember being similarly bored by a Christian-turned-atheist's book, John Loftus's Why I Became an Atheist. A common criticism of atheist writers is that they don't engage the more sophisticated arguments of theists. This book illustrates why - the sophisticated arguments are stupid. Loftus accepts Christian scholars' ideas, arguing within spaces previously occupied by dancing angels (e.g. he says on p.371 "In a well-argued chapter... Lowder has defended the idea that Jesus' body was hastily buried before the Sabbath day... but that it was relocated on the Sabbath Day to the public graveyard of the condemned...").
Most of us here would probably lose a live debate in front of an audience against someone like Lee Strobel. Even so, it's a little disappointing to me that even the most skilled theist debater's signature attack relies on bits like "This first cause must also be personal because there are only two accepted types of explanations, personal and scientific, and this can't be a scientific explanation." Because winning the debate by refuting that would be a waste of intellect.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-04-05T12:54:09.344Z · LW(p) · GW(p)
Running Towards the Gunshots: A Few Words about Joan of Arc was the first thing which gave me a feeling of why anyone would want to be Catholic. However, that's the emotional side, not the arguments.
tl;dr (and be warned, the piece is highly political): Joan of Arc is the patron saint of disaffected Catholics-- not only does the rant give a vivid picture of what it's like to love Catholicism, it's so large and so old that there's a reasonable chance that it will have something to suit a very wide range of people.
comment by David_Gerard · 2011-04-21T18:54:23.961Z · LW(p) · GW(p)
Per talk page - I have just updated the jargon file on the wiki, making it actually a list of jargon with definitions. I've also folded in the previous acronym file, as a jargon file should be a single page. Point your n00bs here. Since it's a wiki, feel free to fix any of my quick one-line definitions you don't like.
comment by TheCosmist · 2011-04-02T20:47:08.675Z · LW(p) · GW(p)
(I'm new here and don't have enough karma to create a thread, so I am posting this question here. Apologies in advance if this is inappropriate.)
Here is a topic I haven’t seen discussed on this forum: the philosophy of “Cosmicism”. If you’re not familiar with it check Wikipedia, but the quick summary is that it’s the philosophy invented by H. P. Lovecraft which posits that humanity’s values have no cosmic significance or absolute validity in our vast cosmos; to some alien species we might encounter or AI we might build, our values would be as meaningless as the values of insects are to us. Furthermore, all our creations and efforts are ultimately futile in a universe of increasing entropy and astrophysical annihilation. Lovecraft’s conclusion is: “good, evil, morality, feelings? Pure 'Victorian fictions'. Only egotism exists."
Personally I find this point of view difficult to refute – it seems as close to the truth about “life, the universe and everything” as one can get and remain consistent with our current understanding of the universe. At the same time, such a philosophy is rather frightening in that a world of egomaniacal cosmicists who consider human values to be meaningless would seem to be highly unstable and insane.
I don’t claim to be an exceptionally rational person, so I’m asking the rationalists of this forum: what is your response to Cosmicism?
↑ comment by Nisan · 2011-04-03T01:29:35.716Z · LW(p) · GW(p)
cousin_it and Vladimir_Nesov's replies are good answers; at the risk of being redundant, I'll take this point by point.
to some alien species we might encounter or AI we might build, our values would be as meaningless as the values of insects are to us.
The above is factually correct.
humanity’s values have no cosmic significance or absolute validity in our vast cosmos
The phrases "cosmic significance" and "absolute validity" are confused notions. They don't actually refer to anything in the world. For more on this kind of thing you will want to read the Reductionism Sequence.
all our creations and efforts are ultimately futile in a universe of increasing entropy and astrophysical annihilation
Our efforts would be "ultimately futile" if we were doomed to never achieve our goals, to never satisfy any of our values. If the only things we valued were things like "living for an infinite amount of time", then yes, the heat death of the universe would make all our efforts futile. But if we value things that only require finite resources, like "getting a good night's sleep tonight", then no, our efforts are not a priori futile.
Only egotism exists.
Egotism is an idea, not a thing, so it's meaningless to say that it exists or doesn't exist. You could say "Only egoists exist", but that would be false. You could also say "In the limit of perfect information and perfect rationality, all humans would be egoists", and I believe that's also false. Certainly nothing you've said implies that it's true.
The Metaethics Sequence directly addresses and dissolves the idea that everything seems to be meaningless because there is no objective, universally compelling morality. But the Reductionism Sequence should be read first.
↑ comment by jsalvatier · 2011-04-03T04:38:38.708Z · LW(p) · GW(p)
Very well expressed. Especially since it links to the specific sequence that deals with this instead of generally advising to "read the sequences".
↑ comment by TheCosmist · 2011-04-03T04:29:17.105Z · LW(p) · GW(p)
Wow fantastic thank you for this excellent reply. Just out of curiosity, is there any question this "cult of rationality" doesn't have a "sequence" or a ready answer for? ;)
↑ comment by benelliott · 2011-04-03T08:19:53.373Z · LW(p) · GW(p)
The sequences are designed to dissolve common confusions. By dint of those confusions being common, almost everybody falls into them at one time or another, so it should not be surprising that the sequences come up often in response to new questions.
↑ comment by Nisan · 2011-04-03T16:49:50.091Z · LW(p) · GW(p)
You're welcome. The FAQ says:
Why do you all agree on so much? Am I joining a cult?
We have a general community policy of not pretending to be open-minded on long-settled issues for the sake of not offending people. If we spent our time debating the basics, we would never get to the advanced stuff at all. Yes, some of the results that fall out of these basics sound weird if you haven't seen the reasoning behind them, but there's nothing in the laws of physics that prevents reality from sounding weird.
↑ comment by cousin_it · 2011-04-02T20:55:01.053Z · LW(p) · GW(p)
The standard reply here is that duh, values are a property of agents. I'm allowed to have values of my own and strive for things, even if the huge burning blobs of hydrogen in the sky don't share the same goals as me. The prospect of increasing entropy and astrophysical annihilation isn't enough to make me melt and die right now. Obligatory quote from HP:MOR:
"There is no justice in the laws of Nature, Headmaster, no term for fairness in the equations of motion. The universe is neither evil, nor good, it simply does not care. The stars don't care, or the Sun, or the sky. But they don't have to! We care! There is light in the world, and it is us!"
↑ comment by TheCosmist · 2011-04-02T21:15:55.787Z · LW(p) · GW(p)
So in other words you agree with Lovecraft that only egotism exists?
↑ comment by cousin_it · 2011-04-02T21:17:18.958Z · LW(p) · GW(p)
Wha? There's no law of nature forcing all my goals to be egotistical. If I saw a kitten about to get run over by a train, I'd try to save it. The fact that insectoid aliens may not adore kittens doesn't change my values one bit.
↑ comment by Vladimir_M · 2011-04-02T22:49:35.397Z · LW(p) · GW(p)
That's certainly true, but from the regular human perspective, the real trouble is that in case of a conflict of values and interests, there is no "right," only naked power. (Which, of course, depending on the game-theoretic aspects of the concrete situation, may or may not escalate into warfare.) This does have some unpleasant implications not just when it comes to insectoid aliens, but also the regular human conflicts.
In fact, I think there is a persistent thread of biased thinking on LW in this regard. People here often write as if sufficiently rational individuals would surely be able to achieve harmony among themselves (this often cited post, for example, seems to take this for granted). Whereas in reality, even if they are so rational to leave no possibility of factual disagreement, if their values and interests differ -- and they often will -- it must be either "good fences make good neighbors" or "who-whom." In fact, I find it quite plausible that a no-holds-barred dissolving of the socially important beliefs and concepts would in fact exacerbate conflict, since this would become only more obvious.
↑ comment by cousin_it · 2011-04-02T23:08:17.047Z · LW(p) · GW(p)
Negative-sum conflicts happen due to factual disagreements (mostly inaccurate assessments of relative power), not value disagreements. If two parties have accurate beliefs but different values, bargaining will be more beneficial to both than making war, because bargaining can avoid destroying wealth but still take into account the "correct" counterfactual outcome of war.
Though bargaining may still look like "who whom" if one party is much more powerful than the other.
↑ comment by Vladimir_M · 2011-04-03T00:06:28.393Z · LW(p) · GW(p)
How strong perfect-information assumptions do you need to guarantee that rational decision-making can never lead both sides in a conflict to precommit to escalation, even in a situation where their behavior has signaling implications for other conflicts in the future? (I don't know the answer to this question, but my hunch is that even if this is possible, the assumptions would have to be unrealistic for anything conceivable in reality.)
And of course, as you note, even if every conflict is resolved by perfect Coasian bargaining, if there is a significant asymmetry of power, the practical outcome can still be little different from defeat and subjugation (or even obliteration) in a war for the weaker side.
↑ comment by AlephNeil · 2011-04-03T17:36:09.072Z · LW(p) · GW(p)
Negative-sum conflicts happen due to factual disagreements (mostly inaccurate assessments of relative power), not value disagreements.
By 'negative-sum' do you really mean 'negative for all parties'? Because, taking 'negative-sum' literally, we can imagine a variant of the Prisoner's Dilemma where A defecting gains 1 and costs B 2, and where B defecting gains 3 and costs A 10.
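To make the distinction concrete, here is a minimal sketch of that variant game (my own illustration, not from the thread), with mutual cooperation normalized so that both players' payoffs start at 0:

```python
# Variant Prisoner's Dilemma from the comment above:
# A defecting gains A 1 and costs B 2; B defecting gains B 3 and costs A 10.
# Payoffs are relative to a baseline of mutual cooperation = (0, 0).

def payoffs(a_defects, b_defects):
    a = (1 if a_defects else 0) - (10 if b_defects else 0)
    b = (3 if b_defects else 0) - (2 if a_defects else 0)
    return a, b

both_defect = payoffs(True, True)
print(both_defect)           # (-9, 1): A's and B's payoffs under mutual defection
print(sum(both_defect) < 0)  # True: the outcome is negative-sum...
print(both_defect[1] > 0)    # True: ...yet B still comes out ahead
```

So a literally negative-sum outcome need not be negative for every party, which is the ambiguity the question is pointing at.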
↑ comment by cousin_it · 2011-04-03T18:55:41.936Z · LW(p) · GW(p)
I suppose I meant "Pareto-suboptimal". Sorry.
↑ comment by Vladimir_M · 2011-04-03T21:33:28.498Z · LW(p) · GW(p)
I suppose I meant "Pareto-suboptimal".
How does that make sense? You are correct that under sufficiently generous Coasian assumptions, any attempt at predation will be negotiated into a zero-sum transfer, thus avoiding a negative-sum conflict. But that is still a violation of Pareto optimality, which requires that nobody ends up worse off.
↑ comment by cousin_it · 2011-04-03T22:53:15.922Z · LW(p) · GW(p)
I don't understand your comment. There can be many Pareto optimal outcomes. For example, "Alice gives Bob a million dollars" is Pareto optimal, even though it makes Alice worse off than the other Pareto optimal outcome where everyone keeps their money.
↑ comment by Vladimir_M · 2011-04-03T23:06:20.531Z · LW(p) · GW(p)
Yes, this was a confusion on my part. You are right that starting from a Pareto-optimal state, a pure transfer results in another Pareto-optimal state.
↑ comment by David_Gerard · 2011-04-03T08:58:38.203Z · LW(p) · GW(p)
As I commented on What Would You Do Without Morality?:
I expect I'll keep on doing what I'm doing, which is trying to work out what I actually want. [...] So far I haven't lapsed into nihilist catatonia or killed everyone or destroyed the economy. This suggests that assuming a morality is not a requirement for not behaving like a sociopath. I have friends and it pleases me to be nice to them and I have a lovely girlfriend and a lovely three year old daughter who I spend most of my life's efforts on trying to bring up and on the prerequisites to that.
Without an intrinsic point to the universe, it seems likely to me that people would go on behaving with the same sort of observable morality they had before. I consider this supported by the observed phenomenon that Christians who turn atheist seem to still behave as ethically as they did before, without a perception of God to direct them.
This may or may not directly answer your question of what's the correct moral engine to have in one's mind (if there is a single correct moral engine to have in one's mind - and even assuming what's in one's mind has a tremendous effect on one's observed ethical behaviour, rather than said ethical behaviour largely being evolved behaviour going back millions of years before the mind), but I don't actually care about that except insofar as it affects the observed behaviour.
↑ comment by sark · 2011-04-03T13:04:16.705Z · LW(p) · GW(p)
It's perhaps worthwhile pointing out that even as there is nothing to compel you to accept notions such as "cosmic significance" or "only egotism exists", by symmetry, there is also nothing to compel you to reject those notions (except for your actual values of course). So it really comes down to your values. For most humans, the concerns you have expressed are probably confusions, as we pretty much share the same values, and we also share the same cognitive flaws which let us elevate what should be mundane facts about the universe to something acquiring moral force.
Also, it's worth pointing out that there is no need for your values to be "logically consistent". You use logic to figure out how to go about the world satisfying your values, and unless your values specify a need for a logically consistent value system, there is no need to logically systematize your values.
↑ comment by Vladimir_Nesov · 2011-04-02T21:04:51.830Z · LW(p) · GW(p)
Read the sequences and you'll probably learn not to make the epistemic errors that generate this position, in which case I expect you'll change your mind. I believe it's a bad idea to argue about ideologies on the object level; they tend to have too many anti-epistemic defenses to make it efficient or even productive. Rather, one should learn a load of good thinking skills that would add up to eventually fixing the problem. (On the other hand, the metaethics sequence, which is more directly relevant to your problem, is relatively hard to understand, so success is not guaranteed, and you can benefit from a targeted argument at that point.)
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-04-03T08:41:51.372Z · LW(p) · GW(p)
Read the sequences
You know, I was hoping the gentle admonition to casually read a million words had faded away from the local memepool.
Your usage here also happens to serve as an excellent demonstration of the meaning of the phrase as described on RW. I suggest you try not to do that. Pointing people to a particular post or at worst a particular sequence is much more helpful. (I realise it's also more work before you hit "comment", but I suggest that's a feature of such an approach rather than a bug.)
and you'll probably learn to not make the epistemic errors that generate this position
Do please consider the possibility that to read the sequences is not, in fact, to cut'n'paste them into your thinking wholesale.
TheCosmist: the sequences are in fact useful for working out what people here think, and for spotting when what appears to be an apposite comment by someone is in fact a callout. ciphergoth has described LW as "a fan site for the sequences"; it's growing into more than that, but it's still useful to know as the viewpoint of many long-term readers. It took me a couple of months of casual internet-as-television-time reading to get through them, since I was actively participating here and all.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-04-03T09:07:03.210Z · LW(p) · GW(p)
The sequences are a specific method of addressing this situation, not a general reference. I don't believe individual references would be helpful; instead I suggest systematic training. I wrote:
I believe it's a bad idea to argue about ideologies on the object level; they tend to have too many anti-epistemic defenses to make it efficient or even productive. Rather, one should learn a load of good thinking skills that would add up to eventually fixing the problem.
You'd need to address this argument, not just state a deontological maxim that one shouldn't send people to read the sequences.
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-04-03T09:09:44.380Z · LW(p) · GW(p)
I wasn't stating a deontological maxim - I was pointing that you were being bloody rude in a highly unproductive manner that's bad for the site as a whole. "I suggest you try not to do that."
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-04-03T09:32:25.762Z · LW(p) · GW(p)
Again, you fail to address the actual argument. Maybe the right thing to do is to stay silent; you could argue that. But I don't believe that pointing out references to individual ideas would be helpful in this case.
Also, consider "read the sequences" as a form of book recommendation. Book recommendations are generally not considered "bloody rude". If you never studied topology, and want to understand Smirnov metrization theorem, "study the textbook" is the right kind of advice.
Actually changing your mind is an advanced exercise.
comment by timtyler · 2011-04-16T15:06:12.825Z · LW(p) · GW(p)
Friendly AI: A Dangerous Delusion?
By: Hugo de Garis - Published: April 15, 2011
http://hplusmagazine.com/2011/04/15/friendly-ai-a-dangerous-delusion/
Replies from: timtyler
comment by wedrifid · 2011-04-08T07:31:08.626Z · LW(p) · GW(p)
The latest XKCD was brilliant. :)
comment by David_Gerard · 2011-04-02T18:57:14.046Z · LW(p) · GW(p)
I have only just discovered that Hacker News is worth following. Since the feed of stuff I read is Twitter, that would be @newsycombinator. I started going back through the Twitter feed a few hours ago and my brain is sizzling. Note that I am not a coder at all, I'm a Unix sysadmin. Work as any sort of computer person? You should have a look.
Replies from: cousin_it, atucker
↑ comment by cousin_it · 2011-04-02T20:37:41.875Z · LW(p) · GW(p)
The YC/HN community was initially built on Paul Graham's essays, just like LW was built on Eliezer's sequences. Those essays are really, really good. If you haven't read them already, here's a linky, start from the bottom.
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-04-02T21:00:03.746Z · LW(p) · GW(p)
I have indeed :-)
It's annoying that @newsycombinator links to the pages themselves, not to the Hacker News discussion.
↑ comment by atucker · 2011-04-02T22:36:40.727Z · LW(p) · GW(p)
I actually got to OB/LW through Hacker News.
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-04-02T22:40:34.029Z · LW(p) · GW(p)
I have known about Hacker News for ages, mentally filing it away as yet another Internet news aggregation site. However, I just happened to look at @newsycombinator and was quite surprised at how much of it was gold.
Replies from: Alexandros
↑ comment by Alexandros · 2011-04-03T09:04:48.176Z · LW(p) · GW(p)
It is another news aggregation service, but it just happens to be the best :). There is a credible hypothesis that it's not as good as it used to be, as well. But it's still head and shoulders above everything else (minus LW). I also came to OB via HN, if I recall correctly.
comment by TobyBartels · 2011-05-24T20:56:28.440Z · LW(p) · GW(p)
Is it just me, or do you feel a certain respect for Harold Camping? He describes himself as "flabbergasted" that the world didn't end as he predicted. He actually noticed his confusion!
(I can't find the Open Thread for May 2011.)
Replies from: jimrandomh
↑ comment by jimrandomh · 2011-05-24T21:28:25.321Z · LW(p) · GW(p)
Is it just me, or do you feel a certain respect for Harold Camping? He describes himself as "flabbergasted" that the world didn't end as he predicted. He actually noticed his confusion!
He also predicted that the world would end on May 21, 1988 and September 7, 1994. I don't think respect is appropriate.
Replies from: TobyBartels
↑ comment by TobyBartels · 2011-05-25T00:16:03.610Z · LW(p) · GW(p)
Too bad! I see that the latest reports have him updating to October, so he didn't attend to his confusion for very long this time either.
comment by RobinZ · 2011-04-10T01:47:14.820Z · LW(p) · GW(p)
Via 538: How Feynman Thought on the Freakonomics blog.
comment by bogus · 2011-04-02T22:35:52.529Z · LW(p) · GW(p)
Reposting from the latest HP:MoR discussion thread, since not everyone reads recent comments and I'm not sure this warrants a full post:
Fanfiction.net user Black Logician has announced Harry's Game, a spinoff of HP:MoR which branches out around Chapter 65-67 of the original fic. From his post at the HP:MoR review board:
...Hermione has already formed SPHEW. Quirell though doesn't dismantle Harry's army, but goes for an alternative condition to make the army wars more of a challenge to Harry. ...
Please use ROT13 for spoilers when discussing Harry's Game.
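For anyone unfamiliar with ROT13: it replaces each letter with the one 13 places along in the alphabet, so applying it a second time restores the original text. A minimal sketch using Python's built-in codec (the spoiler string here is made up for illustration):

```python
import codecs

def rot13(text):
    """Apply ROT13; since 13 + 13 = 26, the same call also decodes."""
    return codecs.encode(text, "rot13")

hidden = rot13("an example spoiler")  # unreadable at a glance
assert rot13(hidden) == "an example spoiler"
```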
Replies from: Alicorn
↑ comment by Alicorn · 2011-04-02T22:40:22.071Z · LW(p) · GW(p)
The writing errors in this story are very distracting. I did not click past chapter 1. Is there something to recommend it so strongly that I should get over the bad grammar etc.?
Replies from: bogus, Lila, FAWS
↑ comment by FAWS · 2011-04-03T15:16:03.033Z · LW(p) · GW(p)
It's a lot more Ender's Game-like than MoR already was. The ideas are good to decent, the execution questionable, and the writing poor (by the standards of fanfiction worth reading; decent by average fanfiction standards). I found it fairly enjoyable, but I mostly managed to tune out the quality of the writing.
I'd recommend it to anyone who loves MoR for the clever plots, and anyone who enjoys the clever plots and can get over bad writing.
comment by wedrifid · 2011-04-25T06:18:37.930Z · LW(p) · GW(p)
I just had a startling revelation. I had been glancing now and then at my karma for the last few days and noticed that it was staying mostly constant. Only going up now and then. This is despite a lot of my comments getting a whole bunch of upvotes. So naturally I figured I had offended one or more folks and they were downvoting me steadily to keep it constant. I don't exactly tiptoe around to avoid getting anyone offside and I don't really mind that much if people use karma hits as a way to get their vengeance on. It saves them taking it out via actual comments in the slightly more real social reality.
But I just looked at the bottom of the sidebar and slapped myself. Left-aligned text formatting in a limited space is roughly equivalent to taking significant figures (with a floor instead of a round). Oops. Apparently nobody hates me after all; just, well, too many people love me. :P
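The truncation effect described here can be sketched in a few lines (a hypothetical `truncated_display` helper; the karma numbers are made up):

```python
def truncated_display(karma, width=3):
    """Render karma in a fixed-width, left-aligned box: trailing digits
    are cut off, i.e. significant figures with a floor, not a round."""
    return str(karma)[:width]

# Every value from 4570 through 4579 renders as "457",
# so steadily rising karma looks constant in the sidebar.
assert truncated_display(4573) == truncated_display(4579) == "457"
```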
comment by David_Gerard · 2011-04-17T12:53:25.195Z · LW(p) · GW(p)
My nomination for Worst Use of the word "Bayesian", April 2011. This may answer my earlier question as to whether creationists, birthers, etc adopting the notion of Bayes' theorem is a good idea or not. Remember: choose your prior based on your bottom line!
comment by ata · 2011-04-15T06:46:10.678Z · LW(p) · GW(p)
To anyone who knows: How active are the fortnightly Cambridge, MA meetups? There seem to be very few RSVPs on the meetup.com page, but I suppose it's possible that if there are any regular attendees they don't always bother RSVPing.
Replies from: jimrandomh
↑ comment by jimrandomh · 2011-04-15T12:04:30.136Z · LW(p) · GW(p)
We generally just don't bother RSVPing. Median attendance is 4, occasionally much more.
comment by scientism · 2011-04-07T04:12:42.575Z · LW(p) · GW(p)
Hypothetical situation: Let's say while studying rationality you happened across a technique that proved to give startlingly good results. It's not an effortless path to truth but the work is made systematic and straightforward. You've already achieved several novel breakthroughs in fields of interest where you've applied the technique (this has advanced your career and financial standing). However, you've told nobody and, since nobody is exploring this area, you find it unlikely anybody will independently discover the same technique. You have no reason to believe others would apply this technique to areas you value and therefore doubt the benefits of sharing it widely. There could be a significant first mover advantage to being the only person who practices the technique.
Questions: What do you do? Would you share it with the world on principle? Would you try to establish a trusted group to practice the technique? Would you keep it to yourself until you could improve your position to the point where you'd have greater control and wouldn't have to watch hopelessly as the technique was applied to immoral ends by others with greater resources than you? Or is there another option?
Replies from: wedrifid
comment by Bo102010 · 2011-04-05T06:43:29.656Z · LW(p) · GW(p)
I used to have a hobby of reading Christian apologetics to get a better understanding of how the other side lives. I got some useful insights from this; e.g., Donald Miller's Blue Like Jazz was eye-opening for me in that it helped me understand better the psychology of religious faith. However, most books were a slog and I eventually found [more entertaining uses for my time](http://projecteuler.net/index.php?section=problems).
Today I saw that a workmate of mine was reading Lee Strobel's The Case for Faith. My policy is not to discuss politics or religion at work, so I didn't bring it up there.
I hadn't read that particular book before, so I was curious about its arguments. Reading over the summary, I remembered again why I quit reading Christian apologetics - they are really boring.
The subtitle of The Case for Faith is A Journalist Investigates the Toughest Objections to Christianity, which is quite untrue. I can almost dismiss each chapter in the time it takes to yawn. Even if Strobel had good answers to the Problem of Evil, or proved that religious people historically have been less violent than non-religious people, or somehow found a gap in the current understanding of evolution, he would still be leagues away from providing evidence for a god, let alone his particular god.
I remember being similarly bored by a Christian-turned-atheist's book, John Loftus's Why I Became an Atheist. A common criticism of atheist writers is that they don't engage the more sophisticated arguments of theists. This book illustrates why: the sophisticated arguments are stupid. Loftus accepts Christian scholars' ideas, arguing within spaces previously occupied by dancing angels (e.g. he says on p. 371, "In a well-argued chapter... Lowder has defended the idea that Jesus' body was hastily buried before the Sabbath day... but that it was relocated on the Sabbath Day to the public graveyard of the condemned...").
Most of us here would probably lose a live debate in front of an audience against someone like Lee Strobel. Even so, it's a little disappointing to me that even the most skilled theist debater's signature attack relies on bits like "This first cause must also be personal because there are only two accepted types of explanations, personal and scientific, and this can't be a scientific explanation." Because winning the debate by refuting that would be a waste of intellect.
comment by JoshuaZ · 2011-04-03T05:20:02.186Z · LW(p) · GW(p)
Today's SMBC has an amusing take on the simulation argument and attempting to guess the goals of the simulation creators.
Replies from: CronoDAS
↑ comment by CronoDAS · 2011-04-03T07:02:06.755Z · LW(p) · GW(p)
For some reason, that comic reminds me of a particular Isaac Asimov story.
comment by ewang · 2011-04-03T04:08:29.226Z · LW(p) · GW(p)
Does anyone else have religiophobia? I get irrationally scared every time I see someone passing out pocket bibles or knocking on doors with pamphlets. I'm afraid of...well, of course there isn't much to be afraid of, or else it wouldn't be a phobia.
Replies from: JoshuaZ, Normal_Anomaly, David_Gerard
↑ comment by JoshuaZ · 2011-04-03T04:13:13.655Z · LW(p) · GW(p)
Does anyone else have religiophobia? I get irrationally scared every time I see someone passing out pocket bibles or knocking on doors with pamphlets. I'm afraid of...well, of course there isn't much to be afraid of, or else it wouldn't be a phobia.
Not really. I only have annoyance that whenever I see such people I'm always too busy to talk to them and find out more about what religion they are. I consider this to be evidence that there is a deity and that that deity treats me sort of how one might treat a cat when one has recently obtained a laser pointer.
↑ comment by Normal_Anomaly · 2011-04-03T22:21:21.077Z · LW(p) · GW(p)
I don't get scared when I see people doing this, but I do have an irrational desire to get into a long, useless argument. Fortunately I'm always too busy, so I don't have to fight it.
↑ comment by David_Gerard · 2011-04-03T08:36:05.685Z · LW(p) · GW(p)
Fortunately, we have a defensive weapon (PDF) to hand.