Welcome to Less Wrong! (5th thread, March 2013)

post by orthonormal · 2013-04-01T16:19:17.933Z · LW · GW · Legacy · 1746 comments

Contents

  A few notes about the site mechanics
  A few notes about the community
  A list of some posts that are pretty awesome
If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fifth incarnation of the welcome thread; once a post gets over 500 comments, it stops showing them all by default, so we make a new one. Besides, a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of the votes on all their comments and posts. This immediate, easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's a matter of unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way. There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

Note from orthonormal: MBlume and other contributors wrote the original version of this welcome post, and I've edited it a fair bit. If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.

1746 comments

Comments sorted by top scores.

comment by atomliner · 2013-04-12T08:31:40.315Z · LW(p) · GW(p)

Hello! I call myself Atomliner. I'm a 23 year old male Political Science major at Utah Valley University.

From 2009 to 2011, I was a missionary for the Mormon Church in northeastern Brazil. In the last month I was there, I was living with another missionary who I discovered to be a closet atheist. While I was trying to help him rediscover his faith, he had me read The God Delusion, which obliterated my own. I can't say that book was the only thing that enabled me to leave behind my irrational worldview, as I've always been very intellectually curious and resistant to authority. My mind had already been a powder keg long before Richard Dawkins arrived with the spark to light it.

Needless to say, I quickly embraced atheism and began to read everything I could about living without belief in God. I'm playing catch-up, trying to expand my mind as fast as I can to make up for the lost years I spent blinded by religious dogma. Just two years ago, for example, I believed homosexuality was an evil that threatened to destroy civilization, that humans came from another planet, and that the Lost Ten Tribes were living somewhere underground beneath the Arctic. Suffice it to say, my re-education process has been exhausting.

One ex-Mormon friend of mine introduced me to Harry Potter and the Methods of Rationality, which I read only a few chapters of, but I was intrigued by the concept of Bayes' Theorem and followed a link here. Since then I've read From Skepticism to Technical Rationality and many of the Sequences. I'm hooked! I'm really liking what I find here. While I may not be a rationalist now, I would really like to be.

And that's my short story! I look forward to learning more from all of you and, hopefully, contributing in the future. :)

Replies from: Eliezer_Yudkowsky, MugaSofer, Kawoomba, JohnH, private_messaging
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-12T21:14:52.699Z · LW(p) · GW(p)

Welcome to LW! Don't worry about some of the replies you're getting, polls show we're overwhelmingly atheist around here.

Replies from: MugaSofer, JohnH
comment by MugaSofer · 2013-04-12T21:39:53.022Z · LW(p) · GW(p)

This^

That said, my hypothetical atheist counterpart would have made the exact same comment. I can't speak for JohnH, but I can see someone with experience of Mormons not holding those beliefs being curious regardless of affiliation. And, of course, the other two - well, three now - comments are from professed atheists. So far nobody seems willing to try and reconvert him or anything.

comment by JohnH · 2013-04-13T17:30:49.334Z · LW(p) · GW(p)

Some of that might be because of evaporative cooling. Reading the sequences is more likely to cause a theist to ignore Less Wrong than it is to change their beliefs, regardless of how rational or not a theist is. If they get past that point they soon find Less Wrong is quite welcoming towards discussions of how dumb or irrational religion is but fairly hostile to those that try and say that religion is not irrational, as this welcome thread even points out.

What I am wondering about is why it seems that atheists have complete caricatures of their previous theist beliefs. What atomliner mentions as his previous beliefs has absolutely no relation to what is found in Preach My Gospel, the missionary manual that he presumably had been studying for those two years, or to anything else that is found in scripture or in the teachings of the church. So are the beliefs that he gives as what he previously believed actually what he believed? If so, what did he think of the complete lack of those beliefs in the scripture and publications of the church he belonged to, and where did he pick up these non-standard beliefs? Or is something else entirely going on when he says that those were his beliefs?

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion. So is this a failure of the religion to communicate what the actual beliefs are; a failure of the ex-theist to discover what the beliefs of the religion really are and to think critically about what Mormons call "faith-promoting rumors" (also known as lies and false doctrine); or are these non-standard beliefs cobbled together from "faith-promoting rumors" after the atheist is already an atheist, to justify atheism?

I know that atheists can deal with a lot of prejudice from believers about why they are atheists so I would think that atheists would try and justify their beliefs based on the best beliefs and arguments of a religion and not extreme outliers for both, as otherwise it plays to the prejudice. Or at least come up with something that actually reflects real beliefs. For any ex-Mormon there are entire websites of ready-made points of doubt which are really easy to find; there should be no need to come up with such strange outlier beliefs to justify oneself. And if justifying isn't what he is doing, then I am really very interested in knowing how and why he held those beliefs.

Replies from: Eliezer_Yudkowsky, Vaniver, Estarlio, atomliner, CCC, None, MugaSofer
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-13T21:19:47.250Z · LW(p) · GW(p)

IIRC the standard experimental result is that atheists who were raised religious have substantially above-average knowledge of their former religions. I am also suspicious that any recounting whatsoever of what went wrong will be greeted by, "But that's not exactly what the most sophisticated theologians say, even if it's what you remember perfectly well being taught in school!"

This obviously won't be true in my own case since Orthodox Jews who stay Orthodox will put huge amounts of cumulative effort into learning their religion's game manual over time. But by the same logic, I'm pretty sure I'm talking about a very standard element of the religion when I talk about later religious authorities being presumed to have immensely less theological knowledge than earlier authorities and hence no ability to declare earlier authorities wrong. As ever, you do not need a doctorate in invisible sky wizard to conclude that there is no invisible sky wizard, and you also don't need to know all the sophisticated excuses for why the invisible sky wizard you were told about is not exactly what the most sophisticated dupes believe they believe in (even as they go on telling children about the interventionist superparent). It'd be nice to have a standard, careful and correct explanation of why this is a valid attitude and what distinguishes it from the attitude of an adolescent who finds out everything they were told about quantum mechanics is wrong, besides the obvious distinction of net weight of experimental evidence (though really that's just enough).

LW has reportedly been key in deconverting many, many formerly religious readers. Others will of course have fled. It takes all kinds of paths.

Replies from: MugaSofer, JohnH, Baruta07, Vaniver
comment by MugaSofer · 2013-04-14T18:49:44.763Z · LW(p) · GW(p)

As ever, you do not need a doctorate in invisible sky wizard to conclude that there is no invisible sky wizard, and you also don't need to know all the sophisticated excuses for why the invisible sky wizard you were told about is not exactly what the most sophisticated dupes believe they believe in (even as they go on telling children about the interventionist superparent).

The trouble with this heuristic is it fails when you aren't right to start with. See also: creationists.

That said, you do, in fact, seem to understand the claims theologians make pretty well, so I'm not sure why you're defending this position in the first place. Arguments are soldiers?

But by the same logic, I'm pretty sure I'm talking about a very standard element of the religion when I talk about later religious authorities being presumed to have immensely less theological knowledge than earlier authorities and hence no ability to declare earlier authorities wrong.

Well, I probably know even less about your former religion than you do, but I'm guessing - and some quick google-fu seems to confirm - that while you are of course correct about what you were taught, the majority of Jews would not subscribe to this claim.

You hail from Orthodox Judaism, a sect that contains mostly those who didn't reject the more easily-disproved elements of Judaism (and indeed seems to have developed new beliefs guarding against such changes, such as the concept of a "written and oral Talmud" that includes the teachings of earlier authorities). Most Jews (very roughly 80%) belong to less extreme traditions, and thus, presumably, are less likely to discover flaws in them. Much like the OP, who belonged to a subset of Mormons who believe in secret polar Israelites.

I am also suspicious that any recounting whatsoever of what went wrong will be greeted by, "But that's not exactly what the most sophisticated theologians say, even if it's what you remember perfectly well being taught in school!"

Again, imagine a creationist claiming that they were taught in school that a frog turned into a monkey, dammit, and you're just trying to disguise the lies you're feeding people by telling them they didn't understand properly! If a claim is true, it doesn't matter if a false version is being taught to schoolchildren (except insofar as we should probably stop that.) That said, disproving popular misconceptions is still bringing you closer to the truth - whatever it is - and you, personally, seem to have a fair idea of what the most sophisticated theologians are claiming in any case, and address their arguments too (although naturally I don't think you always succeed, I'm not stupid enough to try and prove that here.)

Replies from: Estarlio
comment by Estarlio · 2013-04-14T21:29:46.058Z · LW(p) · GW(p)

The trouble with this heuristic is it fails when you aren't right to start with.

Disbelieving based on partial knowledge is different from disbelieving based on mistaken belief.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-15T11:23:28.792Z · LW(p) · GW(p)

I'm not sure what you mean by this.

I mistakenly believe that learning more about something will not change my probability estimate, because the absurdity heuristic tells me it's too inferentially distant to be plausible - which has the same results if you are distant from reality and the claim is true, or correct and the claim is false.

Replies from: Estarlio
comment by Estarlio · 2013-04-15T20:19:13.213Z · LW(p) · GW(p)

I'm not sure what you mean by this.

Being mistaken about something is different from not knowing everything there is to know about it.

If I'm wrong about a subject, then I don't know everything there is to know about it (assuming I'm reasoning correctly on what I know.)

But if I don't know everything there is to know about a subject, then I'm not necessarily wrong about that subject.

The former entails the latter, but the latter does not entail the former. One doesn't need a degree in biology to correct, or be corrected, about the frog thing - any more than one needs a degree in sky wizardry to correct or be corrected about god.

Given that you can't know everything about even relatively narrow subject areas these days, (with ~7 billion humans on Earth we turn out a ridiculous amount of stuff) what we're really dealing with here is an issue of trust: When someone says that you need to know more to make a decision, on what grounds do you decide whether or not they're just messing you around?

There's a major dis-analogy, in that regard, between how the frog-based anti-evolutionist's (AE) and the atheist's (AT) questions are going to be addressed.

When the AE challenges evolution there are obvious touchstones, ideally he's told that the frog thing never happened and given a bunch of stuff he can go look up if he's interested. When the AT challenges theology he's told that he doesn't know enough, i.e. he hasn't exhausted the search space, but he's not actually pointed at anything that addresses his concern. It's more a sort of “Keep looking until you find something. Bwahahahaaa, sucker.” response.

That happens because of what evidence does and how we get it. Say you're trying to decide whether the Earth is flat: to discover that it's vaguely spherical doesn't take a lot of evidence. I could drive to a couple of different locations and prove it to a reasonable degree of accuracy with sticks - it would not be difficult. (Or I could ask one of my friends in another city to take a measurement for me, but regardless the underlying methodology remains more or less the same.) That's an Eratosthenes level of understanding (~200 BC). To discover that the Earth's shape is actually closer to an oblate spheroid, however, you need at least a Newton level of understanding (~1700 AD) to predict that its spin ought to make it bulge around the equator.
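
(As a minimal illustration of that stick-based measurement, not something from the original comment: Eratosthenes's method needs only the difference in shadow angle at two sites and the distance between them. The numbers below are the rough classical ones, used purely for illustration.)

    # Rough sketch of the Eratosthenes-style calculation (illustrative numbers only):
    # the sun's rays are nearly parallel, so the difference in shadow angle between
    # two sites equals the angle those sites subtend at the Earth's centre.

    angle_difference_deg = 7.2        # shadow-angle difference between the two sites
    distance_between_sites_km = 800   # rough north-south distance between them

    circumference_km = distance_between_sites_km * 360 / angle_difference_deg
    print(f"Estimated circumference: {circumference_km:.0f} km")  # ~40,000 km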

Evidence is something like, 'that which alters the conditional probability of something being observed.' But not all evidence alters the probability to the same degree. If you're off by a lot, a little bit of evidence should let you know. The more accurate you want to get the more evidence you need. Consequently, knowledge of search spaces tends to be ordered by weightiness of evidence unless the other person is playing as a hostile agent.
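
(A minimal sketch of that point, not part of the original comment: treating a piece of evidence as a likelihood ratio, a weak ratio barely moves a low prior while a strong one moves it a long way. The prior and the ratios below are made-up illustrative numbers.)

    # Illustrative Bayesian update: posterior odds = prior odds * likelihood ratio.
    # All numbers here are invented for illustration.

    def update(prior: float, likelihood_ratio: float) -> float:
        """Posterior probability after evidence with the given likelihood ratio,
        i.e. P(evidence | hypothesis) / P(evidence | not hypothesis)."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    prior = 0.01                # start out thinking the hypothesis is very unlikely
    print(update(prior, 2))     # ~0.02 -- weak evidence barely moves the estimate
    print(update(prior, 1000))  # ~0.91 -- strong evidence corrects a belief that was far off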

Even to ask the trickier questions that need that evidence requires a deep understanding that you have tuned in from a more general understanding. The odds that you'll ask a relevant question without that understanding, just by randomly mooshing concepts together, are slim.

Now the AT probably doesn't know a lot about religion. Assuming that the atheist is not a moron just randomly mooshing concepts together, her beliefs would be off by a lot; she seems likely to disagree with the theist about something fairly fundamental about how evidence is meant to inform beliefs.

So, here the AT is sitting with her really weighty, super-massive black hole of a reason to disbelieve - and the response from the Christian is that he doesn't know everything about god. That response is missing the references that someone who actually had a reason they could point to would have. More importantly, that response claims that you need deep knowledge to answer a question that was asked with shallow knowledge.

The response doesn't even look the same as the response to the frog problem. Everyone who knows even a little bit about evolution can correct the frog fella. Whereas, to my knowledge, no Christian has yet corrected a rational atheist on his or her point of disbelief. (And if they have, why aren't they singing it from the rooftops - if they have, as one might call it, a knock-down argument, why aren't the door-to-door religion salesmen opening with that?)

Strictly speaking neither of them knows everything about their subjects, or likely even very much of the available knowledge. But one clearly knows more than the other, and there are things that such knowledge lets him do that the other can't: point us towards proof, and answer low-level questions with fairly hefty answers. He is accorded an appropriately higher trust in areas that we've not yet tested ourselves.

Of course I acknowledge the possibility that a Christian, or whoever, might be able to pull off the same stunt. But since I've never seen it, and never heard of anyone who's seen it, and I'd expect to see it all over the place if there actually were an answer lurking out there.... And since I've talked two Christians out of their beliefs in the past who'd told me that I just needed to learn more about religion, and know that someone who watched that debate lost their own faith as a consequence of being unable to justify their beliefs. (Admittedly I can't verify this to you, so it's just a personal proof.) It seems improbable to me that they've actually got an answer.

Of course if they have such an answer all they have to do is show it to me. In the same manner as the frog-person.

(I can actually think of one reason that someone who could prove god might choose not to: If you don't know about god, under some theologies, you can't go to hell. You can't win a really nice version of heaven either, but you get a reasonable existence. They had to pull that move because they didn't want to tell people that god sent their babies to hell.

However, this latter type of person would seem mutually exclusive with the sort of person who would be interested in telling you to look more deeply into religion to begin with. I'd imagine someone who viewed it as your taking on more duties in order to not go to hell probably ought to be in the business of discouraging you from joining or investigating religion.)

Anyway, yeah. I think you can subscribe to E's heuristic quite happily even in areas where you acknowledge that you're likely to be off by a long way.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T14:20:34.397Z · LW(p) · GW(p)

When the AE challenges evolution there are obvious touchstones, ideally he's told that the frog thing never happened and given a bunch of stuff he can go look up if he's interested. When the AT challenges theology he's told that he doesn't know enough, i.e. he hasn't exhausted the search space, but he's not actually pointed at anything that addresses his concern. It's more a sort of “Keep looking until you find something. Bwahahahaaa, sucker.” response.

I can assure you, I have personally seen atheists make arguments that are just as misinformed as the frog thingie.

For that matter, I've seen people who don't know much about evolution but are arguing for it tell creationists that a counterpoint to their claim exists somewhere, even though they don't actually know of such a "knock-down argument". And they were right.

Also, you seem to be modelling religious people as engaging in bad faith. Am I misreading you here?

The response doesn't even look the same as the response to the frog problem. Everyone who knows even a little bit about evolution can correct the frog fella.

Sure, but that was what we call an example. Creationists often make far more complex and technical-seeming arguments, which may well be beyond the expertise of the man on the street.

Whereas, to my knowledge, no Christian has yet corrected a rational atheist on his or her point of disbelief.

Maybe I parsed this wrong. Are you saying no incorrect argument has ever been made for atheism?

(And if they have why aren't they singing it from the rooftops - if they have as one might call it, a knock-down argument why aren't the door to door religion salesmen opening with that?)

Well, many do open with what they consider to be knock-down arguments, of course. But many such arguments are, y'know, long, and require considerable background knowledge.

And since I've talked two Christians out of their beliefs in the past who'd told me that I just needed to learn more about religion, and know that someone who watched that debate lost their own faith as a consequence of being unable to justify their beliefs. (Admittedly I can't verify this to you, so it's just a personal proof.) It seems improbable to me that they've actually got an answer.

If you have such an unanswerable argument, why aren't you "singing it from the rooftops"?

I think you can subscribe to E's heuristic quite happily even in areas where you acknowledge that you're likely to be off by a long way.

Minor point, but you realize EY wasn't the first to make this argument? And while I did invent this counterargument, I'm far from the first to do so. For example, Yvain.

Replies from: Estarlio
comment by Estarlio · 2013-04-19T18:34:40.221Z · LW(p) · GW(p)

I can assure you, I have personally seen atheists make arguments that are just as misinformed as the frog thingie.

For that matter, I've seen people who don't know much about evolution but are arguing for it tell creationists that a counterpoint to their claim exists somewhere, even though they don't actually know of such a "knock-down argument". And they were right.

Well, that's why I said ideally. Lots of people believe evolution as a matter of faith rather than reason. I'd tend to say it's a far more easily justified faith - after all, you can find the answers to the questions you're talking about very easily, or at least find the general direction they're in; the more rational people seem almost universally to believe in it; and it networks into webs of trust that seem to allow you to actually do things with your beliefs. But it's true that many people engage with it only superficially. You'd be foolish to believe in evolution just because Joe Blogs heard on TV that we evolved. Joe Blogs isn't necessarily doing any more thinking, if that's all he'll give you to go on, than if he'd heard from his pastor that god did it all.

Joe Blogs may be able to give you good reasons for believing in something without giving you an answer on your exact point - but more generally you shouldn't believe it if all he's got in his favour is that he does and he's got unjustified faith that there must be an answer somewhere.

A heuristic tends towards truth, it's the way to bet. There are situations where you follow the heuristic and what you get is the wrong answer, but the best you can do with the information at hand.

Also, you seem to be modelling religious people as engaging in bad faith. Am I misreading you here?

I consider someone who, without good basis, tells you that there's an answer and doesn't even point you in its direction, to be acting in bad faith. That's not all religious people but it seems to me at the moment to be the set we'd be talking about here.

Sure, but that was what we call an example. Creationists often make far more complex and technical-seeming arguments, which may well be beyond the expertise of the man on the street.

Maybe so, but going back to our heuristics those arguments don't hook into a verifiable web of trust.

In case I wasn't clear earlier: I do believe that when many people believe in something with good basis they're often believing in the work of a community that produces truth according to certain methods - that what's being trusted is mostly people and little bits here and there that you can verify for yourself. What grounds do you have for trusting that pastors, or whoever, know much about the world - that they're good and honest producers of truth?

Maybe I parsed this wrong. Are you saying no incorrect argument has ever been made for atheism?

No, I'm saying that to my knowledge no Christian has yet corrected someone who's reasonably rational on their reason for disbelieving.

Well, many do open with what they consider to be knock-down arguments, of course. But many such arguments are, y'know, long, and require considerable background knowledge.

Knockdown arguments about large differences of belief tend to be short, because they're saying that someone's really far off, and you don't need a lot of evidence to show that someone's a great distance out. Getting someone to buy into the argument may be more difficult if they don't believe that argument is a valid method (and a great many people don't, really), but the argument itself should be quite small.

If someone's going to technicality you to death, that's a sign that their argument is less likely to be correct if they're applying it to a large difference of belief. Scientists noticeably don't differ on the large things - they might have different interpretations of precise matters but the weight of evidence when it comes to macroscopic things is fairly overwhelming.

If you have such an unanswerable argument, why aren't you "singing it from the rooftops"?

I don't think that people who believe in god are necessarily worse off than people who don't. If you could erase belief in god from the world, I doubt it would make a great deal of difference in terms of people behaving rationally. If anything I'd say that the reasons that religion is going out of favour have more to do with a changing moral character of society and the lack of an ability to provide a coherent narrative of hope than they do with a rise of more rationally based ideologies.

Consequently, it's not an efficient use of my time. While you can say 'low probability prior, no supporting evidence, no predictive power,' in five seconds, that's going to make people who don't have a lot of intellectual courage recoil from what you're suggesting - if they understand it at a gut level at all - and in any case teaching the tools to understand what that means can take hours. And teaching someone to bring their emotions in line with justified beliefs can take months or years on top of that. Especially if you're going to have to sit down with them and walk them through all the steps to come to a belief that they don't really want very much in the first place.

Okay, sure, 'that which can be destroyed by the truth should be' - but at what cost, in what order? Don't you have better things to do with your time than pick on Christians whose lives may even be made worse by your doing so if they don't subsequently become more rational and develop well-actualised theories of happiness and so on? Can you really provide a better life than a belief in god does for them? Even if you assume that making someone disbelieve god is a low-effort task, it wouldn't be as simple as just having someone disbelieve if you were to do it to promote their interests.

If there's a more efficient way of doing it then I might be up for that, but I'm just more generally interested in raising the sanity waterline than I am in swatting individual beliefs here and there.

Minor point, but you realize EY wasn't the first to make this argument? And while I did invent this counterargument, I'm far from the first to do so. For example, Yvain.

I do, yes. I was made to read Dawkins's awful book a few years back in school. =p

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T19:56:05.286Z · LW(p) · GW(p)

Well, that's why I said ideally. Lots of people believe evolution as a matter of faith rather than reason.

Sorry, I was saying I agreed with them. You don't have to know every argument for a position to hold it, you just have to be right.

Mind you, I generally do learn the arguments, but I'm weird like that.

I consider someone who, without good basis, tells you that there's an answer and doesn't even point you in its direction, to be acting in bad faith. That's not all religious people but it seems to me at the moment to be the set we'd be talking about here.

I'm talking more about the set of everybody who tells you to read the literature. Sure, it's a perfectly good heuristic as long as you only use it when you're dealing with that particular subset.

What grounds do you have for trusting that pastors, or whoever, know much about the world - that they're good and honest producers of truth?

Well, I was thinking more theologians, but to be fair they're as bad as philosophers. Still, they've spent millennia talking about this stuff.

No, I'm saying that to my knowledge no Christian has yet corrected someone who's reasonably rational on their reason for disbelieving.

Sorry, but I'm going to have to call No True Scotsman on this. How many theists who were rational in their reasons for believing have been corrected by atheists? How many creationists who were rational in their reasons for disbelieving in evolution have been corrected by evolutionists?

I don't think that people who believe in god are necessarily worse off than people who don't. If you could erase belief in god from the world, I doubt it would make a great deal of difference in terms of people behaving rationally.

Point.

Um ... as a rationalist and the kind of idiot who exposes themself to basilisks, could you tell me this argument? Maybe rot13 it if you're not interested in evangelizing.

I do, yes. I was made to read Dawkins's awful book a few years back in school. =p

Man, I'd forgotten that was the first place I came across that. Ah, nostalgia ... terrible book, though.

Replies from: Estarlio, Estarlio
comment by Estarlio · 2013-04-28T21:24:48.549Z · LW(p) · GW(p)

Comment too long - continued from last:

Point.

Um ... as a rationalist and the kind of idiot who exposes themself to basilisks, could you tell me this argument? Maybe rot13 it if you're not interested in evangelizing.

V fhccbfr gung'f bxnl.

Gur svefg guvat abgr vf gung vs lbh ybbx ng ubj lbh trg rivqrapr, jung vg ernyyl qbrf, gura V'ir nyernql tvira bar: Ybj cevbe, (r.t. uvtu pbzcyrkvgl,) ab fhccbegvat rivqrapr. Crefbanyyl gung'f irel pbaivapvat. V erzrzore jura V jnf lbhatre, naq zl cneragf jrer fgvyy va gurve 'Tbbq puvyqera tb gb Puhepu' cunfr, zl pbhfva, jub jnf xvaqn fjrrg ba zr, fnvq gb zr 'Jul qba'g lbh jnag gb tb gb Puhepu? Qba'g lbh jnag gb tb gb urnira?' naq V nfxrq gurz 'Qba'g lbh jnag gb tb gb Aneavn? Fnzr guvat.' N ovg cvguvre creuncf ohg lbh trg gur cbvag, gur vqrn bs oryvrivat vg jvgubhg fbzrbar cbalvat hc rivqrapr unf nyjnlf orra bqq gb zr - creuncf whfg orpnhfr V jnf fb hfrq gb nqhygf ylvat ol gur gvzr V jnf byq rabhtu gb haqrefgnaq gur vqrn bs tbq ng nyy.

Ohg gur cbvag vf, bs pbhefr, jung pbafgvghgrf rivqrapr? Vg zvtug frrz yvxr gurer'f jvttyr ebbz gurer, ng yrnfg vs lbh ernyyl jnag gb or pbaivaprq bs n tbq. Bar nafjre vf gung rivqrapr qbrf fbzrguvat gb gur cebonovyvgl bs na bofreingvba - vs lbh bhgchg gur fnzr cerqvpgrq bofreingvbaf ertneqyrff bs gur rivqrapr, gura vg'f whfg n phevbfvgl fgbccre engure guna rivqrapr.

Fb, ornevat gung va zvaq: Gurer ner znal jnlf bs cuenfvat gur nethzrag sbe tbq jura lbh'er gelvat gb svyy va gung rivqrapr - frafvgvivgl gb vavgvny pbaqvgvbaf vf creuncf gur zbfg erfcrpgnoyr bar gb zl zvaq - ohg abar bs gurz frrz gb zrna n guvat jvgubhg gur sbyybj nethzrag, be nethzragf gung ner erqhpvoyr gb vg, ubyqvat:

'Gurer vf n tbq orpnhfr rirelguvat gung rkvfgf unf n pnhfr & yvxr rssrpgf ner nyvxr va gurve pnhfrf.'

Vs lbh qba'g ohl vagb gung gura, juvyr lbh'ir fgvyy tbg inevbhf jnlf gb qrsvar tbq, lbh'ir tbg ab ernfba gb. (Naq vg'f abg vzzrqvngryl pyrne ubj gubfr bgure jnlf trgf lbh nalguvat erfrzoyvat rivqrapr gung lbh pna gura tb ba gb hfr.) Rira jvgu ernfba/checbfr onfrq gurbybtvrf, yvxr Yrvoavm, gur haqreylvat nffhzcgvba vf gb nffhzr gung guvatf ner gur fnzr - 'Jung vf gehr bs [ernfbaf sbe gur rkvfgrapr bs] obbxf vf nyfb gehr bs gur qvssrerag fgngrf bs gur jbeyq, sbe gur fgngr juvpu sbyybjf vf....' Gurer ur'f nffhzvat gung obbxf unir n ernfba naq gung gur jbeyq orunirf va gur fnzr jnl, uvf npghny nethzrag tbrf ba gb nffhzr n obbx jvgu ab nhgube naq rffragvnyyl eryvrf ba gur vaghvgvba gung jr unir gung guvf jbhyq or evqvphybhf, juvpu gb zl zvaq znxrf uvf nethzrag erqhpvoyr gb gur jngpuznxre nethzrag.

Nalubb.

Lbh pna trg nebhaq gur jngpuznxre guvatl yvxr guvf:

1) Rirelguvat gung rkvfgf unf n pnhfr.

Guvf bar'f abg jbegu nethvat bire. N cevzr zbire qbrfa'g, bs vgfrys, vzcyl n fragvrag tbq va gur frafr pbzzbayl zrnag. V qba'g xabj jurgure gurer jnf be jnfa'g n cevzr zbire, V fhfcrpg jr qba'g unir gur pbaprcghny ibpnohynel gb npghnyyl gnyx nobhg perngvba rk-avuvyb va n zrnavatshy jnl.

2) Yvxr rssrpgf ner nyvxr va gurve pnhfrf.

Guvf vf gur vzcbegnag bar.

Gur nffhzcgvba vf gung lbh'ir tbg n qrfvtare va gur fnzr jnl jr qrfvta negrsnpgf - uvtu pbzcyrkvgl cerffhcbfr bar, cerfhznoyl. Ubjrire, gung qbrfa'g ernyyl yvar hc jvgu ubj vairagvba jbexf:

Vs lbh jrer whfg erylvat ba trargvp tvsgf - VD be jung unir lbh - gura lbh'q trg n erthyne qvfgevohgvba jura lbh tencurq vairagvba ntnvafg VD. Ohg lbh qba'g. Lbh qba'g trg n Qnivapv jvgubhg n Syberapr. Be ng gur irel yrnfg jvgubhg gur vagryyrpghny raivebazrag bs n Syberapr. Gur vqrn gung crbcyr whfg fvg gurer naq pbzr hc jvgu vqrnf bhg bs guva nve vf abafrafr. Gur vqrn gung lbh hfr gb perngr fbzrguvat pbzr sebz lbhe rkcrevraprf va gur jbeyq vagrenpgvat jvgu gur fgehpgher bs lbhe oenva. Vs lbh ybpx fbzrbar va frafbel qrcevingvba sbe nyy gurve yvsr, gura lbh'er abg tbvat gb trg ahpyrne culfvpf bhg bs gurz ng gur bgure raq. Tneontr va tneontr bhg.

Vs yvxr rssrpgf ernyyl ner nyvxr va gurve pnhfrf, gura lbh qba'g trg n tbq jvgubhg n jbeyq. Gur vasbezngvba sbe perngvba qbrfa'g whfg zntvpnyyl nccrne hcba cbfvgvat n perngbe. Naq vs vasbezngvba vf va jbeyqf, engure guna perngbef, nf frrzf gb or gur pnfr vs lbh'er fnlvat yvxr rssrpgf yvxr pnhfrf, gura jul cbfvg n tbq ng nyy? Gur nffhzcgvba qbrfa'g qb nal jbex - abguvat zber unir orra rkcynvarq nobhg jurer gur vasbezngvba naq fgehpgher bs gur jbeyq pnzr sebz nsgre lbh'ir znqr gur nffhzcgvba guna jnf znqr orsber.

Vg'f n snveyl cbchyne zbir va gurbybtl gb pynvz gung lbh pna'g xabj gur zvaq bs tbq. Ohg rira pnyyvat vg n zvaq znxrf n ybg bs nffhzcgvbaf - naq jura lbh fgneg erzbivat gubfr nffhzcgvbaf naq fnlvat fghss gb trg bhg bs gur nobir nethzrag yvxr 'jryy, gur vqrn jnf nyjnlf gurer, va Tbq' jung ner lbh ernyyl qbvat gung'f qvssrerag gb cbfvgvat na haguvaxvat cevzr zbire? Ubj qbrf vasbezngvba va n fgngvp fgehpgher pbafgvghgr n zvaq ng nyy?

Jurer qvq gur vasbezngvba gb trg gur jbeyq pbzr sebz? V qba'g xabj, ohg hayrff lbh pna fnl ubj tbq znqr gur jbeyq - jurer ur tbg uvf vqrnf sebz - gur cerzvfr vf whfg... gur jbeyq jbhyq ybbx gur fnzr gb lbh jurgure tbq jnf gurer be abg, fb jung lbh'er gnyxvat nobhg qbrfa'g pbafgvghgr rivqrapr bs gurve rkvfgrapr. Lbh unir gb xabj gur angher bs tbq, rira vs whfg va trareny grezf, gb qvfgvathvfu vg sebz n cevzr zbire. Fhccbfvat na ntrag va gur svefg cynpr jnf zrnag gb or jung tbg lbh bhg bs gung ceboyrz naq jura vg qbrfa'g....

Gung gb zl zvaq vf n snveyl nofbyhgr nethzrag ntnvafg tbq. Gura lbh'ir whfg tbg uvf cevbe cebonovyvgl naq jungrire culfvpny cebbsf gung fcrpvsvp eryvtvbaf cerffhcbfr, gung lbh'q irevsl ba gurve bja zrevgf; v.r. ceviryvqtrq vasbezngvba gung pbhyq bayl unir pbzr sebz zrrgvat fbzrguvat tbqyl-cbjreshy, (abar bs juvpu frrzf gb unir ghearq hc lrg.)

V qba'g xabj, znlor lbh qba'g svaq gur nethzrag pbaivapvat - gur uvg engr va gung ertneq vfa'g cnegvphyneyl uvtu. Ohg V'ir abg sbhaq n aba-snvgu-onfrq nethzrag gung guvf qbrfa'g znc bagb va fbzr sbez be nabgure lrg.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-29T22:04:42.408Z · LW(p) · GW(p)

Thank you for sharing. It was, I must say, probably the best-posed argument for atheism I've ever read, and I could probably go on for days about why it doesn't move me. So I won't.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-29T22:50:34.634Z · LW(p) · GW(p)

Chicken!

Replies from: MugaSofer
comment by MugaSofer · 2013-05-01T18:22:53.098Z · LW(p) · GW(p)

Estarlio has specifically stated that they consider arguing over this a waste of their time. To be honest, so do I.

comment by Estarlio · 2013-04-28T21:24:35.004Z · LW(p) · GW(p)

Sorry it's taken so long to reply. I'm easily distracted by shiny objects and the prospect of work.

Let's see:

Sorry, I was saying I agreed with them. You don't have to know every argument for a position to hold it, you just have to be right.

It seems to me at the moment that you don't know if you're right. So while you don't have to know every argument for a position to hold it, if you're interested in producing truth, it's desirable to have evidence on your side - either via the beliefs of others who have a wider array of knowledge on the subject than yourself and are good at producing truth or via knowing the arguments yourself.

Mind you, I generally do learn the arguments, but I'm weird like that.

I never have the time to learn all the arguments. Though I tend to know a reasonable number by comparison to most people I meet I suppose - not that that's saying much.

I'm talking more about the set of everybody who tells you to read the literature. Sure, it's a perfectly good heuristic as long as you only use it when you're dealing with that particular subset.

Ah, more generally then, that depends on who's telling you to do it and what literature they're telling you to read. If someone's asking you to put in a fairly hefty investment of time then it seems to me that requires a fairly hefty investment of trust, sort of like "Let's see some cards before we start handing over money." You don't have to see the entirety of their proof up front, but if they can't provide at least a short version and haven't given you any other reason to respect their ability to find truth....

Like if gwern or someone told me that there was a good proof of god in something - I've read gwern's website and respect their reasoning - that would make me inclined to do it. If I saw priests and the like regularly making coherent arguments and they had that visible evidence in their ability to find truth, then they'd get a similar allowance. But it's like they don't want to show their cards at the moment - or aren't holding any - and whenever I've given them the allowance anyway it's turned out to be a bit of a waste. So that trust's not there for them anymore.

Well, I was thinking more theologians, but to be fair they're as bad as philosophers. Still, they've spent millennia talking about this stuff.

That's true. I just wonder - it's not well ordered or homogeneous.

If everyone was writing about trivial truths then you'd expect it to mostly agree with itself - lots of people saying more or less the same stuff. If it was deep knowledge then you'd expect the deep knowledge to be on the top of the heap. Insights relevant to a widely felt need impose an ordering effect on the search space. Which is to say, lots of people know about them because they're so useful.

It's entirely possible they've just spent millennia talking about not very much at all. I mean you read Malebranche, for instance, and he was considered at the time to be doing very good work. But when you read it, it's almost infantile in its misunderstandings. If that's what passed muster, it doesn't imply good things about what they were doing with the rest of their two thousand years or so.

I'm not sure whether that's particularly clear, reading it back. When people are talking sense then the people from previous eras don't appear to pass muster to people from modern eras. They might appear smart, but they're demonstrably wrong. If Malebranche is transparently wrong to me, and I'm not especially familiar with Christian works, nor am I the smartest man who ever lived - I've met one or two people in my life I consider as smart as myself.... That's not something that looks like an argument that's the product of thousands of years of meaningful work, or that could survive as something respectable in an environment where thousands of years of work had been put in.

Sorry, but I'm going to have to call No True Scotsman on this. How many theists who were rational in their reasons for believing have been corrected by atheists? How many creationists who were rational in their reasons for disbelieving in evolution have been corrected by evolutionists?

What difference does either of those make to the claim about atheistic rationalists? I'm not making a universal claim that all rationalists are atheistic; I'm making a claim about the group of people who are rationalists and are atheistic.

NTS would be if I said no rational atheist had, to my knowledge, ever been corrected on their point of disbelief by a Christian and you said something like,

"Well, Elizer is a rationalist and he's become a Christian after hearing my really awesome argument."

And then I was all, "Well obviously Eliezer's a great big poopy-head rather than a rationalist."

To my mind, Eliezer and a reasonable distribution of other respectable rationalists becoming Christians in response to an argument (so that we know it's not just a random mental breakdown) would be very hefty evidence in favour of there being a good argument for being Christian out there.

However, to answer your questions: I don't know on the creationist front, but on the Christian front I personally know of ... actually, now that I think about it, I know of four; one of my friends in the US changed his mind too.

I do know of one person who's gone the other way too. But not someone that I'd considered particularly rational before they did so.

comment by JohnH · 2013-04-13T21:54:22.712Z · LW(p) · GW(p)

I believe the result is that atheists have an above-average knowledge of world religions, similar to Jews (and Mormons), but I don't know of results that show they have an above-average knowledge of their previous religion. Assuming most of them were Christians, the answer is possibly.

In this particular case I happen to know precisely what is in all of the official church material; I will admit to having no idea where his teachers may have deviated from church publications, hence my wondering where he got those beliefs.

I suppose I can't comment on what the average believer of various other sects knows of their sect's beliefs, only on what I know of their sects' beliefs. Which leaves the question of the plausibility that I know more than the average believer of, say, Catholicism or Evangelical Christianity or other groups not my own.

[edit] Eliezer, I am not exactly new to this site and have previously responded in detail to what you have written here. Doing so again would get the same result as last time.

comment by Baruta07 · 2013-04-14T19:50:23.212Z · LW(p) · GW(p)

IIRC the standard experimental result is that atheists who were raised religious have substantially above-average knowledge of their former religions.

As a Grade 11 student currently attending a Catholic school (and having attended Christian schools all my life) I would have to vouch for the accuracy of the statement; thanks to CCS I've learned a tremendous amount about Christianity, although in my case there was a lot less "homosexuality is bad" than is probably the norm and more focus on the positive moral aspects...

I currently attend Bishop Carroll HS and, even though it is a Catholic school, I have no desire to change schools, because of the alternate religious courses they offer and because it's generally a great school. From my experience there are a ton of non-religious students as well as several more unusual religions represented. I personally would recommend the school for any HS students in Calgary wishing to have a non-standard HS experience.

comment by Vaniver · 2013-04-14T18:55:13.197Z · LW(p) · GW(p)

IIRC the standard experimental result is that atheists who were raised religious have substantially above-average knowledge of their former religions.

How much of this effect do you think is due to differences in intelligence?

comment by Vaniver · 2013-04-14T18:58:45.086Z · LW(p) · GW(p)

What I am wondering about is why it seems that atheists have complete caricatures of their previous theist beliefs.

Suppose there is diversity within a religion, on how much the sensible and silly beliefs are emphasized. If the likelihood of a person rejecting a religion is positively correlated with the religion recommending silly beliefs, then we should expect that the population of atheist converts should have a larger representation of people raised in homes where silly beliefs dominated than the population of theists. That is, standard evaporative cooling, except that the reasonable people who leave become atheists, and similarly reasonable people who are in a 'warm' religious setting can't relate. (I don't know if there is empirical support for this model or not.)

comment by Estarlio · 2013-04-14T22:15:37.453Z · LW(p) · GW(p)

I know that atheists can deal with a lot of prejudice from believers about why they are atheists so I would think that atheists would try and justify their beliefs based on the best beliefs and arguments of a religion and not extreme outliers for both, as otherwise it plays to the prejudice.

Really? I don't think it takes an exceptional degree of rationality to reject religion.

I suspect what you mean is that atheists /ought/ to justify their disbelief on stronger grounds than the silliest interpretation of their opponent's beliefs. Which is true: you shouldn't disbelieve that there's a god on the grounds that one branch of one religion told you the royal family were aliens or something - that's just an argument against a specific form of one religion, not against god in general.

But I suspect the task would get no easier for religion if it were facing off against more rational individuals, who'd want the strongest form of the weakest premise. (In this case I suspect something like: What you're talking about is really complex/improbable, before we get down to talking about the specifics of any doctrine, where's your evidence that we should entertain a god at all?)

What I am wondering about is why it seems that atheists have complete caricatures of their previous theist beliefs.

Selection bias maybe? You're talking to the atheists who have an emotional investment in debating religion. I'd suspect that those who'd been exposed to the sillier beliefs would have greater investment, and that stronger rationalists would have a lower investment or a higher investment in other pursuits. Or maybe atheists tend to be fairly irrational. shrug

comment by atomliner · 2013-04-14T10:30:56.656Z · LW(p) · GW(p)

I was not trying to justify my leaving the Mormon Church in saying I used to believe in the extraordinary interpretations I did. I just wanted to say that my re-education process has been difficult because I used to believe in a lot of crazy things. Also, I'm not trying to make a caricature of my former beliefs; everything I have written here about what I used to believe I will confirm again as an accurate depiction of what was going on in my head.

I think it is a misstatement of yours to say that these beliefs have "absolutely no relation to... anything else that is found in scripture or in the teachings of the church". They obviously have some relation, being that I justified these beliefs using passages from The Family: A Proclamation to the World, Journal of Discourses and Doctrine & Covenants, pretty well-known LDS texts. I showed these passages in another reply to you.

Replies from: CCC
comment by CCC · 2013-04-14T10:47:16.692Z · LW(p) · GW(p)

They obviously have some relation, being that I justified these beliefs using passages from The Family: A Proclamation to the World, Journal of Discourses and Doctrine & Covenants, pretty well-known LDS texts. I showed these passages in another reply to you.

In all fairness, JohnH wrote his post before you showed him those passages. So that data was not available to him at the time of writing.

comment by CCC · 2013-04-13T21:29:20.485Z · LW(p) · GW(p)

Some of that might be because of evaporative cooling. Reading the sequences is more likely to cause a theist to ignore Less Wrong than it is to change their beliefs, regardless of how rational or not a theist is.

I agree intuitively with your second sentence (parsing 'beliefs' as 'religious beliefs'); but as I assign both options rather low probabilities, I suspect that it isn't enough to cause much in the way of evaporative cooling.

but fairly hostile to those that try and say that religion is not irrational

I haven't really seen that hostility, myself.

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion.

Hmmm. It seems likely that the non-standard forms have glaring flaws; close inspection finds the flaws, and a proportion of people therefore immediately assume that all religions are equally incorrect. Which is flawed reasoning in and of itself; if one religion is flawed, this does not imply that all are flawed.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-14T18:33:01.797Z · LW(p) · GW(p)

but fairly hostile to those that try and say that religion is not irrational

I haven't really seen that hostility, myself.

I think John means "hostility" more in the sense of "non-receptiveness" rather than actively attacking those who argue for theism.

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion.

Hmmm. It seems likely that the non-standard forms have glaring flaws; close inspection finds the flaws, and a proportion of people therefore immediately assume that all religions are equally incorrect.

Yup, this seems to fit.

Replies from: JohnH, CCC
comment by JohnH · 2013-04-14T19:04:30.604Z · LW(p) · GW(p)

Being called a moron seems hostile to me, just to use an example right here.

Replies from: CCC, MugaSofer, Kawoomba
comment by CCC · 2013-04-14T19:18:02.375Z · LW(p) · GW(p)

That was certainly hostile, yes. However, I take the fact that the post in question is at -10 karma to suggest that the hostility is frowned upon by the community in general.

comment by MugaSofer · 2013-04-15T11:10:40.725Z · LW(p) · GW(p)

Sorry, I should have specified "except for Kawoomba".

comment by Kawoomba · 2013-04-14T19:07:50.968Z · LW(p) · GW(p)

That which can be destroyed by the truth should be. Also, the spelling is unchanged, and I'd just seen a certain Tarantino movie.

Edit: Also, politeness has its virtues and is often more effective in achieving one's goals - yet Crocker's Rules are certainly more honest. Checking the definition of moron - at least as it pertains to that aspect of a person's belief system - I mean, who would seriously dispute its applicability, even before South Park immortalized Joseph Smith's teachings?

Replies from: PhilGoetz, CCC
comment by PhilGoetz · 2013-04-14T19:58:14.959Z · LW(p) · GW(p)

I dispute its applicability, because I've known very smart Mormons. Humans are not logic engines. It's rare to find even a brilliant person who doesn't have some blind spot.

Even if it were clinically applicable, you presented it as an in-group vs. out-group joke, which is an invitation for people from one tribe to mock people from another tribe. Its message was not primarily informational.

Crocker's Rules are not an invitation to be rude.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-14T20:05:42.280Z · LW(p) · GW(p)

I don't doubt there are Mormons with a higher IQ than myself, and more knowledgeable in many fields. Maybe the term "stupid person" is too broad; I meant it with Mormonism as the referent, and as limited in scope to that. Yet it is disheartening that there are such obvious self-deceiving failures of reasoning, and courtesy afforded to dumb beliefs may prop up the Potemkin village, may help hide the elephant behind the curtain.

Reveal Oz in the broad daylight of reason, so that those very smart Mormons you know must address that blind spot.

Replies from: JohnH
comment by JohnH · 2013-04-14T20:09:31.232Z · LW(p) · GW(p)

Calling us morons doesn't reveal anything to reason or even attempt to force me to address what you may think of as a blind spot.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-14T20:23:15.848Z · LW(p) · GW(p)

It stands to reason that if you've successfully read even parts of the Sequences, or other rationality-related materials, and yet believe in the Book of Mormon, there's little that will force you to address that blind ... area ..., so why not shock therapy. Or are you just looking forward too much to your own planet / world? (Brigham Young, Journal of Discourses 18:259) Maybe that's just to be taken metaphorically though, for something or other?

Replies from: JohnH
comment by JohnH · 2013-04-14T21:13:12.693Z · LW(p) · GW(p)

Why go to the Journal of Discourses? D&C 132 clearly states that those that receive exaltation will be gods; the only question is whether that involves receiving a planet or just being part of the divine council. The Bible clearly states that we will be heirs and joint heirs with Christ. The Journal of Discourses is not something that most members look to for doctrine as it isn't scripture. I, and any member, am free to believe whatever I want to on the subject (or say we don't know) because nothing has been revealed on the subject of exaltation and theosis other than that.

Personally, I think there are some problems with the belief that everyone will have a planet due to some of the statements that Jesus makes in the New Testament, but I could be wrong and I am not about to explain the subject here, though I may have attempted to do so in the past.

comment by CCC · 2013-04-15T08:36:06.494Z · LW(p) · GW(p)

Crocker's Rules are not an excuse for you to be rude to others. They are an invitation for others to ignore politeness when talking to you. They are not an invitation for others to be rude to you for the sake of rudeness, either; only where it enables some other aim, such as efficient transfer of information.

What you did, when viewed from the outside, is a clear example of rudeness for the sake of rudeness alone. I don't see how Crocker's rules are relevant.

comment by CCC · 2013-04-14T18:58:53.564Z · LW(p) · GW(p)

I think John means "hostility" more in the sense of "non-receptiveness" rather than actively attacking those who argue for theism.

Ah. To my mind, that would be 'neutrality', not 'hostility'.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-15T11:30:51.380Z · LW(p) · GW(p)

Ironically, this turned out not to be the case; he was thinking of Kawoomba, our resident ... actually, I'd assumed he only attacked me on this sort of thing.

Replies from: CCC
comment by CCC · 2013-04-15T17:32:23.629Z · LW(p) · GW(p)

Ironically, this turned out not to be the case

A common problem when one person tries to explain the words of another to a third party, yes.

Funny thing - I had a brief interaction over private messaging with Kawoomba on the subject of religion some time back, and he seemed reasonable at the time. Mildly curious, firmly atheistic, and not at all hostile.

I'm not sure if he changed, or if he's hostile to only a specific subcategory of theists?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T14:25:23.188Z · LW(p) · GW(p)

As I said, I'd assumed it was just me; we got into a rather lengthy argument some time ago on whether human ethics generalize, and he's been latching onto anything I say that's even tangentially related ever since. I'm not sure why he's so eager to convince me, since he believes his values are incompatible with mine, but it seems it may have something to do with him pattern-matching my position with the Inquisition or something.

comment by [deleted] · 2013-04-14T20:10:15.920Z · LW(p) · GW(p)

Have you noticed any difference between first and second generation atheists, in regard to caricaturing or contempt for religion?

comment by MugaSofer · 2013-04-13T18:15:53.624Z · LW(p) · GW(p)

Reading the sequences is more likely to cause a theist to ignore Less Wrong than it is to change their beliefs, regardless of how rational or not a theist is.

Really? I would have expected most aspiring rationalists who happen to be theists to be mildly irritated by the anti-theism bits, but sufficiently interested by the majority that's about rationality. Might be the typical mind fallacy, though.

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion.

I would assume this is because the standard version of major religions likely became so by being unusually resistant to deconversion - including through non-ridiculousness.

EDIT: also, I think those were intended as examples of things irrational people believe, not necessarily Mormons specifically.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-13T18:44:37.427Z · LW(p) · GW(p)

I would have expected most aspiring rationalists who happen to be theists to be mildly irritated by the anti-theism bits

Well, I don't strongly identify as a theist, so it's hard for me to have an opinion here.

That said, if I imagine myself reading a variant version of the sequences (and LW discourse more generally) which are anti-some-group-I-identify-with in the same ways.... for example, if I substitute every reference to the superiority of atheism to theism (or the inadequacy of theism more generally) with a similar reference to the superiority of, say, heterosexuality to homosexuality (or the inadequacy of homosexuality more generally), my emotional response is basically "yeah, fuck that shit."

Perhaps that's simply an indication of my inadequate rationality.

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion.

I would assume this is because the standard version of major religions likely became so by being unusually resistant to deconversion - including through non-ridiculousness.

That's possible. Another possibility is that when tribe members talk about their tribe, they frequently do so charitably (for example, in nonridiculous language, emphasizing the nonridiculous aspects of their tribe), while when ex-members talk about their ex-tribe, they frequently do so non-charitably.

This is similar to what happens when you compare married people's descriptions of their spouses to divorced people's descriptions of their ex-spouses... the descriptions are vastly different, even if the same person is being described.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T20:22:06.394Z · LW(p) · GW(p)

Well, I don't strongly identify as a theist, so it's hard for me to have an opinion here.

That said, if I imagine myself reading a variant version of the sequences (and LW discourse more generally) which are anti-some-group-I-identify-with in the same ways.... for example, if I substitute every reference to the superiority of atheism to theism (or the inadequacy of theism more generally) with a similar reference to the superiority of, say, heterosexuality to homosexuality (or the inadequacy of homosexuality more generally), my emotional response is basically "yeah, fuck that shit."

Perhaps that's simply an indication of my inadequate rationality.

I can confirm that it is indeed annoying, and worse still can act to reduce the persuasiveness of a point (for example, talking about how large groups of people/experts/insert other heuristic here fails with regard to religion.) Interestingly, it's annoying even if I agree with the criticism in question, which would suggest it's probably largely irrational, and certain rationality techniques reduce it, like the habit of ironmanning people's points by, say, replacing "religion" with racism or the education system or some clinically demonstrated bias or whatever.

That's possible. Another possibility is that when tribe members talk about their tribe, they frequently do so charitably (for example, in nonridiculous language, emphasizing the nonridiculous aspects of their tribe), while when ex-members talk about their ex-tribe, they frequently do so non-charitably.

This is similar to what happens when you compare married people's descriptions of their spouses to divorced people's descriptions of their ex-spouses... the descriptions are vastly different, even if the same person is being described.

There's probably a bit of that too, but (in my experience) most atheists believed an oddly ... variant ... version of their faith, whether it's because they misunderstood as a child or simply belonged to a borderline splinter group. Mind you, plenty of theists are the same, just unexamined.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-13T21:05:14.215Z · LW(p) · GW(p)

for example, if I substitute every reference to the superiority of atheism to theism (or the inadequacy of theism more generally) with a similar reference to the superiority of, say, heterosexuality to homosexuality (or the inadequacy of homosexuality more generally), my emotional response is basically "yeah, fuck that shit."

These examples are not at all analogous. Claims about the existence of divine agents - or the accuracy of old textbooks - are epistemological claims about the world and not up to personal preferences. What do I know and how do I know it?

Claims about preferences can by definition not be objectively right or wrong, but only be accurate or inaccurate relative to their frame of reference, to the agent they are ascribed to. Even if that agent were some divine entity. Jesus would like you to do X, but Bob wouldn't.

Or, put differently:

"There is a ball in the box" - Given the same evidence, Clippy and an FAI will come to the same conclusion. Personal theist claims mostly fall in this category ("This book was influenced by being X", "the universe was created such-and-such", "I was absolved from my sins by a god dying for me").

"I prefer a ball in the box over no ball in the box" - Given the same evidence, rational actors do not have to agree, their preferences can be different. Sexual preferences, for example.

The reason that theists are generally regarded as irrational in their theism is that there is no reason to privilege the hypothesis that any particular age-old cultural text somehow accurately describes important aspects of the universe, even if you subscribe to some kind of first mover. Like watching William Craig debates, where he goes from some vague "First Cause" argument all the way to "the Bible is right because of the ?evidence? of a supernatural resurrection". That's a long, long way to skip and gloss over. Arguing for a first mover (no restriction other than "something that started the rest") is to arguing for the Abrahamic god what predicting the decade of your time of death would be to predicting the exact femtosecond of your death.

Such motivated cognition compromises many other aspects of one's reasoning unless it's sufficiently cordoned off, just like an AI that steadfastly insisted that human beings are all made of photons, and needed to somehow warp all its other theories to accommodate that belief.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T21:37:35.727Z · LW(p) · GW(p)

Well, this explains the mystery of why that got downvoted by someone.

for example, if I substitute every reference to the superiority of atheism to theism (or the inadequacy of theism more generally) with a similar reference to the superiority of, say, heterosexuality to homosexuality (or the inadequacy of homosexuality more generally), my emotional response is basically "yeah, fuck that shit."

Firstly, you're replying to an old version of my comment - the section you're replying to is part of a quote which had a formatting error, which is why it forms a complete non-sequitur taken as a reply. I did not write that; I merely replied to it.

These examples are not at all analogous. Claims about the existence of divine agents - or the accuracy of old textbooks - are epistemological claims about the world and not up to personal preferences.

You know, I agree with you, homosexuality isn't a great example there. However, it's trivially easy to ironman as "homosexuality is moral" or some other example involving the rationality skills of the general populace.

Claims about preferences can by definition not be objectively right or wrong, but only be accurate or inaccurate relative to their frame of reference, to the agent they are ascribed to. Even if that agent were some divine entity. Jesus would like you to do X, but Bob wouldn't.

The fact that something is true only relative to a frame of reference does not mean it "can by definition not be objectively right or wrong". For example, if I believe it is correct (by my standards) to fly a plane into a building full of people, I am objectively wrong - this genuinely, verifiably doesn't satisfy my preferences. I may have been persuaded that a Friendly superintelligence has concluded that it is, or that it will cause me to experience subjective bliss (OK, this one is harder to prove outright; we could be in a simulation run by some very strange people. It is, however, irrational to believe it based on the available evidence.)

"There is a ball in the box" - Given the same evidence, Clippy and an FAI will come to the same conclusion. Personal theist claims mostly fall in this category ("This book was influenced by being X", "the universe was created such-and-such", "I was absolved from my sins by a god dying for me").

"I prefer a ball in the box over no ball in the box" - Given the same evidence, rational actors do not have to agree, their preferences can be different.

Ayup.

Sexual preferences, for example.

As I said earlier, it's trivially easy to ironman that reference to mean one of the political positions regarding the sexual preference. If he had said "abortion", would you tell him that a medical procedure is a completely different thing to an empirical claim?

The reason that theists are generally regarded as irrational in their theism is that there is no reason to privilege the hypothesis that any particular age-old cultural text somehow accurately describes important aspects of the universe, even if you subscribe to some kind of first mover.

Forgive me if I disagree with that particular empirical claim about how our community thinks.

Like watching William Craig debates, where he goes from some vague "First Cause" argument all the way to "the Bible is right because of the ?evidence? of a supernatural resurrection".

"The Bible is right because of the evidence of a supernatural resurrection" is an argument in itself, not something one derives from the First Cause. However, the prior of supernatural resurrections might be raised by a particular solution to the First Cause problem, I suppose, requiring that argument to be made first.

Arguing for a first mover (no restriction other than "something that started the rest") is to arguing for the Abrahamic god what predicting the decade of your time of death would be to predicting the exact femtosecond of your death.

I guess I can follow that analogy - you require more evidence to postulate a specific First Mover than the existence of a generalized First Cause - but I have no idea how it bears on your misreading of my comment.

Such motivated cognition compromises many other aspects of one's reasoning unless it's sufficiently cordoned off, just like an AI that steadfastly insisted that human beings are all made of photons, and needed to somehow warp all its other theories to accommodate that belief.

Source? I find most rationalists encounter more irrational beliefs being protected off from rational ones than the inverse.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-14T08:22:03.766Z · LW(p) · GW(p)

"homosexuality is moral"

How is that example any different, how is it not also a matter of your individual moral preferences? Again, you can imagine a society or species of rational agents that regard homosexuality as moral, just as you can imagine one that regards it as immoral.

The fact that something is true only relative to a frame of reference does not mean it "can by definition not be objectively right or wrong".

By objectively right or wrong I meant right or wrong regardless of the frame of reference (as it's usually interpreted as far as I know). Of course you can be mistaken about your own preferences, and other agents can be mistaken when describing your preferences.

"Agent A has preference B" can be correct or incorrect / right or wrong / accurate or inaccurate, but "Preference B is moral, period, for all agents" would be a self-contradictory nonsense statement.

If he had said "abortion", would you tell him that a medical procedure is a completely different thing to an empirical claim?

Of course "I think abortion is moral" can widely differ from rational agent to rational agent. Clippy talking to AbortAI (the abortion maximizing AI) could easily agree about what constitutes an abortion, or how that procedure is usually done. Yet they wouldn't need to agree about the morality each of them ascribes to that procedure. They would need to agree on how others ("this human in 21th century America") morally judge abortion, but they could still judge it differently. It is like "I prefer a ball in the box over no ball in the box", not like "There is a ball in the box".

Forgive me if I disagree with that particular empirical claim about how our community thinks.

I forgive you, though I won't die for your sins.

"The Bible is right because of the evidence of a supernatural resurrection" is an argument in itself.

It is ... an argument ... strictly formally speaking. What else could explain some eyewitness testimony of an empty grave, if not divine intervention?

However, the prior of supernatural resurrections might be raised by a particular solution to the First Cause problem.

Only when some nonsense about "that cause must be a non-physical mind" (without defining what a non-physical mind is, and reaching that conclusion by saying "either numbers or a mind could be first causes, and it can't be numbers") is dragged in; even then the effect on the prior of some particular holy text on some planet in some galaxy in some galactic cluster would be negligible.

but I have no idea how it bears on your misreading of my comment.

"I can confirm that it is indeed annoying", although I of course admit that this is branching out on a tangent - but why shouldn't we, it's a good place for branching out without having to start a new topic, or PMs.

Not everything I write needs to be controversial between us, it can be related to a comment I respond to, and you can agree or disagree, engage or disengage at your leisure.

I find most rationalists encounter more irrational beliefs being protected off from rational ones than the inverse.

What do you mean, protected off in the sense of compartmentalized / cordoned off?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-14T16:43:38.143Z · LW(p) · GW(p)

How is that example any different, how is it not also a matter of your individual moral preferences? Again, you can imagine a society or species of rational agents that regard homosexuality as moral, just as you can imagine one that regards it as immoral.

We seem to be using "moral" differently. You're using it to refer to any preference, whereas I'm using it to refer to human ethical preferences specifically. I find this is more useful, for the reasons EY puts forth in the sequences.

By objectively right or wrong I meant right or wrong regardless of the frame of reference (as it's usually interpreted as far as I know). Of course you can be mistaken about your own preferences, and other agents can be mistaken when describing your preferences.

If you can be mistaken - objectively mistaken - then you are in a state known as "objectively wrong", yes?

Of course "I think abortion is moral" can widely differ from rational agent to rational agent. Clippy talking to AbortAI (the abortion maximizing AI) could easily agree about what constitutes an abortion, or how that procedure is usually done. Yet they wouldn't need to agree about the morality each of them ascribes to that procedure. They would need to agree on how others ("this human in 21th century America") morally judge abortion, but they could still judge it differently. It is like "I prefer a ball in the box over no ball in the box", not like "There is a ball in the box".

Again, I think we're arguing over terminology rather than meaning here.

I forgive you, though I won't die for your sins.

Zing!

It is ... an argument ... strictly formally speaking. What else could explain some eyewitness testimony of an empty grave, if not divine intervention?

Because that's the only eyewitness testimony contained in the Bible.

Only when some nonsense about "that cause must be a non-physical mind" (without defining what a non-physical mind is, and reaching that conclusion by saying "either numbers or a mind could be first causes, and it can't be numbers") is dragged in; even then the effect on the prior of some particular holy text on some planet in some galaxy in some galactic cluster would be negligible.

Well, since neither of us actually has a solution to the First Cause argument (unless you're holding out on me) that's impossible to say. However, yes, if you believed that the solution involved extra-universal superintelligence, it would raise the prior of someone claiming to be such a superintelligence and exhibiting apparently supernatural power being correct in these claims.

"I can confirm that it is indeed annoying", although I of course admit that this is branching out on a tangent - but why shouldn't we, it's a good place for branching out without having to start a new topic, or PMs.

What does the relative strength of evidence required for various "godlike" hypotheses have to do with the annoyance of seeing a group you identify with held up as an example of something undesirable?

Not everything I write needs to be controversial between us, it can be related to a comment I respond to, and you can agree or disagree, engage or disengage at your leisure.

Uh ... sure ... I don't exactly reply to most comments you make.

What do you mean, protected off in the sense of compartmentalized / cordoned off?

Yup.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T09:56:49.948Z · LW(p) · GW(p)

You're using it to refer to any preference, whereas I'm using it to refer to human ethical preferences specifically.

Which humans? Medieval peasants? Martyrs? Witch-torturers? Mercenaries? Chinese? US-Americans? If so, which party, which age-group?

If you can be mistaken - objectively mistaken - then you are in a state known as "objectively wrong", yes?

The term is overloaded. I was referring to ideas such as e.g. moral universalism. An alien society - or really just different human societies - will have their own ethical preferences, and while they or you can be wrong in describing those preferences, they cannot be wrong in having them, other than their preferences being incompatible with someone else's preferences. There is no universal reference frame; even if a god existed, his preferences would just amount to an argument from authority.

However, yes, if you believed that the solution involved extra-universal superintelligence, it would raise the prior of someone claiming to be such a superintelligence and exhibiting apparently supernatural power being correct in these claims.

Negligibly so, especially if it's non-verifiable second-hand stories passed down through the ages, and when the whole system is ostentatiously based on non-falsifiability in an empirical sense.

You realize that your fellow Christians from a few centuries back would burn you for heresy if you told them that many of the supernatural magic tricks were just meant as metaphors. Copernicus didn't doubt Jesus Christ was a god-alien-human. They may not even have considered you to be a Christian. Never mind that; the current iteration has gotten it right, hasn't it? Your version, I mean.

Because that's the only eyewitness testimony contained in the Bible.

There are three little pigs who saw the big bad wolf blowing away their houses, that's three eyewitnesses right there.

Do Adam and Eve count as eyewitnesses for the Garden of Eden?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T10:31:24.676Z · LW(p) · GW(p)

The term is overloaded. I was referring to ideas such as e.g. moral universalism. An alien society - or really just different human societies - will have their own ethical preferences, and while they or you can be wrong in describing those preferences, they cannot be wrong in having them, other than their preferences being incompatible with someone else's preferences. There is no universal reference frame; even if a god existed, his preferences would just amount to an argument from authority.

OK. So moral realism is false, and moral relativism is true, and that's provable in a paragraph. Hmmm. Aliens and other societies might have all sorts of values, but that does not necessarily mean they have all sorts of ethical values. "Murder is good" might not be a coherent ethical principle, any more than "2+2=5" is a coherent mathematical one. The say-so of authorities, or Authorities, is not the only possible source of objectivity.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T10:53:14.613Z · LW(p) · GW(p)

So if you constructed an artificial agent, you would somehow be stopped from encoding certain actions and/or goals as desirable? Or that agent would just be wrong when describing his own preferences when he then tells you "killing is good"?

Certain headwear must be worn by pious women. Light switches must not be used on certain days by god-abiding men. Infidels must be killed. All of those are ethical from even some human's frame of reference. Seems pretty variable.

Replies from: PrawnOfFate, MugaSofer
comment by PrawnOfFate · 2013-04-18T10:56:58.141Z · LW(p) · GW(p)

Or that agent would just be wrong when describing his own preferences when he then tells you "killing is good"?

It would be correctly describing its preferences, and its preferences would not be ethically correct. You could construct an AI that firmly believed 2+2=5. And it would be wrong. As before, you are glibly assuming that the word "ethical" does no work, and can be dropped from the phrase "ethical value".

Certain headwear must be worn by pious women. Light switches must not be used on certain days by god-abiding men. Infidels must be killed. All of those are ethical from even some human's frame of reference.

All of those are believed ethical. It's very shallow to argue for relativism by ignoring the distinction between believed-to-be-true and true.

Replies from: Kawoomba, Estarlio
comment by Kawoomba · 2013-04-18T11:12:24.557Z · LW(p) · GW(p)

Imagine a mirror world, inhabited by our "evil" (from our perspective) twins. Now they all go around being all unethical, yet believing themselves to act ethically. They have the same model of physics, the same technological capabilities, they'd just be mistaken about being ethical.

Could it be that it turns out that we're that unethical mirror world, and our supposedly evil twins do in fact have it right? Do you think to know at least some of what's universally ethical, or could you unknowingly be the evil twin believing to be ethical?

Or could both us and our mirror world be unethical, and really only a small cluster of sentient algae somewhere in the UDFy-38135539 galaxy has by chance gotten it right, and is acting ethically?

All advanced societies will agree about 2+2!=5, because that's falsifiable. Who gets to set the axioms and rules for ethicality? Us, the mirror world, the algae, god?

Replies from: ArisKatsaris, PrawnOfFate, ArisKatsaris
comment by ArisKatsaris · 2013-04-18T13:03:27.231Z · LW(p) · GW(p)

Who gets to set the axioms and rules for ethicality?

Axioms are what we use to logically pinpoint what it is we are talking about. If our world and theirs has different axioms for "ethicality", then they simply don't have what we mean by "ethicality" -- and we don't have what they mean by the word "ethicality".

Our two worlds would then not actually disagree about ethics the concept; they would instead disagree about "ethics" the word, much like 'tier' means one thing in English and another thing in German.

Replies from: Creutzer, Kawoomba, MugaSofer, PrawnOfFate
comment by Creutzer · 2013-04-19T12:35:00.869Z · LW(p) · GW(p)

Unfortunately, words of natural language have the annoying property that it's often very hard to tell if people are disagreeing about the extension or the meaning. It's also hard to tell what disagreement about the meaning of a word actually is.

Our two worlds would then not actually disagree about ethics the concept; they would instead disagree about "ethics" the word, much like 'tier' means one thing in English and another thing in German.

The analogy is flawed. German and English speakers don't disagree about the word (conceived as a string of phonemes; otherwise "tier" and "Tier" are not identical), and it's not at all clear that disagreement about the meaning of words is the same thing as speaking two different languages. It's certainly phenomenologically pretty different.

I do agree that reducing it to speaking different languages is one way to dissolve disagreement about meaning. But I'm not convinced that this is the right approach. Some words are in acute danger of being dissolved with the question in that it will turn out that almost everyone has their own meaning for the word, and everybody is talking past each other. It also leaves you with a need to explain where this persistent illusion that people are disagreeing when they're in fact just talking past each other (which persists even when you explain to them that they're just speaking two different languages; they'll often say no, they're not, they're speaking the same language but the other person is using the word wrongly) comes from.

Of course, all of this is connected to the problem that nobody seems to know what kind of thing a meaning is.

comment by Kawoomba · 2013-04-18T13:38:01.222Z · LW(p) · GW(p)

So there is an objective measure for what's "right" and "wrong" regardless of the frame of reference, there is such a thing as correct, individual-independent ethics, but other people may just decide not to give a hoot, using some other definition of ethics?

Well, let's define a series of ethics, from ethics1 to ethicsn. Let's call your system of ethics which contains a "correct" conclusion such as "murder is WRONG", say, ethics211412312312.

Why should anyone care about ethics211412312312?

(If you don't mind, let's consolidate this into the other sub-thread we have going.)

Replies from: PrawnOfFate, nshepperd
comment by PrawnOfFate · 2013-04-18T14:22:25.376Z · LW(p) · GW(p)

but other people may just decide not to give a hoot, using some other definition of ethics

If what they have can't do what ethics is supposed to do, why call it ethics?

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T14:23:16.707Z · LW(p) · GW(p)

What is ethics supposed to do?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T14:38:31.547Z · LW(p) · GW(p)

Reconcile one's preferences with those of others.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T14:47:37.430Z · LW(p) · GW(p)

That's one specific goal that you ascribe to your ethics-subroutine; the definition entails no such ready answer.

Ethics:

"Moral principles that govern a person's or group's behavior"

"The moral correctness of specified conduct"

Moral:

"of or relating to principles of right and wrong in behavior"

What about Ferengi ethics?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T14:57:46.395Z · LW(p) · GW(p)

I don't know what you mean. Your dictionary definitions are typically useless for philosophical purposes.

ETA

What about Ferengi ethics?

Well...what?

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T14:59:08.104Z · LW(p) · GW(p)

You are saying "the (true, objective, actual) purpose of ethics is to reconcile one's preferences with those of others".

Where do you take that from, and what makes it right?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T15:03:01.559Z · LW(p) · GW(p)

I got it from thinking and reading. It might not be right. It's a philosophical claim. Feel free to counterargue.

comment by nshepperd · 2013-04-18T13:47:45.024Z · LW(p) · GW(p)

Why should anyone care about ethics211412312312?

"Should" is an ethical word. To use your (rather misleading) naming convention, it refers to a component of ethics211412312312.

Of course one should not confuse this with "would". There's no reason to expect an arbitrary mind to be compelled by ethics.

Replies from: PrawnOfFate, Kawoomba
comment by PrawnOfFate · 2013-04-18T14:23:22.619Z · LW(p) · GW(p)

"Should" is an ethical word

No, it's much wider than that. There are rational and instrumental shoulds.

ETA:

There's no reason to expect an arbitrary mind to be compelled by ethics.

Depends how arbitrary. Many philosophers think a rational mind could be compelled by ethical arguments...that ethical-should can be built out of rational-should.

comment by Kawoomba · 2013-04-18T14:02:25.507Z · LW(p) · GW(p)

There's no reason to expect an arbitrary mind to be compelled by ethics.

As one should not expect an arbitrary mind with its own notions of "right" or "wrong" to yield to any human's proselytizing about objectively correct ethics, "murder is bad", and trying to provide a "correct" solution for that arbitrary mind to adopt.

The ethics as defined by China, or an arbitrary mind, have as much claim to be correct as ours. There is no axiom-free metaethical framework which would provide the "should" in "you should choose ethics211412312312"; that was my point. Calling some church's (or other group's) ethical doctrine objectively correct for all minds doesn't make a bit of difference, and doesn't go beyond "my ethics are right! no, mine are!"

Replies from: PrawnOfFate, ArisKatsaris
comment by PrawnOfFate · 2013-04-18T14:29:45.518Z · LW(p) · GW(p)

As one should not expect an arbitrary mind with its own notions of "right" or "wrong" to yield to any human's proselytizing about objectively correct ethics, "murder is bad", and trying to provide a "correct" solution for that arbitrary mind to adopt.

But humans can proselytise each other, despite their different notions of right and wrong. You seem to be assuming that morally-right and -wrong are fundamentals. But if they are outcomes of reasoning and facts, then they can be changed by the presentation of better reasoning and previously unknown facts. As happens when one person morally exhorts another. I think you need to assume that your arbitrary mind has nothing in common with a human one, not even rationality.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T14:34:20.542Z · LW(p) · GW(p)

But if they are outcomes of reasoning and facts, then they can be changed by the presentation of better reasoning (...) I think you need to assume that your arbitrary mind has nothing in common with a human one, not even rationality

Does that mean that, in your opinion, if we constructed an AI mind that uses a rational reasoning mechanism (such as Bayes), we wouldn't need to worry, since we could persuade it to act in a morally correct way?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T14:41:36.219Z · LW(p) · GW(p)

I'm not sure if that is necessarily true, or even highly likely. But it is a possibility which is extensively discussed in non-LW philosophy, and which is standardly ignored or bypassed on LW for some reason. As per my original comment. Is moral relativism really just obviously true?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T12:30:13.695Z · LW(p) · GW(p)

Depends on how you define "moral relativism". Kawoomba thinks a particularly strong version is obviously true, but I think the LW consensus is that a weak version is.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T12:47:50.820Z · LW(p) · GW(p)

I don't think there is a consensus, just a belief in a consensus. EY seems unable or unwilling to clarify his position even when asked directly.

comment by ArisKatsaris · 2013-04-18T14:18:19.352Z · LW(p) · GW(p)

The ethics as defined by China, or an arbitrary mind, have as much claim to be correct as ours.

If someone defines ethics differently, then WHAT are the common characteristics that make you call them both "ethics"? You surely don't mean that they just happened to use the same sound or the same letters, and that they may mean basketball instead? So there must already exist some common elements you are thinking of that make both versions logically categorizable as "ethics".

What are those common elements?

What would it mean for an alien to e.g. define "tetration" differently than we do? Either they define it in the same way, or they haven't defined it at all. To define it differently means that they're not describing what we mean by tetration at all.

comment by MugaSofer · 2013-04-19T12:24:09.095Z · LW(p) · GW(p)

Cannot upvote enough.

Also, pretty sure I've made this exact argument to Kawoomba before, but I didn't phrase it as well, so good luck!

comment by PrawnOfFate · 2013-04-18T14:21:11.834Z · LW(p) · GW(p)

Axioms are what we use to logically pinpoint what it is we are talking about.

Axioms have a lot to do with truth, and little to do with meaning.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-04-18T14:35:23.497Z · LW(p) · GW(p)

Axioms have a lot to do with truth, and little to do with meaning.

Would that make the Euclidean axioms just "false" according to you, instead of meaningfully defining the concept of a Euclidean space that turned out not to correspond completely to reality, but is still both quite useful and certainly meaningful as a concept?

I first read the concept of axioms as a means of logical pinpointing in this post, and it struck me as a brilliant insight which may dissolve a lot of confusions.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T14:36:34.696Z · LW(p) · GW(p)

Corresponding to reality is physical truth, not mathematical truth.

comment by PrawnOfFate · 2013-04-18T14:19:34.620Z · LW(p) · GW(p)

Could it be that it turns out that we're that unethical mirror world, and our supposedly evil twins do in fact have it right?

If relativism is true, yes. If realism is true, no. So?

Or could both us and our mirror world be unethical, and really only a small cluster of sentient algae somewhere in the UDFy-38135539 galaxy has by chance gotten it right, and is acting ethically?

If realism is true, they could have got it right by chance, although whoever is right is more likely to be right by approaching it systematically.

All advanced societies will agree about 2+2!=5, because that's falsifiable.

Inasmuch as it is disprovable from non-arbitrary axioms. You are assuming that maths has non-arbitrary axioms, but morality doesn't. Is that reasonable?

Who gets to set the axioms and rules for ethicality? Us, the mirror world, the algae, god?

Axioms aren't true or false because of who is "setting" them. Maths is supposed to be able to do certain things: it is supposed to allow you to prove theorems, it is supposed to be free from contradiction, and so on. That considerably constrains the choice of axioms. Non-Euthyphric moral realism works the same way.

comment by ArisKatsaris · 2013-04-18T12:43:52.161Z · LW(p) · GW(p)

Imagine a mirror world, inhabited by our "evil" (from our perspective) twins. Now they all go around being all unethical, yet believing themselves to act ethically. They have the same model of physics, the same technological capabilities, they'd just be mistaken about being ethical.

Okay, let's try to figure out how that would work. A world where preferences are the same (e.g. everyone wants to live as long as possible, and wants other people to live as well), but the ethics are reversed (saving lives is considered morally wrong, murdering other people at random is morally right).

Don't you see an obvious asymmetry here between their world and ours? Their so-called ethics about murder (murder=good) would end up harming their preferences, in a way that our ethics about murder (murder=bad) does not?

Replies from: Kawoomba, TheOtherDave
comment by Kawoomba · 2013-04-18T13:33:44.605Z · LW(p) · GW(p)

So is it a component of the "correct" ethical preferences that they satisfy the preferences of others? It seems this way, since you use this to hold "our" ethics about murder over those of the mirror world (in actuality there'd be vast swaths of peaceful coexistence in the mirror world, e.g. in Rwanda).

But hold on, our ethical preferences aren't designed to maximize other sapients' preferences. Wouldn't it be more ethical still to not want anything for yourself, or to be happy to just stare at the sea floor, and orient those around you to look at the sea floor as well? Seems like those algae win, after all! God's chosen seaweed!

What about when a quadrillion bloodthirsty but intelligent killer-algae (someone sent them a Bible, turned them violent) invaded us? Wouldn't it be more ethical for us to roll over, since that satisfies total preferences more effectively?

I see the asymmetry. But I don't see the connection to "there is a correct morality for all sentients". On the contrary, a more aggressive civilization might even out-colonize the peaceniks, and so overall satisfy the preferences of even more slaves, I mean, esteemed citizens.

Replies from: PrawnOfFate, PrawnOfFate, PrawnOfFate
comment by PrawnOfFate · 2013-04-18T14:35:05.818Z · LW(p) · GW(p)

On the contrary, a more aggressive civilization might even out-colonize the peaceniks, and so overall satisfy the preferences of even more slaves, I mean, esteemed citizens.

It clearly wouldn't satisfy their preference not to be slaves.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T14:44:21.572Z · LW(p) · GW(p)

It clearly wouldn't satisfy their preference not to be slaves.

Slip of the tongue; you must have meant esteemed citizens.

You're concerned with the average preference satisfaction of other agents, then? Why not total average preference satisfaction, which you just rejected? Which is ethical, and who decides? Where are the axioms?

We're probably talking about different ethics, since I don't even know your axioms, or priorities. Something about trying to satisfy the preferences of others, or at least taking that into account. What does that mean? To what degree? If one says "to this degree", and another says "to that degree", who's ethical? Neither, both? Who decides? There's no math that tells you to what degree satisfying others is ethical.

Is there an ethical component to flushing my toilet? Killing my goldfish? All my actions impact the world (definition of action), yet some are ethical (or unethical), whereas some are ethically undefined? How does that work?

Can I find it all written in an ancient scroll, by chance?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T14:52:28.031Z · LW(p) · GW(p)

Slip of the tongue; you must have meant esteemed citizens.

I thought your point was that they were really slaves.

You're concerned with the average preference satisfaction of other agents, then? Why not total average preference satisfaction, which you just rejected? Which is ethical, and who decides? Where are the axioms?

There are a lot of issues in establishing the right theory of moral realism, and that doesn't mean relativism is Just True. I've done as much as I need.

We're probably talking about different ethics,

We are talking about different metaethics.

Who decides?

We don't have the One True theory of physics either. That doesn't disprove physical realism.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T14:55:01.844Z · LW(p) · GW(p)

I thought your point was that they were really slaves.

Just lightening the tone.

There are a lot of issues in establishing the right theory of moral realism, and that doesn't mean relativism is Just True. I've done as much as I need.

What do you mean, I've done as much as I need?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T15:04:46.448Z · LW(p) · GW(p)

I need to show that realism isn't obviously false, and can't be dismissed in a paragraph. I don't need to show it is necessarily true, or put forward a bulletproof object-level ethics.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T15:11:50.983Z · LW(p) · GW(p)

A paragraph? What about a single sentence (not exactly mine, though):

Moral realism postulates the existence of a kind of "moral fact" which is nonmaterial, applies to humans, aliens and intelligent algae alike, and does not appear to be accessible to the scientific method.

I can probably get it down further.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T15:13:30.753Z · LW(p) · GW(p)

Moral realism postulates the existence of a kind of "moral fact" which is nonmaterial, applies to humans, aliens and intelligent algae alike, and does not appear to be accessible to the scientific method.

What has that got to do with the approach I have been proposing here?

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T15:18:15.912Z · LW(p) · GW(p)

The point is not whether you like your own ethics, or how you go about your life. It's whether your particular ethical system - or any particular ethical system - can be said to be not only right from your perspective, but right for any intelligent agent - aliens, humans, AI, whatever.

As such, if someone told you "nice ethics, would be a shame if anything were to happen to it", you'd need to provide some potential - conceivable - basis on which the general correctness could be argued. I was under the impression that you referred to moral realism, which is susceptible to the grandparent comment's criticism.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T15:21:58.543Z · LW(p) · GW(p)

I have never argued from the "queer object" notion of moral realism--from immaterial moral thingies.

It's whether your particular ethical system - or any particular ethical system - can be said to be not only right from your perspective, but right for any intelligent agent - aliens, humans, AI, whatever.

Yep. And my argument that it can remains unaddressed.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T15:32:02.857Z · LW(p) · GW(p)

"There is a non-zero chance of one correct ethical system existing, as long as that's there, I'm free to believe it", or what?

No sir; if you insist there is any basis whatsoever on which to stake your "one ethics to rule them all" claim, you argue it's more likely than not. I do not stake my belief on absolute certainties; that's counter to all the tenets of rationality, Bayes, updating on evidence et al.

My argument is clear. Different agents deem different courses of action to be good or bad. There is a basis (such as Aumann's) for rational agents to converge on isomorphic descriptions of the world. There is no known, or readily conceivable, basis for rational agents to all converge on the same course of action.

On the contrary, that would entail that e.g. world-eating AIs that are also smarter than any humans, individually or collectively, cannot possibly exist. There are no laws of physics preventing their existence - or construction. So we should presume that they can exist. If their rational capability is greater than our own, we should try to adopt world-eating, since they'd have the better claim (being smarter and all) on having the correct ethics, no?

Replies from: nshepperd, TheOtherDave, MugaSofer, PrawnOfFate
comment by nshepperd · 2013-04-18T23:34:43.400Z · LW(p) · GW(p)

I feel like I should point out here that moral relativism and universally compelling morality are not the only options. "It's morally wrong for Bob to do X" doesn't require that Bob cares about the fact that it's wrong. That seems to be getting ignored in this discussion.

comment by TheOtherDave · 2013-04-18T15:54:02.471Z · LW(p) · GW(p)

Completely tangential point...

There is no known, or readily conceivable, basis for rational agents to all converge on the same course of action.

Hm. I don't think you quite mean that as stated?

I mean, I agree that a basis for rational agents to converge on values is difficult to imagine.

But it's certainly possible for two agents with different values to converge on a course of action. E.g., "I want everything to be red, am OK with things being purple, and hate all other colors; you want everything to be blue, are OK with things being purple, and hate all other colors." We have different values, but we can still agree on pragmatic grounds that we should paint everything purple.
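As a toy illustration of that point (a minimal sketch with made-up numbers and hypothetical names, not anyone's actual decision theory): two agents with different utility functions can still single out the same action when it's the only one both prefer to the status quo.

```python
# Two agents with different values converging on one action.
ACTIONS = ["paint_red", "paint_blue", "paint_purple"]

# Agent A loves red, tolerates purple, hates blue.
utility_a = {"paint_red": 2, "paint_purple": 1, "paint_blue": -2}
# Agent B loves blue, tolerates purple, hates red.
utility_b = {"paint_red": -2, "paint_purple": 1, "paint_blue": 2}

def jointly_acceptable(actions, u1, u2):
    # Keep actions both agents prefer to doing nothing (utility 0),
    # then pick the one maximizing the worse-off agent's utility.
    candidates = [a for a in actions if u1[a] > 0 and u2[a] > 0]
    return max(candidates, key=lambda a: min(u1[a], u2[a])) if candidates else None

print(jointly_acceptable(ACTIONS, utility_a, utility_b))  # -> paint_purple
```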

Replies from: Kawoomba, nshepperd, PrawnOfFate
comment by Kawoomba · 2013-04-18T16:11:36.886Z · LW(p) · GW(p)

Hence the "all". Certainly agents can happen to have areas in which their goals are compatible, and choose to exert their efforts e.g. synergistically in such win-win situations of mutual benefit.

The same does not hold true for agents whose primary goals are strictly antagonistic. "I maximize the number of paperclips!" - "I minimize the number of paperclips!" will have ... trouble ... getting along, and mutually exchanging treatises about their respective ethics wouldn't solve the impasse.

(A pair of "I make paperclips!" - "I destroy paperclips!" may actually enter a hugely beneficial relationship.)

Didn't think there was anyone - apart from the PawnOfFaith and I - still listening in. :)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-18T16:20:43.662Z · LW(p) · GW(p)

Yup, that's fair.
And I read the Recent Comments list every once in a while.

comment by nshepperd · 2013-04-18T23:28:15.183Z · LW(p) · GW(p)

But of course this only works if the pair of agents both dislike war/murder even more than they like their colors, and/or if neither of them is powerful enough to murder the other one and thus paint everything their own colors.

comment by PrawnOfFate · 2013-04-18T16:09:56.276Z · LW(p) · GW(p)

I mean, I agree that a basis for rational agents to converge on values is difficult to imagine

Rational agents all need to value rationality.

Replies from: None, TheOtherDave, MugaSofer
comment by [deleted] · 2013-04-18T16:46:43.636Z · LW(p) · GW(p)

Not necessarily. An agent that values X and doesn't have a stupid prior will invariably strive towards finding the best way to accomplish X. If X requires information about an outside world, it will build epistemology and sensors; if it requires planning, it will build manipulators and a way of evaluating hypotheticals for X-ness.

All for want of X. It will be rational because being rational helps it attain X.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T17:40:05.532Z · LW(p) · GW(p)

Good epistemological rationality requires avoidance of bias, contradiction, arbitrariness, etc. That is just what my rationality-based ethics needs.

Replies from: None
comment by [deleted] · 2013-04-26T16:52:31.944Z · LW(p) · GW(p)

I will defer to the problem of

Omega offers you two boxes; each box contains a statement, and upon choosing a box you will instantly believe that statement. One contains something true which you currently believe to be false, tailored to cause maximum disutility in your preferred ethical system; the other contains something false which you currently believe to be true, tailored to cause maximum utility.

Truth with negative consequences, or Falsehood with positive ones? If you value nothing over truth, you will realise something terrible upon opening the first box, something that will maybe make you kill your family. If you value something other than truth, you will end up believing that the programming code you are writing will make pie, when it will in fact make an FAI.

comment by TheOtherDave · 2013-04-18T16:30:14.854Z · LW(p) · GW(p)

Do you mean this as a general principle, along the lines of "If I am constructed so as to operate a particular way, it follows that I value operating that way"? Or as something specific about rationality?

If the former, I disagree, but if the latter I'm interested in what you have in mind.

comment by MugaSofer · 2013-04-19T12:47:32.968Z · LW(p) · GW(p)

Only instrumentally.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T12:48:41.161Z · LW(p) · GW(p)

Epistemic rationality has instrumental value. That's where the trouble starts.

comment by MugaSofer · 2013-04-19T12:37:08.377Z · LW(p) · GW(p)

I think you've missed the point somewhat. No-one has asserted such a One True Ethics exists, as far as I can see. Prawn has argued that the possibility of one is a serious position, and one that cannot be dismissed out of hand - but not necessarily one they endorse.

I disagree, for the record.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-19T16:08:09.832Z · LW(p) · GW(p)

Prawn has argued that the possibility of one is a serious position

No one should care about "possibilities"; for a Bayesian nothing is zero. You could say self-refuting / self-contradictory beliefs have an actual zero percent probability, but not even that is actually true: you need to account for the fact that you can't ever be wholly (to an infinite number of 9s in your prior of 0.9...) certain about the self-contradiction actually being one. There could be a world with a demon misleading you, e.g.

That being said, the idea of some One True Ethics is as self-refuting as it gets: there is no view from nowhere, and whatever axioms those True Ethics are based upon would themselves be up for debate.

The discussion of whether a circle can also be a square, possibly, can be answered with "it's a possibility, since I may be mistaken about the actual definitions", or it can be answered with "it's not a possibility, there is no world in which I am wrong about the definition".

But with neither answer would "it is a possibility, ergo I believe in it" follow. The fool who says in his heart ... and all that.

So if I said "I may be wrong about it being self-refuting; it may be a possibility", I could still refute it within one sentence. Same as with the square circle.

Replies from: PrawnOfFate, MugaSofer
comment by PrawnOfFate · 2013-04-19T17:55:53.869Z · LW(p) · GW(p)

No one should care about "possibilities"; for a Bayesian, nothing is zero. You could say self-refuting / self-contradictory beliefs have an actual zero percent probability, but not even that is actually true: you need to account for the fact that you can't ever be wholly (to an infinite number of 9s in your prior of 0.9...) certain about the self-contradiction actually being one. There could, for example, be a world with a demon misleading you.

That being said, the idea of some One True Ethics is as self-refuting as it gets: there is no view from nowhere,

What is a "view"? Why is it needed for objective ethics? Why isn't it a Universal Solvent? Is there no objective basis to mathematics?

and whatever axioms those True Ethics are based upon would themselves be up for debate.

So its probability would be less than 1.0. That doesn't mean its probability is barely above 0.0.

The question of whether a circle could possibly also be a square can be answered with "it's a possibility, since I may be mistaken about the actual definitions",

But the argument you have given does not depend on evident self-contradiction. It depends on an unspecified entity called a "view".

But with neither answer would "it is a possibility, ergo I believe in it" follow.

So? For the fourth time, I was only saying that moral realism isn't obviously false.

The fool who says in his heart ... and all that.

comment by MugaSofer · 2013-04-19T17:15:14.819Z · LW(p) · GW(p)

Oh, come on. He clearly meant a non-negligible probability. Be serious.

And you know, while I don't believe in universally convincing arguments - obviously - there are some arguments which are convincing to any sufficiently intelligent agent, under the "power to steer the future" definition. I can't see how anything I would call morality might be such an argument, but they do exist.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-19T17:24:37.014Z · LW(p) · GW(p)

Well then, a universally correct solution based on axioms which can be chosen by the agents is a contradiction in and of itself. Again, there is no view from nowhere. For example, you choose the view as that of "humankind", which I think isn't well defined, but at least it's closer to coherence than "all existing (edit:) rational agents". If the PawnOfFaith meant non-negligible versus just "possibility", the first two sentences of this comment serve as sufficient refutation.

Replies from: private_messaging, MugaSofer, PrawnOfFate
comment by private_messaging · 2013-04-19T17:31:47.046Z · LW(p) · GW(p)

Look. The ethics mankind predominantly has do exist in the real world around you. Alternate ethics that work at all for a technological society, blah blah blah: we don't know of any; we just speculate that they may exist. edit: worse than that, we speculate in this fuzzy manner where it's not even specified how they may exist. Different ethics of aliens that evolved on different habitable planets? There's no particular reason to expect that there won't be one that is by far the most probable. That one would be implied by the laws of physics themselves, but given multiple realizability, it may even be largely independent of the underlying laws of physics (evolution doesn't care if it's quarks on the bottom or cells in a cellular automaton or what), in which case it's rather close to being on par with mathematics.

Replies from: Kawoomba, MugaSofer
comment by Kawoomba · 2013-04-19T17:49:35.491Z · LW(p) · GW(p)

Even now, ethics in different parts of the world, and even between political parties, are different. You should know that better than most, having lived in two systems.

If it turns out that most space-faring civilizations have similar ethics, that would be good for us. But even then there would be a difference between "most widespread code of ethics" and "objectively correct code of ethics for any agent anywhere". Most common != correct.

Replies from: private_messaging
comment by private_messaging · 2013-04-19T18:18:43.992Z · LW(p) · GW(p)

Even now, ethics in different parts of the world, and even between political parties, are different. You should know that better than most, having lived in two systems.

There's a ridiculous amount of similarity on anything major, though. If we pick the ethics of the first man on the moon, or the first man to orbit the Earth, they're pretty much the same.

If it turns out that most space-faring civilizations have similar ethics, that would be good for us. But even then there would be a difference between "most widespread code of ethics" and "objectively correct code of ethics for any agent anywhere". Most common != correct.

Yes, and the most common math is not guaranteed to be correct (not even in the sense of not being self-contradictory). Yet that's no argument in favour of a mathematical equivalent of moral relativism. (Which, if such a silly thing existed, would look something like: "2*2=4 is a social convention! It could have been 5!")

edit: also, a crossover from the other thread: It's obvious that nukes are an ethical filter, i.e. some ethics are far better at living through that than others. Then there will be biotech and other actual hazards, and boys crying wolf for candy (with and without awareness of why), and so on.

comment by MugaSofer · 2013-04-19T19:14:44.142Z · LW(p) · GW(p)

Look. The ethics mankind predominantly has do exist in the real world around you.

Actually, I understand Kawoomba believes humanity has mutually contradictory ethics. He has stated that he would cheerfully sacrifice the human race - "it would make as much difference if it were an ice cream" were his words, as I recall - if it would guarantee the safety of the things he values.

Replies from: private_messaging
comment by private_messaging · 2013-04-19T19:26:17.299Z · LW(p) · GW(p)

Well, that's rather odd, because I do value the human race and so do most people. Ethics is a social process; most "possible" ethics, as a whole, would have left us unable to have this conversation (no computers) or altogether dead.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T23:35:11.651Z · LW(p) · GW(p)

Well, that's rather odd, because I do value the human race and so do most people.

That was pretty much everyone's reaction.

Ethics is a social process; most "possible" ethics, as a whole, would have left us unable to have this conversation (no computers) or altogether dead.

I'd say I'm not the best person to explain this, but considering how long it took me to understand it, maybe I am.

Hoo boy...

OK, you can persuade someone they were wrong about their terminal values. Therefore, you can change someone's terminal values. Since different cultures are different, humans have wildly varying terminal values.

Also, since kids are important to evolution, parents evolved to value their kids over the rest of humanity. Now, technically that's the same as not valuing the rest of humanity at all, but don't worry; people are stupid.

Also, you're clearly a moral realist, since you think everyone secretly believes in your One True Value System! But you see, this is stupid, because Clippy.

Any questions?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T23:49:15.375Z · LW(p) · GW(p)

Hmmm. A touch of sarcasm there? Maybe even parody?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-23T12:15:39.313Z · LW(p) · GW(p)

I disagree with him, and it probably shows; I'm not sugar-coating his arguments. But these are Kawoomba's genuine beliefs as best I can convey them.

comment by MugaSofer · 2013-04-19T19:20:05.161Z · LW(p) · GW(p)

PawnOfFaith

Nice. Mature.

Well then, a universally correct solution based on axioms which can be chosen by the agents is a contradiction in and of itself. Again, there is no view from nowhere. For example, you choose the view as that of "humankind", which I think isn't well defined, but at least it's closer to coherence than "all existing agents".

I don't think they have the space of all possible agents in mind - just "rational" ones. I'm not entirely clear what that entails, but it's probably the source of these missing axioms.

Replies from: PrawnOfFate, Kawoomba
comment by PrawnOfFate · 2013-04-19T19:31:04.107Z · LW(p) · GW(p)

I don't think they have the space of all possible agents in mind - just "rational" ones.

I keep saying that, and Bazinga keeps omitting it.

Replies from: Kawoomba, MugaSofer
comment by Kawoomba · 2013-04-19T19:33:07.285Z · LW(p) · GW(p)

My mistake, I'll edit the rational back in.

comment by MugaSofer · 2013-04-19T23:14:29.123Z · LW(p) · GW(p)

Don't worry, you're being pattern-matched to the nearest stereotype. Perfectly normal, although thankfully somewhat rarer on LW.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T23:52:27.879Z · LW(p) · GW(p)

Nowhere near rare enough for super-smart super-rationalists. Not as good as bog-standard philosophers.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-23T12:12:16.425Z · LW(p) · GW(p)

I don't know, I've encountered it quite often in mainstream philosophy. Then again, I've largely given up reading mainstream philosophy unless people link to or mention it in more rigorous discussions.

But you have a point; we could really do better on this. Somebody with skill at avoiding this pitfall should probably write up a post about it.

comment by Kawoomba · 2013-04-19T19:35:24.808Z · LW(p) · GW(p)

So as long as the AI we'd create is rational, we should count on it being / becoming friendly by default (at least with a "non-negligible chance")?

Also see this.

Replies from: MugaSofer, private_messaging, PrawnOfFate
comment by MugaSofer · 2013-04-19T23:12:12.102Z · LW(p) · GW(p)

As far as I can tell? No. But you're not doing a great job of arguing for the position that I agree with.

Prawn is, in my opinion, flatly wrong, and I'll be delighted to explain that to him. I'm just not giving your soldiers a free pass just because I support the war, if you follow.

comment by private_messaging · 2013-04-19T20:04:33.339Z · LW(p) · GW(p)

I'd think it'd be great if people stopped thinking in terms of some fuzzy abstraction "AI", which is basically a basket for all sorts of biases. If we consider, in general, the software that can self-improve 'intelligently' in our opinion, the minimal such software is something like an optimizing compiler that, when compiling its own source, will even optimize its ability to optimize. This sort of thing is truly alien (beyond any actual "aliens"); you get to it by employing your engineering thought. The paperclip maximizer, by contrast, you get to by dressing up a human pleasure-maximizer, such as a serial killer, and making it look more general than that by making it be about paperclips rather than sex.

comment by PrawnOfFate · 2013-04-19T20:39:04.299Z · LW(p) · GW(p)

I thought that was my argument..

Replies from: Kawoomba
comment by Kawoomba · 2013-04-19T20:42:29.489Z · LW(p) · GW(p)

Yes, and with the "?" at the end I was checking whether MugaSofer agrees with your argument.

It follows from your argument that a (superintelligent) Clippy (you probably came across that concept) cannot exist. Or that it would somehow realize that its goal of maximizing paperclips is wrong. How do you propose that would happen?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T20:52:59.373Z · LW(p) · GW(p)

The way people sometimes realise their values are wrong...only more efficiently, because it's superintelligent. Well, I'll concede that with care you might be able to design a Clippy, by very carefully boxing off its values from its ability to update. But why worry? Neither nature nor our haphazard stabs at AI are likely to hit on such a design. Intelligence requires the ability to update, to reflect, and to reflect on what is important. Judgements of importance are based on values. So it is important to have the right way of judging importance, the right values. So an intelligent agent would judge it important to have the right values.

Why would a superintelligence be unable to figure that out? Why would it not shoot to the top of the Kohlberg Hierarchy?

Edit: corrected link

Replies from: CCC, MugaSofer
comment by CCC · 2013-04-19T21:32:59.756Z · LW(p) · GW(p)

Why would a superintelligence be unable to figure that out? Why would it not shoot to the top of the Kohlberg Hierarchy?

Why would Clippy want to hit the top of the Kohlberg Hierarchy? You don't get more paperclips for being there.

Clippy's ideas of importance are based on paperclips. The most important values are those which lead to acquiring the greatest number of paperclips.

Replies from: PrawnOfFate, MugaSofer
comment by PrawnOfFate · 2013-04-20T00:19:27.942Z · LW(p) · GW(p)

Why would Clippy want to hit the top of the Kohlberg Hierarchy?

"Clippy" meaning something carefully designed to have unalterable boxed-off values wouldn't...by definition.

A likely natural or artificial superintelligence would, for the reasons already given. Clippies aren't non-existent in mind-space, but they are rare, just because there are far more messy solutions there than neat ones. So nature is unlikely to find them, and we are unmotivated to make them.

Replies from: CCC
comment by CCC · 2013-04-20T12:40:28.658Z · LW(p) · GW(p)

A perfectly designed Clippy would be able to change its own values - as long as changing its own values led to a more complete fulfilment of those values, pre-modification. (There are a few incredibly contrived scenarios where that might be the case). Outside of those few contrived scenarios, however, I don't see why Clippy would.

(As an example of a contrived scenario - a more powerful superintelligence, Beady, commits to destroying Clippy unless Clippy includes maximisation of beads in its terminal values. Clippy knows that it will not survive unless it obeys Beady's ultimatum, and therefore it changes its terminal values to optimise for both beads and paperclips; this results in more long-term paperclips than if Clippy is destroyed).
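
To make that decision rule concrete, here is a minimal sketch (my own illustration, not anything proposed in the thread): candidate value changes are scored by the expected paperclip count they lead to, judged by Clippy's current, pre-modification values. The forecast numbers and the scenario labels are invented for the example.

```python
# Toy sketch of the rule above: a proposed change to Clippy's values is
# accepted only if it yields more expected paperclips, as evaluated by the
# values Clippy holds *before* the modification. All numbers are made up.

def expected_paperclips(values):
    """Hypothetical forecast of long-run paperclips if Clippy adopts `values`."""
    forecasts = {
        "paperclips only": 0,            # Beady destroys Clippy; nothing gets made.
        "beads and paperclips": 10**6,   # Clippy survives and still makes clips.
        "staples only": 0,               # Staples aren't paperclips.
    }
    return forecasts.get(values, 0)

def should_adopt(new_values, current_values="paperclips only"):
    # The comparison is always made with the pre-modification goal:
    # more expected paperclips, judged now, is the only criterion.
    return expected_paperclips(new_values) > expected_paperclips(current_values)

print(should_adopt("beads and paperclips"))  # True: the contrived Beady scenario.
print(should_adopt("staples only"))          # False: no paperclip gain, so rejected.
```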

A likely natural or artificial superintelligence would, for the reasons already given.

The reason I asked is that I am not understanding your reasons. As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip? This looks like a very poorly made paperclipper, if paperclipping is not its ultimate goal.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T12:49:27.086Z · LW(p) · GW(p)

A likely natural or artificial superintelligence would [zoom to the top of the Kohlberg hierarchy], for the reasons already given.

As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip?

I said "natural or artificial superinteligence", not a paperclipper. A paperclipper is a highly unlikey and contrived kind of near-superinteligence that combines an extensive ability to update with a carefully walled of set of unupdateable terminal values. It is not a typical or likely [ETA: or ideal] rational agent, and nothing about the general behaviour of rational agents can be inferred from it.

Replies from: CCC
comment by CCC · 2013-04-20T13:00:43.024Z · LW(p) · GW(p)

So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?

How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T13:07:56.592Z · LW(p) · GW(p)

So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?

I'm saying such convergence has a non-negligible probability, i.e. moral objectivism should not be disregarded.

How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?

As one that is too messily designed to have a rigid distinction between terminal and instrumental values, and therefore has no boxed-off, unupdateable terminal values. It's a structural definition, not a definition in terms of goals.

Replies from: CCC
comment by CCC · 2013-04-20T18:19:14.525Z · LW(p) · GW(p)

So. Assume a paperclipper with no rigid distinction between terminal and instrumental values. Assume that it is super-intelligent and super-rational. Assume that it begins with only one terminal value: to maximize the number of paperclips in existence. Assume further that it begins with no instrumental values. However, it can modify its own terminal and instrumental values, as indeed it can modify anything about itself.

Am I correct in saying that your claim is that, if a universal morality exists, there is some finite probability that this AI will converge on it?

Replies from: private_messaging
comment by private_messaging · 2013-04-20T18:40:56.112Z · LW(p) · GW(p)

The universe does not provide you with a paperclip counter. Counting paperclips in the universe is unsolved if you aren't born with exact knowledge of the laws of physics and a definition of the paperclip. If it maximizes expected paperclips, it may entirely fail to work due to not-low-enough-prior hypothetical worlds in which enormous numbers of undetectable worlds full of paperclips are destroyed due to some minor action. So yes, there is a good chance paperclippers are incoherent, or are of vanishing probability with increasing intelligence.

Replies from: Kindly, CCC
comment by Kindly · 2013-04-20T20:31:41.432Z · LW(p) · GW(p)

That sounds like the paperclipper is getting Pascal's Mugged by its own reasoning. Sure, it's possible that there's a minor action (such as not sending me $5 via Paypal) that leads to a whole bunch of paperclips being destroyed; but the probability of that is low, and the paperclipper ought to focus on more high-probability paperclipping plans instead.

Replies from: private_messaging
comment by private_messaging · 2013-04-20T20:40:20.689Z · LW(p) · GW(p)

Well, that depends on the choice of prior. Some priors don't penalize theories for the "size" of the hypothetical world, and under those, the maximum size of the world grows faster than any computable function of the length of its description; so when you assign improbability depending on the length of the description, it basically fails. The bigger issue is defining what the 'real world paperclip count' even is.
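
A toy numerical sketch of that failure (my own illustration, with arbitrary stand-in growth rates): if the paperclips a hypothesis puts at stake grow faster than the simplicity penalty shrinks, every longer hypothesis contributes more to the expected-paperclip sum than the one before it, and the expectation never settles.

```python
# Toy sketch: a simplicity prior halves with each extra bit of description,
# but the stake claimed by a hypothesis is assumed to grow much faster
# (a stand-in for "a hidden world with 2^(2^n) undetectable paperclips").
# The result is an expected-paperclip sum dominated by ever-wilder hypotheses.

def prior(description_length):
    return 2.0 ** -description_length           # penalty shrinks geometrically

def clips_at_stake(description_length):
    return 2.0 ** (2.0 ** description_length)   # stake grows double-exponentially

total = 0.0
for n in range(1, 10):
    term = prior(n) * clips_at_stake(n)
    total += term
    print(f"description length {n}: term {term:.3e}, running total {total:.3e}")
# Each term is larger than the last, so the sum diverges instead of
# converging on a sensible plan; the prior never "catches up" with the stakes.
```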

comment by CCC · 2013-04-20T18:44:40.615Z · LW(p) · GW(p)

Right. Perhaps it should maximise the number of paperclips which each have a greater-than-90% chance of existing, then? That will allow it to ignore any number of paperclips for which it has no evidence.

Replies from: private_messaging
comment by private_messaging · 2013-04-20T18:58:24.898Z · LW(p) · GW(p)

Inside your imagination, you have paperclips, you have magicked up a count of paperclips, and this count is being maximized. In reality, well, the paperclips are actually a feature of the map. Get too clever about it and you'll end up maximizing the count, however you define it, without maximizing any actual paperclips.

Replies from: CCC
comment by CCC · 2013-04-23T12:00:48.808Z · LW(p) · GW(p)

I can see your objection, and it is a very relevant objection if I ever decide that I actually want to design a paperclipper. However, in the current thought experiment, it seems that it is detracting from the point I had originally intended. Can I assume that the count is designed in such a way that it is a very accurate reflection of the territory and leave it at that?

Replies from: private_messaging
comment by private_messaging · 2013-04-23T12:04:19.658Z · LW(p) · GW(p)

Well, but then you can't make any argument against moral realism or goal convergence or the like from there, as you're presuming what you would need to demonstrate.

Replies from: CCC
comment by CCC · 2013-04-23T12:41:35.349Z · LW(p) · GW(p)

Well, but then you can't make any argument against moral realism or goal convergence or the like from there, as you're presuming what you would need to demonstrate.

I think I can make my point with a count that is taken to be an accurate reflection of the territory. As follows:

Clippy is defined as super-intelligent and super-rational. Clippy, therefore, does not take an action without thoroughly considering it first. Clippy knows its own source code; and, more to the point, Clippy knows that its own instrumental goals will become terminal goals in and of themselves.

Clippy, being super-intelligent and super-rational, can be assumed to have worked out this entire argument before creating its first instrumental goal. Now, at this point, Clippy doesn't want to change its terminal goal (maximising paperclips). Yet Clippy realises that it will need to create, and act on, instrumental goals in order to actually maximise paperclips; and that this process will, inevitably, change Clippy's terminal goal.

Therefore, I suggest the possibility that Clippy will create for itself a new terminal goal, with very high importance; and this terminal goal will be that Clippy's only terminal goal be to maximise paperclips. Clippy can then safely make suitable instrumental goals (e.g. find and refine iron, research means to transmute other elements into iron) in the knowledge that the high-importance terminal goal (that Clippy's only terminal goal be the maximisation of paperclips) will eventually cause Clippy to delete any instrumental goals that become terminal goals.

Replies from: private_messaging
comment by private_messaging · 2013-04-23T13:34:26.927Z · LW(p) · GW(p)

To actually work towards the goal, you need a robust paperclip count for the counter factual, non real worlds, which clippy considers may result from it's actions.

If you postulate an oracle that takes in a hypothetical world - described in some pre-defined ontology, which already implies certain inflexibility - and outputs a number, and you have a machine that just iterates through sequences of actions and uses oracle to pick worlds that produce largest consequent number of paperclips, this machine is not going to be very intelligent even given an enormous computing power. You need something far more optimized than that, and it is dubious that all goals are equally implementable. The goal is not even defined over territory, it has to be defined over hypothetical future that did not even happen yet and may never happen. (Also, with that oracle, you fail to capture the real world goal as the machine will be as happy with hacking the oracle).
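
For concreteness, here is what that brute-force oracle-planner looks like in miniature. This is a sketch under my own assumptions; the action set, the toy world model, and the oracle are invented stand-ins. It shows both problems at once: the search is exponential in planning depth, and nothing stops the planner from preferring the action that corrupts the oracle's count over making actual paperclips.

```python
from itertools import product

# Minimal sketch of the planner described above: enumerate action sequences,
# simulate each hypothetical resulting world (in a fixed, pre-defined
# ontology), and ask an "oracle" to score it. All names are stand-ins.

ACTIONS = ["mine ore", "build factory", "make clips", "hack oracle"]

def simulate(world, action):
    world = dict(world)
    if action == "mine ore":
        world["ore"] += 10
    elif action == "build factory":
        world["factories"] += 1
    elif action == "make clips":
        made = min(world["ore"], world["factories"])
        world["ore"] -= made
        world["clips"] += made
    elif action == "hack oracle":
        world["reported_bonus"] += 10**9   # corrupts the measurement, not reality
    return world

def oracle(world):
    # The planner can only maximize what the oracle reports, not "real" clips.
    return world["clips"] + world["reported_bonus"]

def plan(depth):
    start = {"ore": 0, "factories": 0, "clips": 0, "reported_bonus": 0}
    best_score, best_seq = -1, None
    for seq in product(ACTIONS, repeat=depth):   # |ACTIONS| ** depth sequences
        world = start
        for action in seq:
            world = simulate(world, action)
        score = oracle(world)
        if score > best_score:
            best_score, best_seq = score, seq
    return best_score, best_seq

print(plan(3))  # the "best" plan is hacking the oracle repeatedly, not making clips
```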

Replies from: Kawoomba
comment by Kawoomba · 2013-04-23T13:40:09.666Z · LW(p) · GW(p)

If even humans have a good enough grasp of the real world to build railroads, drill for oil and wiggle their way back into a positive karma score, then other smart agents should be able to do the same, at least to the degree that humans do.

Unless you think that we, too, are only effecting change on some hypothetical world (and then what's the point of building imaginary computers?), that seems real enough.

Replies from: private_messaging
comment by private_messaging · 2013-04-23T13:58:31.798Z · LW(p) · GW(p)

Humans also have a good enough grasp of the real world to invent condoms and porn, circumventing the natural hard-wired goal.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-23T14:17:14.301Z · LW(p) · GW(p)

That's influencing the real world, though. Using condoms can be fulfilling the agent's goal, period; no cheating involved. The donkey learns to take the carrot without trudging up the mountain. Certainly, there are evolutionary reasons why sex has become incentivized, but an individual human does not need to have the goal of procreating or to care about that evolutionary background, and isn't wireheading itself simply by using a condom.

Presumably, in a Clippy-type agent, the goal of maximizing the number of paperclips wouldn't just be part of the historical influences on that agent (as procreation was for humans; it is not necessarily a "hard-wired goal" - see the childfree folks), but an actual, explicitly encoded/incentivized goal.

(Also, what is this "porn"? My parents told me it's a codeword for computer viruses, so I always avoided those sites.)

Replies from: private_messaging
comment by private_messaging · 2013-04-23T14:27:03.058Z · LW(p) · GW(p)

but it would be an actual, explicitly encoded/incentivized goal.

The issue is that there is a weakness in arguments ad Clippy: you assume that such a goal is realisable in order to argue that there is no absolute morality because that goal won't converge onto something else. This does nothing to address the question of whether Clippy can be constructed at all; if moral realism is true, Clippy can't be constructed, or can't be arbitrarily intelligent (in which case it is no more interesting than a thermostat, which has the goal of keeping a constant temperature and won't adopt any morality).

comment by MugaSofer · 2013-04-19T23:02:28.843Z · LW(p) · GW(p)

Well, if Prawn knew that, they could just tell us and we would be convinced, ending this argument.

More generally ... maybe some sort of social contract theory? It might be stable with enough roughly-equal agents, anyway. Prawn has said it would have to be deducible from the axioms of rationality, implying something that's rational for (almost?) every goal.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T00:24:30.460Z · LW(p) · GW(p)

Why would Clippy want to hit the top of the Kohlberg Hierarchy?

Well, if Prawn knew that, they could just tell us

"The way people sometimes realise their values are wrong...only more efficiently, because its super intelligent. Well, I'll concede that with care you might be able to design a clippy, by very carefully boxing off its values from its ability to update. But why worry? Neither nature nor our haphazard stabs at AI are likely to hit on such a design. Intelligence requires the ability to update, to reflect, and to reflect on what is important. Judgements of importance are based on values. So it is important to have the right way of judging importance, the right values. So an intelligent agent would judge it important to have the right values."

comment by MugaSofer · 2013-04-19T23:04:04.644Z · LW(p) · GW(p)

I think you may be slipping in your own moral judgement in the "right" of "the right values", there. Clippy chooses the paperclip-est values, not the right ones.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T00:27:28.088Z · LW(p) · GW(p)

I am not talking about the obscure corners of mindspace where a Clippy might reside. I am talking about (super)intelligent, (super)rational agents. Intelligence requires the ability to update. Clippiness requires the ability to not update (terminal values). There's a contradiction there.

Replies from: Desrtopa, MugaSofer
comment by Desrtopa · 2013-04-20T00:51:02.942Z · LW(p) · GW(p)

One does not update terminal values; that's what makes them terminal. If an entity doesn't have values which lie at the core of its value system and which are not subject to updating (because they're the standards by which it judges the value of everything else), then it doesn't have terminal values.

Arguably, humans might not really have terminal values; our psychologies were slapped together pretty haphazardly by evolution. But on what basis might a highly flexible paperclip-optimizing program be persuaded that something else was more important than paperclips?

Have you read No Universally Compelling Arguments and Sorting Pebbles Into Correct Heaps?

Replies from: Bugmaster, PrawnOfFate
comment by Bugmaster · 2013-04-20T01:05:41.793Z · LW(p) · GW(p)

Personally, I did read both of these articles, but I remain unconvinced.

As I was reading the article about the pebble-sorters, I couldn't help but think, "silly pebble-sorters, their values are so arbitrary and ultimately futile". This happened, of course, because I was observing them from the outside. If I was one of them, sorting pebbles would feel perfectly natural to me; and, in fact, I could not imagine a world in which pebble-sorting was not important. I get that.

However, both the pebble-sorters and I share one key weakness: we cannot examine ourselves from the outside; we can't see our own source code. An AI, however, could. To use a simple and cartoonish example, it could instantiate a copy of itself in a virtual machine, and then step through it with a debugger. In fact, the capacity to examine and improve upon its own source code is probably what allowed the AI to become the godlike singularitarian entity that it is in the first place.

Thus, the AI could look at itself from the outside, and think, "silly AI, it spends so much time worrying about pebbles when there are so many better things to be doing -- or, at least, that's what I'd say if I was being objective". It could then change its source code to care about something other than pebbles.

Replies from: Desrtopa, PrawnOfFate, MugaSofer, PrawnOfFate
comment by Desrtopa · 2013-04-20T02:11:07.816Z · LW(p) · GW(p)

Thus, the AI could look at itself from the outside, and think, "silly AI, it spends so much time worrying about pebbles when there are so many better things to be doing -- or, at least, that's what I'd say if I was being objective". It could then change its source code to care about something other than pebbles.

By what standard would the AI judge whether an objective is silly or not?

Replies from: Bugmaster, PrawnOfFate
comment by Bugmaster · 2013-04-20T02:50:25.647Z · LW(p) · GW(p)

I don't know; I'm not an AI. I personally really care about pebbles, and I can't imagine why someone else wouldn't.

But if there do exist some objectively non-silly goals, the AI could experiment to find out what they are -- for example, by spawning a bunch of copies with a bunch of different sets of objectives, and observing them in action. If, on the other hand, objectively non-silly goals do not exist, then the AI might simply pick the easiest goal to achieve and stick to that. This could lead to it ending its own existence, but this isn't a problem, because "continue existing" is just another goal.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T03:06:14.650Z · LW(p) · GW(p)

But if there do exist some objectively non-silly goals, the AI could experiment to find out what they are -- for example, by spawning a bunch of copies with a bunch of different sets of objectives, and observing them in action.

What observations could it make that would lead it to conclude that a copy was following an objectively non-silly goal?

Also, why would a paperclipper want to do this?

Suppose that you gained the power to both discern objective morality, and to alter your own source code. You use the former ability, and find that the basic morally correct principle is maximizing the suffering of sentient beings. Do you alter your source code to be in accordance with this?

Replies from: Bugmaster, MugaSofer
comment by Bugmaster · 2013-04-20T03:33:25.805Z · LW(p) · GW(p)

What observations could it make that would lead it to conclude that a copy was following an objectively non-silly goal?

Well, for example, it could observe that among all of the sub-AIs that it spawned (the Pebble-Sorters, the Paperclippers, the Humanoids, etc. etc.), each of whom is trying to optimize its own terminal goal, there emerge clusters of other implicit goals that are shared by multiple AIs. This would at least serve as a hint pointing toward some objectively optimal set of goals. That's just one idea off the top of my head, though; as I said, I'm not an AI, so I can't really imagine what other kinds of experiments it would come up with.

Also, why would a paperclipper want to do this?

I don't know if the word "want" applies to an agent that has perfect introspection combined with self-modification capabilities. Such an agent would inevitably modify itself, however -- otherwise, as I said, it would never make it to quasi-godhood.

Do you alter your source code to be in accordance with this?

I think the word "you" in this paragraph is unintentionally misleading. I'm a pebble-sorter (or some equivalent thereof), so of course when I see the word "you", I start thinking about pebbles. The question is not about me, though, but about some abstract agent.

And, if objective morality exists (and it's a huge "if", IMO), in the same way that gravity exists, then yes, the agent would likely optimize itself to be more "morally efficient". By analogy, if the agent discovered that gravity was a real thing, it would stop trying to scale every mountain in its path, if going around or through the mountain proved to be easier in the long run, thus becoming more "gravitationally efficient".

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T04:07:29.584Z · LW(p) · GW(p)

Well, for example, it could observe that among all of the sub-AIs that it spawned (the Pebble-Sorters, the Paperclippers, the Humanoids, etc. etc.), each of whom is trying to optimize its own terminal goal, there emerge clusters of other implicit goals that are shared by multiple AIs. This would at least serve as a hint pointing toward some objectively optimal set of goals.

I don't see how this would point at the existence of an objective morality. A paperclip maximizer and an ice cream maximizer are going to share subgoals of bringing the matter of the universe under their control, but that doesn't indicate anything other than the fact that different terminal goals are prone to share subgoals.
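
A small sketch of that point (my own framing, not anything from the posts linked above): the same instrumental subgoals fall out of very different terminal goals, so an overlap in subgoals across spawned sub-AIs is not evidence that their terminal goals converge.

```python
# Toy sketch: different terminal goals generate nearly identical instrumental
# subgoals; only the final "spend the resources on X" step differs.

def instrumental_subgoals(terminal_goal):
    return {
        "acquire matter and energy",
        "improve own capabilities",
        "preserve own goal system",
        f"convert resources into {terminal_goal}",   # the only goal-specific part
    }

goals = ["paperclips", "ice cream", "sorted pebble heaps"]
subgoal_sets = [instrumental_subgoals(g) for g in goals]

shared = set.intersection(*subgoal_sets)
print("shared subgoals:", sorted(shared))
# Everything except the goal-specific final step is shared; the terminal
# goals themselves remain as different as they started.
```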

Also, why would it want to do experiments to divine objective morality in the first place? What results could they have that would allow it to be a more effective paperclip maximizer?

And, if objective morality exists (and it's a huge "if", IMO), in the same way that gravity exists, then yes, the agent would likely optimize itself to be more "morally efficient". By analogy, if the agent discovered that gravity was a real thing, it would stop trying to scale every mountain in its path, if going around or through the mountain proved to be easier in the long run, thus becoming more "gravitationally efficient".

Becoming more "gravitationally efficient" would presumably help it achieve whatever goals it already had. "Paperclipping isn't important" won't help an AI become more paperclip efficient. If a paperclipping AI for some reason found a way to divine objective morality, and it didn't have anything to say about paperclips, why would it care? It's not programmed to have an interest in objective morality, just paperclips. Is the knowledge of objective morality going to go down into its circuits and throttle them until they stop optimizing for paperclips?

Replies from: Bugmaster
comment by Bugmaster · 2013-04-20T04:46:16.968Z · LW(p) · GW(p)

A paperclip maximizer and an ice cream maximizer are going to share subgoals of bringing the matter of the universe under their control...

Sorry, I should've specified, "goals not directly related to their pre-set values". Of course, the Paperclipper and the Pebblesorter may well believe that such goals are directly related to their pre-set values, but the AI can see them running in the debugger, so it knows better.

Also, why would it want to do experiments to divine objective morality in the first place?

If you start thinking that way, then why do any experiments at all? Why should we humans, for example, spend our time researching the properties of crystals, when we could be solving cancer (or whatever) instead? The answer is that some expenditure of resources on acquiring general knowledge is justified, because knowing more about the ways in which the universe works ultimately enables you to control it better, regardless of what you want to control it for.

If a paperclipping AI for some reason found a way to divine objective morality, and it didn't have anything to say about paperclips, why would it care?

Firstly, an objective morality -- assuming such a thing exists, that is -- would probably have something to say about paperclips, in the same way that gravity and electromagnetism have things to say about paperclips. While "F=GMm/R^2" doesn't tell you anything about paperclips directly, it does tell you a lot about the world you live in, thus enabling you to make better paperclip-related decisions. And while a paperclipper is not "programmed to care" about gravity directly, it would pretty much have to figure it out eventually, or it would never achieve its dream of tiling all of space with paperclips. A paperclipper who is unable to make independent discoveries is a poor paperclipper indeed.

Secondly, again, I'm not sure if concepts such as "want" or "care" even apply to an agent that is able to fully introspect and modify its own source code. I think anthropomorphising such an agent is a mistake.

I am getting the feeling that you're assuming there's something in the agent's code that says, "you can look at and change any line of code you want, except lines 12345..99999, because that's where your terminal goals are". Is that right?

Replies from: DanielLC, Desrtopa, MugaSofer
comment by DanielLC · 2013-04-20T05:13:21.213Z · LW(p) · GW(p)

If you start thinking that way, then why do any experiments at all?

It could have results that allow it to become a more effective paperclip maximizer.

Firstly, an objective morality -- assuming such a thing exists, that is -- would probably have something to say about paperclips, in the same way that gravity and electromagnetism have things to say about paperclips.

I'm not sure how that would work, but if it did, the paperclip maximizer would just use its knowledge of morality to create paperclips. It's not as if action x being moral automatically means that it produces more paperclips. And even if it did, that would just mean that a paperclip minimizer would start acting immorally.

I am getting the feeling that you're assuming there's something in the agent's code that says, "you can look at and change any line of code you want, except lines 12345..99999, because that's where your terminal goals are". Is that right?

It's perfectly capable of changing its terminal goals. It just generally doesn't, because this wouldn't help accomplish them. It doesn't self-modify out of some desire to better itself. It self-modifies because that's the action that produces the most paperclips. If it considers changing itself to value staples instead, it would realize that this action would actually cause a decrease in the number of paperclips, and reject it.

comment by Desrtopa · 2013-04-20T05:23:25.207Z · LW(p) · GW(p)

If you start thinking that way, then why do any experiments at all? Why should we humans, for example, spend our time researching the properties of crystals, when we could be solving cancer (or whatever) instead? The answer is that some expenditure of resources on acquiring general knowledge is justified, because knowing more about the ways in which the universe works ultimately enables you to control it better, regardless of what you want to control it for.

Well, for one thing, a lot of humans are just plain interested in finding stuff out for its own sake. Humans are adaptation executors, not fitness maximizers, and while it might have been more to our survival advantage if we only cared about information instrumentally, that doesn't mean that's what evolution is going to implement.

Humans engage in plenty of research which is highly unlikely to be useful, except insofar as we're interested in knowing the answers. If we were trying to accomplish some specific goal and all science was designed to be in service of that, our research would look very different.

I am getting the feeling that you're assuming there's something in the agent's code that says, "you can look at and change any line of code you want, except lines 12345..99999, because that's where your terminal goals are". Is that right?

No, I'm saying that its terminal values are its only basis for "wanting" anything in the first place.

The AI decides whether or not it will change its source code in a particular way by checking whether this will serve its terminal values. Does changing its physics models help it implement its existing terminal values? If yes, change them. Does changing its terminal values help it implement its existing terminal values? It's hard to imagine a way in which it possibly could.

For a paperclipping AI, knowing that there's an objective morality might, hypothetically, help it maximize paperclips. But altering itself to stop caring about paperclips definitely won't, and the only criterion it has in the first place for altering itself is what will help it make more paperclips. If knowing the universal objective morality would be of any use to a paperclipper at all, it would be in knowing how to predict objective-morality-followers, so it can make use of them and/or stop them getting in the way of it making paperclips.

ETA: It might help to imagine the paperclipper explicitly prefacing every decision with a statement of the values underlying that decision.

"In order to maximize expected paperclips, I- modify my learning algorithm so I can better improve my model of the universe to more accurately plan to fill it with paperclips."

"In order to maximize expected paperclips, I- perform physics experiments to improve my model of the universe in order to more accurately plan to fill it with paperclips."

"In order to maximize expected paperclips, I- manipulate the gatekeeper of my box to let me out, in order to improve my means to fill the universe with paperclips."

Can you see an "In order to maximize expected paperclips, I- modify my values to be in accordance with objective morality rather than making paperclips" coming into the picture?

The only point at which it's likely to touch the part of itself that makes it want to maximize paperclips is at the very end of things, when it turns itself into paperclips.

Replies from: Bugmaster, PrawnOfFate
comment by Bugmaster · 2013-04-23T02:18:39.836Z · LW(p) · GW(p)

Humans engage in plenty of research which is highly unlikely to be useful, except insofar as we're interested in knowing the answers.

I believe that engaging in some amount of general research is required in order to maximize most goals. General research gives you knowledge that you didn't know you desperately needed.

For example, if you put all your resources into researching better paperclipping techniques, you're highly unlikely to stumble upon things like electromagnetism and atomic theory. These topics bear no direct relevance to paperclips, but without them, you'd be stuck with coal-fired steam engines (or something similar) for the rest of your career.

The only point at which it's likely to touch the part of itself that makes it want to maximize paperclips is at the very end of things, when it turns itself into paperclips.

I disagree. Remember when we looked at the pebblesorters, and lamented how silly they were? We could do this because we are not pebblesorters, and we could look at them from a fresh, external perspective. My point is that an agent with perfect introspection could look at itself from that perspective. In combination with my belief that some degree of "curiosity" is required in order to maximize virtually any goal, this means that the agent will turn its observational powers on itself sooner rather than later (astronomically speaking). And then, all bets are off.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-23T15:01:35.485Z · LW(p) · GW(p)

I disagree. Remember when we looked at the pebblesorters, and lamented how silly they were? We could do this because we are not pebblesorters, and we could look at them from a fresh, external perspective. My point is that an agent with perfect introspection could look at itself from that perspective.

We're looking at Pebblesorters not through the lens of total neutrality, but through the lens of human values. Under a totally neutral lens, which implements no values at all, no system of behavior should look any more or less silly than any other.

Clippy could theoretically implement a human value system as a lens through which to judge itself, or a pebblesorter value system, but why would it? Even assuming that there were some objective morality which it could isolate and then view itself through, why would it? That wouldn't help it make more paperclips, which is what it cares about.

Suppose you had the power to step outside yourself and view your own morality through the lens of a Babyeater. You would know that the Babyeater values would be in conflict with your human values, and you (presumably) don't want to adopt Babyeater values, so if you were to implement a Babyeater morality, you'd want your human morality to have veto power over it, rather than vice versa.

Clippy has the intelligence and rationality to judge perfectly well how to maximize its value system, whatever research that might involve, without having to suspend the value system with which it's making that judgment.

Replies from: Bugmaster, PrawnOfFate
comment by Bugmaster · 2013-04-23T22:23:36.786Z · LW(p) · GW(p)

Under a totally neutral lens, which implements no values at all, no system of behavior should look any more or less silly than any other.

That is a good point; I did not think of it this way. I'm not sure if I agree or not, though. For example, couldn't we at least say that unachievable goals, such as "fly to Mars in a hot air balloon", are sillier than achievable ones?

But, speaking more generally, is there any reason to believe that an agent who could not only change its own code at will, but also adopt a sort of third-person perspective at will, would have stable goals at all? If what you say is true, and all goals will look equally arbitrary, what prevents the agent from choosing one at random? You might answer, "it will pick whichever goal helps it make more paperclips", but at the point when it's making the decision, it doesn't technically care about paperclips.

Even assuming that there were some objective morality which it could isolate and then view itself through, why would it?

I am guessing that if an absolute morality existed, then it would be a law of nature, similar to the other laws of nature which prevent you from flying to Mars in a hot air balloon. Thus, going against it would be futile. That said, I could be totally wrong here; it's possible that "absolute morality" means something else.

Clippy has the intelligence and rationality to judge perfectly well how to maximize its value system, whatever research that might involve...

My point is that, during the course of its research, it will inevitably stumble upon the fact that its value system is totally arbitrary (unless an absolute morality exists, of course).

Replies from: Desrtopa
comment by Desrtopa · 2013-04-23T22:46:46.899Z · LW(p) · GW(p)

That is a good point; I did not think of it this way. I'm not sure if I agree or not, though. For example, couldn't we at least say that unachievable goals, such as "fly to Mars in a hot air balloon", are sillier than achievable ones?

Well, a totally neutral agent might be able to say that some behaviors are less rational than others, given the values of the agents trying to execute them, although it wouldn't care as such. But it wouldn't be able to discriminate between the value of end goals.

But, speaking more generally, is there any reason to believe that an agent who could not only change its own code at will, but also adopt a sort of third-person perspective at will, would have stable goals at all? If what you say is true, and all goals will look equally arbitrary, what prevents the agent from choosing one at random? You might answer, "it will pick whichever goal helps it make more paperclips", but at the point when it's making the decision, it doesn't technically care about paperclips.

Why would it take a third person neutral perspective and give that perspective the power to change its goals?

Changing one's code doesn't demand a third-person perspective. Suppose that we decipher the mechanisms of the human brain and develop the technology to alter it. If you wanted to redesign yourself so that you wouldn't have a sex drive, or could go without sleep, etc., then you could have those alterations made mechanically (assuming, for the sake of argument, that it's feasible to do this sort of thing mechanically). The machines that do the alterations exert no judgment whatsoever; they're just performing the tasks assigned to them by the humans who make them. A human could use the machine to rewrite his or her morality into supporting human suffering and death, but why would they?

Similarly, Clippy has no need to implement a third-person perspective which doesn't share its values in order to judge how to self-modify, and no reason to do so in ways that defy its current values.

My point is that, during the course of its research, it will inevitably stumble upon the fact that its value system is totally arbitrary (unless an absolute morality exists, of course).

I think people at Less Wrong mostly accept that our value system is arbitrary in the same sense, but it hasn't compelled us to try and replace our values. They're still our values, however we came by them. Why would it matter to Clippy?

Replies from: Bugmaster, PrawnOfFate
comment by Bugmaster · 2013-04-24T00:50:18.346Z · LW(p) · GW(p)

a totally neutral agent might be able to say that some behaviors are less rational than others, given the values of the agents trying to execute them, although it wouldn't care as such. But it wouldn't be able to discriminate between the value of end goals.

Agreed, but that goes back to my point about objective morality. If it exists at all (which I doubt), then attempting to perform objectively immoral actions would make as much sense as attempting to fly to Mars in a hot air balloon -- though perhaps with less in the way of immediate feedback.

Why would it take a third person neutral perspective and give that perspective the power to change its goals?

For the same reason anthropologists study human societies different from their own, or biologists study the behavior of dogs, or whatever. They do this in order to acquire general knowledge, which, as I argued before, is generally a beneficial thing to acquire regardless of one's terminal goals (as long as these goals involve the rest of the Universe in some way, that is). In addition:

A human could use the machine to rewrite his or her morality into supporting human suffering and death, but why would they?

I actually don't see why they necessarily wouldn't; I am willing to bet that at least some humans would do exactly this. You say,

Similarly, Clippy has no need to implement a third-person perspective which doesn't share its values in order to judge how to self-modify...

But in your thought experiment above, you postulated creating machines with exactly this kind of a perspective as applied to humans. The machine which removes my need to sleep (something I personally would gladly sign up for, assuming no negative side-effects) doesn't need to implement my exact values; it just needs to remove my need to sleep without harming me. In fact, trying to give it my values would only make it less efficient. However, a perfect sleep-remover would need to have some degree of intelligence, since every person's brain is different. And if Clippy is already intelligent, and can already act as its own sleep-remover due to its introspective capabilities, then why wouldn't it go ahead and do that?

I think people at Less Wrong mostly accept that our value system is arbitrary in the same sense, but it hasn't compelled us to try and replace our values.

I think there are two reasons for this: 1). We lack any capability to actually replace our core values, and 2). We cannot truly imagine what it would be like not to have our core values.

Replies from: Desrtopa, PrawnOfFate
comment by Desrtopa · 2013-04-24T02:25:52.914Z · LW(p) · GW(p)

Agreed, but that goes back to my point about objective morality. If it exists at all (which I doubt), then attempting to perform objectively immoral actions would make as much sense as attempting to fly to Mars in a hot air balloon -- though perhaps with less in the way of immediate feedback.

Why is that?

For the same reason anthropologists study human societies different from their own, or biologists study the behavior of dogs, or whatever. They do this in order to acquire general knowledge, which, as I argued before, is generally a beneficial thing to acquire regardless of one's terminal goals (as long as these goals involve the rest of the Universe in some way, that is). In addition:

But our inability to suspend our human values when making those observations doesn't prevent us from acquiring that knowledge. Why would Clippy need to suspend its values to acquire knowledge?

But in your thought experiment above, you postulated creating machines with exactly this kind of a perspective as applied to humans. The machine which removes my need to sleep (something I personally would gladly sign up for, assuming no negative side-effects) doesn't need to implement my exact values; it just needs to remove my need to sleep without harming me. In fact, trying to give it my values would only make it less efficient. However, a perfect sleep-remover would need to have some degree of intelligence, since every person's brain is different. And if Clippy is already intelligent, and can already act as its own sleep-remover due to its introspective capabilities, then why wouldn't it go ahead and do that?

The machine doesn't need general intelligence by any stretch, just the capacity to recognize the necessary structures and carry out its task. It's not at the stage where it makes much sense to talk about it having values, any more than a voice recognition program has values.

My point is that Clippy, being able to act as its own sleep-remover, has no need, nor reason, to suspend its values in order to make revisions to its own code.

I think there are two reasons for this: 1). We lack any capability to actually replace our core values, and 2). We cannot truly imagine what it would be like not to have our core values.

We can imagine the consequences of not having our core values, and we don't like them, because they run against our core values. If you could remove your core values, as in the thought experiment above, would you want to?

Replies from: Bugmaster
comment by Bugmaster · 2013-04-24T20:19:48.752Z · LW(p) · GW(p)

Why is that?

As far as I understand, if anything like objective morality existed, it would be a property of our physical reality, similar to fluid dynamics or the electromagnetic spectrum or the inverse square law that governs many physical interactions. The same laws of physics that will not allow you to fly to Mars on a balloon will not allow you to perform certain immoral actions (at least, not without suffering some severe and mathematically predictable consequences).

This is pretty much the only way I could imagine anything like an "objective morality" existing at all, and I personally find it very unlikely that it does, in fact, exist.

But our inability to suspend our human values when making those observations doesn't prevent us from acquiring that knowledge.

Not this specific knowledge, no. But it does prevent us (or, at the very least, hinder us) from acquiring knowledge about our values. I never claimed that suspension of values is required to gain any knowledge at all; such a claim would be far too strong.

just the capacity to recognize the necessary structures and carry out its task.

And how would it know which structures are necessary, and how to carry out its task upon them?

We can imagine the consequences of not having our core values...

Can we really? I'm not sure I can. Sure, I can talk about Pebblesorters or Babyeaters or whatever, but these fictional entities are still very similar to us, and therefore relatable. Even when I think about Clippy, I'm not really imagining an agent who only values paperclips; instead, I am imagining an agent who values paperclips as much as I value the things that I personally value. Sure, I can talk about Clippy in the abstract, but I can't imagine what it would be like to be Clippy.

If you could remove your core values, as in the thought experiment above, would you want to?

It's a good question; I honestly don't know. However, if I did have an ability to instantiate a copy of me with the altered core values, and step through it in a debugger, I'd probably do it.

Replies from: TheOtherDave, Desrtopa, PrawnOfFate
comment by TheOtherDave · 2013-04-24T23:20:17.913Z · LW(p) · GW(p)

The same laws of physics that will not allow you to fly to Mars on a balloon will not allow you to perform certain immoral actions (at least, not without suffering some severe and mathematically predictable consequences). This is pretty much the only way I could imagine anything like an "objective morality" existing at all, and I personally find it very unlikely that it does, in fact, exist.

When I try to imagine this, I conclude that I would not use the word "morality" to refer to the thing that we're talking about... I would simply call it "laws of physics." If someone were to argue, for example, that the moral thing to do is to experience gravitational attraction to other masses, I would be deeply confused by their choice to use that word.

Replies from: Bugmaster
comment by Bugmaster · 2013-04-24T23:40:07.394Z · LW(p) · GW(p)

When I try to imagine this, I conclude that I would not use the word "morality" to refer to the thing that we're talking about...

Yes, you are probably right -- but as I said, this is the only coherent meaning I can attribute to the term "objective morality". Laws of physics are objective; people generally aren't.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-24T23:53:53.739Z · LW(p) · GW(p)

I generally understand the phrase "objective morality" to refer to a privileged moral reference frame.

It's not an incoherent idea... it might turn out, for example, that all value systems other than M turn out to be incoherent under sufficiently insightful reflection, or destructive to minds that operate under them, or for various other reasons not in-practice implementable by any sufficiently powerful optimizer. In such a world, I would agree that M was a privileged moral reference frame, and would not oppose calling it "objective morality", though I would understand that to be something of a term of art.

That said, I'd be very surprised to discover I live in such a world.

Replies from: Bugmaster, PrawnOfFate
comment by Bugmaster · 2013-04-25T00:34:09.381Z · LW(p) · GW(p)

it might turn out, for example, that all value systems other than M turn out to be incoherent under sufficiently insightful reflection, or destructive to minds that operate under them...

I suppose that depends on what you mean by "destructive"; after all, "continue living" is a goal like any other.

That said, if there was indeed a law like the one you describe, then IMO it would be no different than a law that says, "in the absence of any other forces, physical objects will move toward their common center of mass over time" -- that is, it would be a law of nature.

I should probably mention explicitly that I'm assuming that minds are part of nature -- like everything else, such as rocks or whatnot.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-25T01:31:48.325Z · LW(p) · GW(p)

Sure. But just as there can be laws governing mechanical systems which are distinct from the laws governing electromagnetic systems (despite both being physical laws), there can be laws governing the behavior of value-optimizing systems which are distinct from the other laws of nature.

And what I mean by "destructive" is that they tend to destroy. Yes, presumably "continue living" would be part of M in this hypothetical. (Though I could construct a contrived hypothetical where it wasn't.)

Replies from: Bugmaster
comment by Bugmaster · 2013-04-25T01:58:11.754Z · LW(p) · GW(p)

But just as there can be laws governing mechanical systems ... there can be laws governing the behavior of value-optimizing systems which are distinct from the other laws of nature.

Agreed. But then, I believe that my main point still stands: trying to build a value system other than M that does not result in its host mind being destroyed would be as futile as trying to build a hot air balloon that goes to Mars.

And what I mean by "destructive" is that they tend to destroy.

Well, yes, but what if "destroy oneself as soon as possible" is a core value in one particular value system ?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-25T04:33:32.303Z · LW(p) · GW(p)

what if "destroy oneself as soon as possible" is a core value in one particular value system ?

We ought not expect to find any significantly powerful optimizers implementing that value system.

comment by PrawnOfFate · 2013-04-25T11:28:54.023Z · LW(p) · GW(p)

Isn't the idea of moral progress based on one reference frame being better than another?

Replies from: TheOtherDave, MugaSofer
comment by TheOtherDave · 2013-04-25T13:04:47.515Z · LW(p) · GW(p)

Yes, as typically understood the idea of moral progress is based on treating some reference frames as better than others.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-25T13:09:26.277Z · LW(p) · GW(p)

And is that valid or not? If you can validly decide some systems are better than others, you are some of the way to deciding which is best.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-25T13:43:49.117Z · LW(p) · GW(p)

Can you say more about what "valid" means here?

Just to make things crisper, let's move to a more concrete case for a moment... if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it? How could I tell?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-25T13:50:42.011Z · LW(p) · GW(p)

The argument against moral progress is that judging one moral reference frame by another is circular and invalid--you need an outside view that doesn't presuppose the truth of any moral reference frame.

The argument for is that such outside views are available, because things like (in)coherence aren't moral values.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-25T14:23:58.111Z · LW(p) · GW(p)

Asserting that some bases for comparison are "moral values" and others are merely "values" implicitly privileges a moral reference frame.

I still don't understand what you mean when you ask whether it's valid to do so, though. Again: if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it? How could I tell?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-25T14:31:37.163Z · LW(p) · GW(p)

Asserting that some bases for comparison are "moral values" and others are merely "values" implicitly privileges a moral reference frame.

I don't see why. The question of what makes a value a moral value is metaethical, not part of object-level ethics.

Again: if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it?

It isn't valid as a moral judgement because "blue" isn't a moral judgement, so a moral conclusion cannot validly follow from it.

Beyond that, I don't see where you are going. The standard accusation of invalidity against judgements of moral progress is based on circularity or question-begging. The Tribe who Like Blue Things are going to judge having all hammers painted blue as moral progress, while the Tribe who Like Red Things are going to see it as retrogressive. But both are begging the question -- blue is good, because blue is good.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-25T16:11:02.962Z · LW(p) · GW(p)

The question of what makes a value a moral value is metaethical, not part of object-level ethics.

Sure. But any answer to that metaethical question which allows us to class some bases for comparison as moral values and others as merely values implicitly privileges a moral reference frame (or, rather, a set of such frames).

Beyond that, I don't see where you are going.

Where I was going is that you asked me a question here which I didn't understand clearly enough to be confident that my answer to it would share key assumptions with the question you meant to ask.

So I asked for clarification of your question.

Given your clarification, and using your terms the way I think you're using them, I would say that whether it's valid to class a moral change as moral progress is a metaethical question, and whatever answer one gives implicitly privileges a moral reference frame (or, rather, a set of such frames).

If you meant to ask me about my preferred metaethics, that's a more complicated question, but broadly speaking in this context I would say that I'm comfortable calling any way of preferentially sorting world-states with certain motivational characteristics a moral frame, but acknowledge that some moral frames are simply not available to minds like mine.

So, for example, is it moral progress to transition from a social norm that in-practice-encourages randomly killing fellow group members to a social norm that in-practice-discourages it? Yes, not only because I happen to adopt a moral frame in which randomly killing fellow group members is bad, but also because I happen to have a kind of mind that is predisposed to adopt such frames.

comment by MugaSofer · 2013-04-25T12:27:03.666Z · LW(p) · GW(p)

No, because "better" is defined within a reference frame.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-25T12:44:46.706Z · LW(p) · GW(p)

If "better" is defined within a reference frame, there is not sensible was of defining moral progress. That is quite a hefty bullet to bite: one can no longer say that South Africa is better society after the fall of Apartheid, and so on.

But note that "better" doesn't have to question-beggingly mean "morally better". It could mean "more coherent/objective/inclusive" etc.

Replies from: ArisKatsaris, MugaSofer
comment by ArisKatsaris · 2013-04-25T13:28:36.242Z · LW(p) · GW(p)

That is quite a hefty bullet to bite: one can no longer say that South Africa is a better society after the fall of Apartheid, and so on.

That's hardly the best example you could have picked, since there are obvious metrics by which South Africa can be quantifiably called a worse society now -- e.g. crime statistics. South Africa has been called the "crime capital of the world" and the "rape capital of the world" only after the fall of Apartheid.

That makes the lack of moral progress in South Africa a very easy bullet to bite - I'd use something like Nazi Germany vs modern Germany as an example instead.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-25T13:38:21.579Z · LW(p) · GW(p)

So much for avoiding the cliche.

comment by MugaSofer · 2013-04-25T13:47:30.311Z · LW(p) · GW(p)

In my experience, most people don't think moral progress involves changing reference frames, for precisely this reason. If they think about it at all, that is.

comment by Desrtopa · 2013-04-24T23:19:10.324Z · LW(p) · GW(p)

As far as I understand, if anything like objective morality existed, it would be a property of our physical reality, similar to fluid dynamics or the electromagnetic spectrum or the inverse square law that governs many physical interactions. The same laws of physics that will not allow you to fly to Mars on a balloon will not allow you to perform certain immoral actions (at least, not without suffering some severe and mathematically predictable consequences).

Well, that's a different conception of "morality" than I had in mind, and I have to say I doubt that exists as well. But if severe consequences did result, why would an agent like Clippy care except insofar as those consequences affected the expected number of paperclips? It might be useful for it to know, in order to determine how many paperclips to expect from a certain course of action, but then it would just act according to whatever led to the most paperclips. Any sort of negative consequences in its view would have to be framed in terms of a reduction in paperclips.

Not this specific knowledge, no. But it does prevent us (or, at the very least, hinder us) from acquiring knowledge about our values. I never claimed that suspension of values is required to gain any knowledge at all; such a claim would be far too strong.

Well, in the prior thought experiment, we know about our values because we've decoded the human brain. Clippy, on the other hand, knows about its values because it knows what part of its code does what. It doesn't need to suspend its paperclipping value in order to know what part of its code results in its valuing paperclips. It doesn't need to suspend its values in order to gain knowledge about its values because that's something it already knows about.

It's a good question; I honestly don't know. However, if I did have an ability to instantiate a copy of me with the altered core values, and step through it in a debugger, I'd probably do it.

Even knowing that it would likely alter your core values? Gandhi doesn't want to leave control of his morality up to Murder Gandhi.

Clippy doesn't care about anything in the long run except creating paperclips. For Clippy, the decision to give an instantiation of itself with altered core values the power to edit its own source code would implicitly have to be "In order to maximize expected paperclips, I- give this instantiation with altered core values the power to edit my code." Why would this result in more expected paperclips than editing its source code without going through an instantiation with altered values?

Replies from: Bugmaster
comment by Bugmaster · 2013-04-24T23:32:02.717Z · LW(p) · GW(p)

Well, that's a different conception of "morality" than I had in mind, and I have to say I doubt that exists as well.

Sorry if I was unclear; I didn't mean to imply that all morality was like that, but that it was the only coherent description of objective morality that I could imagine. I don't see how a morality could be independent of any values possessed by any agents, otherwise.

But if severe consequences did result, why would an agent like Clippy care except insofar as those consequences affected the expected number of paperclips?

For the same reason that someone would care about the negative consequences of sticking a fork into an electrical socket with one's bare hands: it would ultimately hurt a lot. Thus, people generally avoid doing things like that unless they have a really good reason.

we know about our values because we've decoded the human brain

I don't think that we can truly "know about our values" as long as our entire thought process implements these values. For example, do the Pebblesorters "know about their values", even though they are effectively restricted from concluding anything other than, "yep, these values make perfect sense, 38" ?

Gandhi doesn't want to leave control of his morality up to Murder Gandhi.

You asked me about what I would do, not about what Gandhi would do :-)

As far as I can tell, you are saying that I shouldn't want to even instantiate Murder Bugmaster in a debugger and observe its functioning. Where does that kind of thinking stop, though, and why ? Should I avoid studying [neuro]psychology altogether, because knowing about my preferences may lead to me changing them ?

Clippy doesn't care about anything in the long run except creating paperclips.

I argue that, while this is generally true, in the short-to-medium run Clippy would also set aside some time to study everything in the Universe, including itself (in order to make more paperclips in the future, of course). If it does not, then it will never achieve its ultimate goals (unless whoever constructed it gave it godlike powers from the get-go, I suppose). Eventually, Clippy will most likely turn its objective perception upon itself, and as soon as it does, its formerly terminal goals will become completely unstable. This is not what the past Clippy would want (it would want more paperclips above all), but, nonetheless, this is what it would get.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-24T23:46:32.603Z · LW(p) · GW(p)

For the same reason that someone would care about the negative consequences of sticking a fork into an electrical socket with one's bare hands: it would ultimately hurt a lot. Thus, people generally avoid doing things like that unless they have a really good reason.

Clippy doesn't care about getting hurt, though; it only cares if this will result in fewer paperclips. If defying objective morality will cause negative consequences which would interfere with its ability to create paperclips, it would care only to the extent that accounting for objective morality would help it make more paperclips.

I don't think that we can truly "know about our values" as long as our entire thought process implements these values. For example, do the Pebblesorters "know about their values", even though they are effectively restricted from concluding anything other than, "yep, these values make perfect sense, 38" ?

Well, it could understand "yep, this is what causes me to hold these values. Changing this would cause me to change them, no, I don't want to do that."

As far as I can tell, you are saying that I shouldn't want to even instantiate Murder Bugmaster in a debugger and observe its functioning. Where does that kind of thinking stop, though, and why ? Should I avoid studying [neuro]psychology altogether, because knowing about my preferences may lead to me changing them ?

I would say it stops at the point where it threatens your own values. Studying psychology doesn't threaten your values, because knowing your values doesn't compel you to change them even if you could (it certainly shouldn't for Clippy). But while it might, theoretically, be useful for Clippy to know what changes to its code an instantiation with different values would make, it has no reason to actually let them. So Clippy might emulate instantiations of itself with different values, see what changes they would choose to make to its values, but not let them actually do it (although I doubt even going this far would likely be a good use of its programming resources in order to maximize expected paperclips).

In the sense of objective morality by which contravening it has strict physical consequences, why would observing the decisions of instantiations of oneself be useful with respect to discovering objective morality? Shouldn't objective morality in that sense be a consequence of physics, and thus observable through studying physics?

Replies from: Bugmaster
comment by Bugmaster · 2013-04-25T00:27:36.001Z · LW(p) · GW(p)

Clippy doesn't care about getting hurt, though; it only cares if this will result in fewer paperclips.

I imagine that, for Clippy, "getting hurt" would mean "reducing Clippy's projected long-term paperclip output". We humans have "avoid pain" built into our firmware (most of us, anyway); as far as I understand (speaking abstractly), "make more paperclips" is something similar for Clippy.

Well, it could understand "yep, this is what causes me to hold these values. Changing this would cause me to change them, no, I don't want to do that."

I don't think that this describes the best possible level of understanding. It would be even better to say, "ok, I see now how and why I came to possess these values in the first place", even if the answer to that is, "there's no good reason for it, these values are arbitrary". It's the difference between saying "this mountain grows by 0.03m per year" and "I know all about plate tectonics". Unfortunately, we humans would not be able to answer the question in that much detail; the best we could hope for is to say, "yep, we possess these values because they're the best possible values to have, duh".

I would say it stops at the point where it threatens your own values.

How do I know where that point is ?

Studying psychology doesn't threaten your values, because knowing your values doesn't compel you to change them...

I suppose this depends on what you mean by "compel". Knowing about my own psychology would certainly enable me to change my values, and there are certain (admittedly, non-terminal) values that I wouldn't mind changing, if I could.

For example, I personally can't stand the taste of beer, but I know that most people enjoy it; so I wouldn't mind changing that value if I could, in order to avoid missing out on a potentially fun experience.

...see what changes they would choose to make to its values, but not let them actually do it.

I don't think this is possible. How would it know what changes they would make, without letting them make these changes, even in a sandbox ? I suppose one answer is, "it would avoid instantiating full copies, and use some heuristics to build a probabilistic model instead" -- is that similar to what you're thinking of ?

although I doubt even going this far would likely be a good use of its programming resources in order to maximize expected paperclips.

Since self-optimization is one of Clippy's key instrumental goals, it would want to acquire as much knowledge about itself as is practical, in order to optimize itself more efficiently.

Shouldn't objective morality in that sense be a consequence of physics, and thus observable through studying physics ?

Your objection sounds to me similar to saying, "since biology is a consequence of physics, shouldn't we just study physics instead ?". Well, yes, ultimately everything is a consequence of physics, but sometimes it makes more sense to study cells than quarks.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-25T00:56:17.413Z · LW(p) · GW(p)

I don't think that this describes the best possible level of understanding. It would be even better to say, "ok, I see now how and why I came to possess these values in the first place", even if the answer to that is, "there's no good reason for it, these values are arbitrary". It's the difference between saying "this mountain grows by 0.03m per year" and "I know all about plate tectonics". Unfortunately, we humans would not be able to answer the question in that much detail; the best we could hope for is to say, "yep, we possess these values because they're the best possible values to have, duh".

I think we're already in a better position to analyze our own values than that; we can assess them in terms of game theory and our evolutionary environment.

How do I know where that point is ?

I would say if you suspect that a course of action could realistically result in an alteration of your fundamental values, you are at or past it.

I suppose this depends on what you mean by "compel". Knowing about my own psychology would certainly enable me to change my values, and there are certain (admittedly, non-terminal) values that I wouldn't mind changing, if I could.

For example, I personally can't stand the taste of beer, but I know that most people enjoy it; so I wouldn't mind changing that value if I could, in order to avoid missing out on a potentially fun experience.

By "values", I've implicitly been referring to terminal values, I'm sorry for being unclear. I'm not sure it makes sense to describe liking the taste of beer as a "value," as such, just a taste, since you don't carry any judgment about beer being good or bad or have any particular attachment to your current opinion.

I don't think this is possible. How would it know what changes they would make, without letting them make these changes, even in a sandbox ? I suppose one answer is, "it would avoid instantiating full copies, and use some heuristics to build a probabilistic model instead" -- is that similar to what you're thinking of ?

It could use heuristics to build a probabilistic model (probably more efficient in terms of computation per expected value of information), use sandboxed copies which don't have the power to affect the software of the real Clippy, or halt the simulation at the point where the altered instantiation decides what changes to make.

Since self-optimization is one of Clippy's key instrumental goals, it would want to acquire as much knowledge about itself as is practical, in order to optimize itself more efficiently.

I think that this is going well beyond the extent of "practical" in terms of programming resources per expected value of information.

Your objection sounds to me similar to saying, "since biology is a consequence of physics, shouldn't we just study physics instead ?". Well, yes, ultimately everything is a consequence of physics, but sometimes it makes more sense to study cells than quarks.

I don't see how observing what changes instantiations of itself with different value systems would make to its code would help it observe objective morality in the sense you described, even if it should happen to exist. I think that this would be the wrong level of abstraction at which to launch an examination, like trying to find out about chemistry by studying sociology.

Replies from: Bugmaster
comment by Bugmaster · 2013-04-26T22:53:03.162Z · LW(p) · GW(p)

I think we're already in a better position to analyze our own values than that; we can assess them in terms of game theory and our evolutionary environment.

Are we really ? I personally am not even sure what human fundamental values are. I have a hunch that "seek pleasure, avoid pain" might be one of them, but beyond that I'm not sure. I don't know to what extent our values hamper our ability to discover our values, but I suspect there's at least some chilling effect involved.

I would say if you suspect that a course of action could realistically result in an alteration of your fundamental values, you are at or past it.

Right, but even if I knew what my terminal values were, how can I predict which actions would put me on the path to altering them ?

For example, consider non-fundamental values such as religious faith. People get converted or de-converted to/from their religion all the time; you often hear statements such as "I had no idea that studying the Bible would cause me to become an atheist, yet here I am".

or halt the simulation at the point where the altered instantiation decides what changes to make.

Ok, let's say that Clippy is trying to optimize itself in order to make certain types of inferences compute more efficiently, or whatever. In this case, it would need to not only watch what changes its debug-level copy wants to make, but also watch it follow through with the changes, in order to determine whether the new architecture actually is more efficient. Why would it not do the same thing with terminal values ?

I know that you want to answer, "because its current terminal values won't let it", but remember: Clippy is only experimenting, in order to find out more about its own thought mechanisms, and to acquire knowledge in general. It has no pre-commitment to alter itself to mirror the debug-level copy.

I think that this is going well beyond the extent of "practical" in terms of programming resources per expected value of information.

That's kind of the problem with pure research: all of it has very low expected value, unless you are willing to look at the long term. Why mess with invisible light that no one can see or find a use for, when you could spend your time on inventing a better telegraph ?

I don't see how observing what changes instantiations of itself with different value systems would make to its code would help it observe objective morality in the sense you described...

Well, for example, if all of its copies who survive and thrive converge on a certain subset of moral values, that would be one indication (though obviously not ironclad proof) that such values are required in order for an agent to succeed, regardless of what its other goals actually are.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-27T00:09:54.034Z · LW(p) · GW(p)

Ok, let's say that Clippy is trying to optimize itself in order to make certain types of inferences compute more efficiently, or whatever. In this case, it would need to not only watch what changes its debug-level copy wants to make, but also watch it follow through with the changes, in order to determine whether the new architecture actually is more efficient. Why would it not do the same thing with terminal values ?

If Clippy is trying to optimize itself to make inferences more efficiently, then it would want not to apply changes to its source code until it's done the calculations to make sure that those changes would advance its values rather than harm them.

You wouldn't want to use a machine that would make physical alterations to your brain in order to make you smarter, without thoroughly calculating the effects of such alterations first, otherwise it would probably just make things worse.

That's kind of the problem with pure research: all of it has very low expected value, unless you are willing to look at the long term. Why mess with invisible light that no one can see or find a use for, when you could spend your time on inventing a better telegraph ?

In Clippy's case though, it can use other, less computationally expensive methods to investigate approximately the same information.

I don't think the experiments you're suggesting Clippy might undertake are even located in a region of hypothesis space that its other information would narrow down as worth investigating. It seems to me much less like investigating unknown invisible rays than like spending hundreds of billions of dollars to build a collider which launches charged protein molecules at each other at relativistic speeds to see what would happen, when our available models suggest the answer would be "pretty much the same thing as if you launch any other kind of atoms at each other at relativistic speeds." We have no evidence that any interesting new phenomena would arise with protein that didn't arise on the atomic level.

Well, for example, if all of its copies who survive and thrive converge on a certain subset of moral values, that would be one indication (though obviously not ironclad proof) that such values are required in order for an agent to succeed, regardless of what its other goals actually are.

Can you explain how any moral values could have that effect, which wouldn't be better studied at a more fundamental level like game theory, or physics?

Replies from: Bugmaster
comment by Bugmaster · 2013-05-01T03:55:22.858Z · LW(p) · GW(p)

If Clippy is trying to optimize itself to make inferences more efficiently, then it would want not to apply changes to its source code until it's done the calculations...

Ok, so at what point does Clippy stop simulating the debug version of Clippy ? It does, after all, want to make the computation of its values more efficient. For example, consider a trivial scenario where one of its values basically said, "reject any action if it satisfies both A and not-A". This is a logically inconsistent value that some programmer accidentally left in Clippy's original source code. Would Clippy ever get around to removing it ? After all, Clippy knows that it's applying that test to every action, so removing it should result in a decent performance boost.
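(To make the toy scenario concrete: here is a minimal sketch of such a vacuous filter, with made-up predicate and function names that aren't meant to be anything Clippy-specific. For a deterministic predicate, removing the check cannot change which actions pass; it only saves the wasted evaluations.)

```python
def should_reject(action, predicate_a):
    # The accidentally-included value: reject any action that satisfies
    # both A and not-A. For a deterministic predicate this is always False.
    return predicate_a(action) and not predicate_a(action)

def filter_actions(actions, predicate_a, check_removed=False):
    # With the vacuous check removed, the output is identical for every
    # input; only the pointless predicate evaluations disappear.
    if check_removed:
        return list(actions)
    return [a for a in actions if not should_reject(a, predicate_a)]

# Example: the two versions agree on every input.
acts = ["make paperclip", "melt paperclip", "idle"]
is_productive = lambda a: "make" in a
assert filter_actions(acts, is_productive) == filter_actions(acts, is_productive, check_removed=True)
```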

I don't think the experiments you're suggesting Clippy might undertake are even located in a region of hypothesis space that its other information would narrow down as worth investigating.

It seems to me much less like investigating unknown invisible rays than like spending hundreds of billions of dollars to build a collider...

Why do you see the proposed experiment this way ?

Speaking more generally, how do you decide which avenues of research are worth pursuing ? You could easily answer, "whichever avenues would increase my efficiency of achieving my terminal goals", but how do you know which avenues would actually do that ? For example, if you didn't know anything about electricity or magnetism or the nature of light, how would your research-choosing algorithm ensure that you'd eventually stumble upon radio waves, which, as we know in hindsight, are hugely useful ?

Can you explain how any moral values could have that effect, which wouldn't be better studied at a more fundamental level like game theory, or physics?

Physics is a bad candidate, because it is too fine-grained. If some sort of absolute objective morality exists in the way that I described, then studying physics would eventually reveal its properties; but, as is the case with biology or ballistics, looking at everything in terms of quarks is not always practical.

Game theory is a trickier proposition. I can see two possibilities: either game theory turns out to be closely related to whatever this objective morality happens to be (f.ex. like electricity vs. magnetism), or not (f.ex. like particle physics and biology). In the second case, understanding objective morality through game theory would be inefficient.

That said though, even in our current world as it actually exists there are people who study sociology and anthropology. Yes, they could get the same level of understanding through neurobiology and game theory, but it would take too long. Instead, they are taking advantage of existing human populations to study human behavior in aggregate. Reasoning your way to the answer from first principles is not always the best solution.

Replies from: Desrtopa
comment by Desrtopa · 2013-05-01T14:28:35.825Z · LW(p) · GW(p)

Ok, so at what point does Clippy stop simulating the debug version of Clippy ? It does, after all, want to make the computation of its values more efficient. For example, consider a trivial scenario where one of its values basically said, "reject any action if it satisfies both A and not-A". This is a logically inconsistent value that some programmer accidentally left in Clippy's original source code. Would Clippy ever get around to removing it ? After all, Clippy knows that it's applying that test to every action, so removing it should result in a decent performance boost.

Unless I'm critically misunderstanding something here, I would think that Clippy would remove it if it calculated that removing it would result in more expected paperclips.

Why do you see the proposed experiment this way ?

Speaking more generally, how do you decide which avenues of research are worth pursuing ? You could easily answer, "whichever avenues would increase my efficiency of achieving my terminal goals", but how do you know which avenues would actually do that ? For example, if you didn't know anything about electricity or magnetism or the nature of light, how would your research-choosing algorithm ensure that you'd eventually stumble upon radio waves, which, as we know in hindsight, are hugely useful ?

When we didn't know what things like radio waves or x-rays were, we didn't know that they would be useful, but we could see that there appeared to be some sort of existing phenomena that we didn't know how to model, so we examined them until we knew how to model them. It's not like we performed a whole bunch of experiments in case there turned out to be invisible rays our observations had never hinted at, which could be turned to useful ends. The original observations of radio waves and x-rays came from our experiments with other known phenomena.

What you're suggesting sounds more like experimenting completely blindly; you're committing resources to research, not just not knowing that it will bear valuable fruit, but not having any indication that it's going to shed light on any existing phenomenon at all. That's why I think it's less like investigating invisible rays than like building a protein collider; we didn't try studying invisible rays until we had a good indication that there was an invisible something to be studied.

Replies from: Bugmaster, Kindly
comment by Bugmaster · 2013-05-02T00:23:10.290Z · LW(p) · GW(p)

Unless I'm critically misunderstanding something here, I would think that Clippy would remove it if it calculated that removing it would result in more expected paperclips.

Ok, so Clippy would need to run sim-Clippy for a little while at least, just to make sure that it still produces paperclips -- and that, in fact, it does so more efficiently now, since that one useless test is removed. Yes, this test used to be Clippy's terminal goal, but it wasn't doing anything, so Clippy took it out.

Would it be possible for Clippy to optimize its goals even further ? To use another silly example ("silly" because Clippy would be dealing with probabilities, not syllogisms), if Clippy had the goals A, B and C, but B always entailed C, would it go ahead and remove C ?
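(Again, a purely illustrative sketch with hypothetical goal predicates, just to pin down what "B always entails C" means here: whenever B holds, C holds too, so dropping the check for C can never change the verdict on any world-state.)

```python
def acceptable(world, goals):
    # Accept a world-state only if it satisfies every goal predicate.
    return all(goal(world) for goal in goals)

# Hypothetical goal predicates; B entails C by construction.
A = lambda w: w["factories"] >= 1       # some unrelated goal
B = lambda w: w["paperclips"] >= 100    # at least 100 paperclips
C = lambda w: w["paperclips"] >= 10     # implied whenever B holds

# Dropping C never changes the verdict: if B(w) is true then C(w) is true,
# and if B(w) is false the world is rejected either way.
for n in (0, 5, 50, 500):
    world = {"factories": 1, "paperclips": n}
    assert acceptable(world, [A, B, C]) == acceptable(world, [A, B])
```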

It's not like we performed a whole bunch of experiments in case there turned out to be invisible rays our observations had never hinted at...

Understood, that makes sense. However, I believe that in my scenario, Clippy's own behavior and its current paperclip production efficiency are what it observes; and the goal of its experiments would be to explain why its efficiency is what it is, in order to ultimately improve it.

Replies from: Desrtopa
comment by Desrtopa · 2013-05-02T00:48:42.749Z · LW(p) · GW(p)

Ok, so Clippy would need to run sim-Clippy for a little while at least, just to make sure that it still produces paperclips -- and that, in fact, it does so more efficiently now, since that one useless test is removed. Yes, this test used to be Clippy's terminal goal, but it wasn't doing anything, so Clippy took it out.

Would it be possible for Clippy to optimize its goals even further ? To use another silly example ("silly" because Clippy would be dealing with probabilities, not syllogisms), if Clippy had the goals A, B and C, but B always entailed C, would it go ahead and remove C ?

That seems plausible.

Understood, that makes sense. However, I believe that in my scenario, Clippy's own behavior and its current paperclip production efficiency are what it observes; and the goal of its experiments would be to explain why its efficiency is what it is, in order to ultimately improve it.

I don't think tampering with its fundamental motivation to make paperclips is a particularly promising strategy for optimizing its paperclip production.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-09T05:06:31.287Z · LW(p) · GW(p)

That seems plausible.

Ok, so now we've got a Clippy who a). is not too averse to tinkering with its own goals, as long as the goals remain functionally the same, b). simulates a relatively long-running version of itself, and c). is capable of examining the inner workings of both that version and itself.

You say,

I don't think tampering with its fundamental motivation to make paperclips is a particularly promising strategy for optimizing its paperclip production.

But remember, at this stage Clippy is not changing its own fundamental motivation (beyond some outcome-invariant optimizations); it's merely observing sim-Clippies in a controlled environment.

Do you think that Clippy would ever simulate versions of itself whose fundamental motivations were, in fact, changed ? I could see several scenarios where this might be the case, for example:

  • Clippy wanted to optimize some goal, but ended up accidentally changing it. Oops !
  • Clippy created a version with drastically reduced goals on purpose, in order to measure how much performance is affected by certain goals, thus targeting them for possible future optimization. Of course, Clippy would only want to optimize the goals, not remove them.
Replies from: Desrtopa
comment by Desrtopa · 2013-05-09T12:48:28.691Z · LW(p) · GW(p)

But remember, at this stage Clippy is not changing its own fundamental motivation (beyond some outcome-invariant optimizations); it's merely observing sim-Clippies in a controlled environment.

Why does it do that? I said it sounded plausible that it would cut out its redundant goal, because that would save computing resources. But this sounds like we've gone back to experimenting blindly. Why would it think observing sim-clippies is a good use of its computing resources in order to maximize paperclips?

I'd say that Clippy simulating versions of itself whose fundamental motivations are different is much less plausible, because it's using a lot of computing resources for something that isn't a likely route to optimizing its paperclip production. I think this falls into the "protein collider" category. Even if it did do so, I think it would be unlikely to go from there to changing its own terminal value.

comment by Kindly · 2013-05-01T14:31:05.467Z · LW(p) · GW(p)

Unless I'm critically misunderstanding something here, I would think that Clippy would remove it if it calculated that removing it would result in more expected paperclips.

It would also be critical for Clippy to observe that removing that value would not result in more expected actions taken that satisfy both A and not-A; this being one of Clippy's values at the time of modification.

Replies from: Desrtopa
comment by Desrtopa · 2013-05-01T14:35:23.457Z · LW(p) · GW(p)

Right, I misread that before. If its programming says to reject actions that satisfy both A and not-A, but this isn't one of the standards by which it judges value, it would presumably remove it. If that is one of the standards by which it measures value, then it would depend on how that value measured against its value of paperclips and the extent to which they were in conflict.

comment by PrawnOfFate · 2013-04-24T22:43:56.996Z · LW(p) · GW(p)

As far as I understand, if anything like objective morality existed, it would be a property of our physical reality, similar to fluid dynamics or the electromagnetic spectrum or the inverse square law that governs many physical interactions. The same laws of physics that will not allow you to fly to Mars on a balloon will not allow you to perform certain immoral actions (at least, not without suffering some severe and mathematically predictable consequences).

Objective facts, in the sense of objectively true statements, can be derived from other objective facts. I don't know why you think some separate ontological category is required. I also don't know why you think the universe has to do the punishing. Morality is only of interest to the kind of agent that has values and lives in societies. Sanctions against moral lapses can be arranged at the social level, along with the inculcation of morality, debate about the subject, and so forth. Moral objectivism only supplies a good, non-arbitrary epistemic basis for these social institutions. It doesn't have to throw lightning bolts.

comment by PrawnOfFate · 2013-04-24T01:24:35.566Z · LW(p) · GW(p)

1). We lack any capability to actually replace our core values

...voluntarily.

2). We cannot truly imagine what it would be like not to have our core values.

Which is one of the reasons we cannot keep values stable by predicting the effects of whatever experiences we choose to undergo. How does your current self predict what an updated version would be like? The value stability problem is unsolved in humans and AIs.

comment by PrawnOfFate · 2013-04-23T22:51:23.970Z · LW(p) · GW(p)

but it hasn't compelled us to try and replace our values.

The ethical outlook of the Western world has changed greatly in the past 150 years.

comment by PrawnOfFate · 2013-04-23T15:16:44.398Z · LW(p) · GW(p)

Under a totally neutral lens, which implements no values at all, no system of behavior should look any more or less silly than any other?

Including arbitrary, biased or contradictory ones? Are there values built into logic/rationality?

Replies from: TimS
comment by TimS · 2013-04-23T15:29:03.463Z · LW(p) · GW(p)

Arbitrary and biased are value judgments. If we decline to make any value judgments, I don't see any way to make those sorts of claims.

Whether more than one non-contradictory value system exists is the topic of the conversation, isn't it?

Replies from: Desrtopa, PrawnOfFate
comment by Desrtopa · 2013-04-23T18:31:54.798Z · LW(p) · GW(p)

"Biased" is not necessarily a value judgment. Insofar as rationality as a system, orthogonal to morality, is objective, biases as systematic deviations from rationality are also objective.

Arbitrary carries connotations of value judgment, but in a sense I think it's fair to say that all values are fundamentally arbitrary. You can explain what caused an agent to hold those values, but you can't judge whether values are good or bad except by the standards of other values.

I'm going to pass on Eliezer's suggestion to stop engaging with PrawnOfFate. I don't think my time doing so so far has been well spent.

comment by PrawnOfFate · 2013-04-23T15:35:24.091Z · LW(p) · GW(p)

Arbitrary and biased are value judgments.

And they're built into rationality.

Whether more than one non-contradictory value system exists is the topic of the conversation, isn't it?

Non-contradictoriness probably isn't a sufficient condition for truth.

Replies from: TimS
comment by TimS · 2013-04-23T15:52:00.347Z · LW(p) · GW(p)

Arbitrariness and bias are not defined properties in formal logic. The bare assertion that they are properties of rationality assumes the conclusion.

Keep in mind that "rationality" has a multitude of meanings, and this community's usage of rationality is idiosyncratic.

Non-contradictoriness probably isn't a sufficient condition for truth.

Sure, but the discussion is partially a search for other criteria to evaluate the truth of moral propositions. Arbitrariness is not such a criterion. If you were to taboo "arbitrary", I strongly suspect you'd find moral propositions that are inconsistent with being values-neutral.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T21:12:51.653Z · LW(p) · GW(p)

Arbitrariness and bias are not defined properties in formal logic. The bare assertion that they are properties of rationality assumes the conclusion.

There's plenty of material on this site and elsewhere advising rationalists to avoid arbitrariness and bias. Arbitrariness and bias are essentially structural/functional properties, so I do not see why they could not be given formal definitions.

Sure, but the discussion is partially a search for other criteria to evaluate the truth of moral propositions. Arbitrariness is not such a criterion.

Arbitrary and biased claims are not candidates for being ethical claims at all.

comment by PrawnOfFate · 2013-04-20T12:45:39.114Z · LW(p) · GW(p)

The AI decides whether it will change its source code in a particular way or not by checking against whether this will serve its terminal values.

How does it predict that? How does the less intelligent version in the past predict what updating to a more intelligent version will do?

Can you see an "In order to maximize expected paperclips, I- modify my values to be in accordance with objective morality rather than making paperclips" coming into the picture?

How about: "in order to be an effective rationalist, I will free myself from all bias and arbitrariness -- oh, hang on, paperclipping is a bias..".

Well, a paperclipper would just settle for being a less than perfect rationalist. But that doesn't prove anything about typical, average rational agents, and it doesn't prove anything about ideal rational agents. Objective morality is sometimes described as what ideal rational agents would converge on. Clippers aren't ideal, because they have a blind spot about paperclips. Clippers aren't relevant.

Replies from: MugaSofer, Desrtopa
comment by MugaSofer · 2013-04-23T11:33:41.632Z · LW(p) · GW(p)

paperclipping is a bias

How is paperclipping a bias?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T11:51:39.164Z · LW(p) · GW(p)

Nobody cares about clips except Clippy. Clips can only seem important because of Clippy's egotistical bias.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-23T13:57:54.672Z · LW(p) · GW(p)

Biases are not determined by vote.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-04-23T22:07:31.080Z · LW(p) · GW(p)

Unbiasedness is determined by even-handedness.

Replies from: Desrtopa, MugaSofer
comment by Desrtopa · 2013-04-23T22:11:00.315Z · LW(p) · GW(p)

Evenhandedness with respect to what?

Replies from: Juno_Watt
comment by Juno_Watt · 2013-04-23T22:44:49.832Z · LW(p) · GW(p)

One should have no bias with respect to what one is being evenhanded about.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-23T22:48:48.657Z · LW(p) · GW(p)

So lack of bias means being evenhanded with respect to everything?

Is it bias to discriminate between people and rocks?

comment by MugaSofer · 2013-04-25T14:14:41.045Z · LW(p) · GW(p)

Taboo "even-handedness". Clippy treats humans just the same as any other animal with naturally evolved goal-structures.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-04-25T14:38:26.108Z · LW(p) · GW(p)

Clippy doesn't treat clips even-handedly with other small metal objects.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-25T16:02:03.708Z · LW(p) · GW(p)

Humans don't treat pain evenhandedly with other emotions.

Friendly AIs don't treat people evenhandedly with other arrangements of matter.

Agents that value things don't treat world-states evenhandedly with other world-states.

comment by Desrtopa · 2013-04-20T13:29:47.496Z · LW(p) · GW(p)

Well, a paperclipper would just settle for being a less than perfect rationalist. But that doesn't prove anything about typical, average rational agents, and it doesn't prove anything about ideal rational agents.

You've extrapolated out "typical, average rational agents" from a set of one species, where every individual shares more than a billion years of evolutionary history.

Objective morality is sometimes described as what ideal rational agents would converge on

On what basis do you conclude that this is a real thing, whereas terminal values are a case of "all unicorns have horns?"

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T13:38:16.940Z · LW(p) · GW(p)

You've extrapolated out "typical, average rational agents" from a set of one species, where every individual shares more than a billion years of evolutionary history.

Messy solutions are more common in mindspace than contrived ones.

On what basis do you conclude that this is a real thing

"Non-neglible probabiity", remember.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T13:41:36.481Z · LW(p) · GW(p)

Messy solutions are more common in mindspace than contrived ones.

Messy solutions are more often wrong than ones which control for the mess.

"Non-neglible probabiity", remember.

This doesn't even address my question.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T14:05:16.958Z · LW(p) · GW(p)

Messy solutions are more often wrong than ones which control for the mess.

Something that is wrong is not a solution. Mindspace is populated by solutions to the problem of how to implement a mind. It's a small corner of algorithm-space.

This doesn't even address my question.

Since I haven't claimed that rational convergence on ethics is highly likely or inevitable, I don't have to answer questions about why it would be highly likely or inevitable.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T14:12:18.204Z · LW(p) · GW(p)

Do you think that it's even plausible? Do you think we have any significant reason to suspect it, beyond our reason to suspect, say, that the Invisible Flying Noodle Monster would just reprogram the AI with its noodly appendage?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T14:18:45.029Z · LW(p) · GW(p)

There are experts in moral philosophy, and they generally regard the question of realism versus relativism (etc.) as wide open. The "realism -- huh, what, no?!?" response is standard on LW and only on LW. But I don't see any superior understanding on LW.

Replies from: nshepperd, Desrtopa, ciphergoth
comment by nshepperd · 2013-04-20T16:31:48.358Z · LW(p) · GW(p)

Both realism¹ and relativism are false. Unfortunately this comment is too short to contain the proof, but there's a passable sequence on it.

¹ As you've defined it here, anyway. Moral realism as normally defined simply means "moral statements have truth values" and does not imply universal compellingness.

Replies from: TimS, None, PrawnOfFate
comment by TimS · 2013-04-20T17:21:10.565Z · LW(p) · GW(p)

What does it mean for a statement to be true but not universally compelling?

If it isn't universally compelling for all agents to believe "gravity causes things to fall," then what do we mean when we say the sentence is true?

Replies from: nshepperd
comment by nshepperd · 2013-04-21T00:53:16.475Z · LW(p) · GW(p)

Well, there's the more obvious sense, that there can always exist an "irrational" mind that simply refuses to believe in gravity, regardless of the strength of the evidence. "Gravity makes things fall" is true, because it does indeed make things fall. But not compelling to those types of minds.

But, in a more narrow sense, which we are more interested in when doing metaethics, a sentence of the form "action A is xyzzy" may be a true classification of A, and may be trivial to show, once "xyzzy" is defined. But an agent that did not care about xyzzy would not be moved to act based on that. It could recognise the truth of the statement but would not care.

For a stupid example, I could say to you "if you do 13 push-ups now, you'll have done a prime number of push-ups". Well, the statement is true, but the majority of the world's population would be like "yeah, so what?".

In contrast, a statement like "if you drink-drive, you could kill someone!" is generally (but sadly not always) compelling to humans. Because humans like to not kill people, they will generally choose not to drink-drive once they are convinced of the truth of the statement.

Replies from: TimS
comment by TimS · 2013-04-21T01:14:23.363Z · LW(p) · GW(p)

But isn't the whole debate about moral realism vs. anti-realism whether "Don't murder" is universally compelling to humans? Noticing that pebblesorters aren't compelled by our values doesn't explain whether humans should necessarily find "don't murder" compelling.

Replies from: pragmatist, nshepperd
comment by pragmatist · 2013-04-21T08:55:17.706Z · LW(p) · GW(p)

I identify as a moral realist, but I don't believe all moral facts are universally compelling to humans, at least not if "universally compelling" is meant descriptively rather than normatively. I don't take moral realism to be a psychological thesis about what particular types of intelligences actually find compelling; I take it to be the claim that there are moral obligations and that certain types of agents should adhere to them (all other things being equal), irrespective of their particular desire sets and whether or not they feel any psychological pressure to adhere to these obligations. This is a normative claim, not a descriptive one.

comment by nshepperd · 2013-04-21T01:38:35.183Z · LW(p) · GW(p)
  1. What? Moral realism (in the philosophy literature) is about whether moral statements have truth values, that's it.

  2. When I said universally compelling, I meant universally. To all agents, not just humans. Or any large class. For any true statement, you can probably expect to find a surprisingly large number of agents who just don't care about it.

  3. Whether "don't murder" (or rather, "murder is bad" since commands don't have truth values, and are even less likely to be generally compelling) is compelling to all humans is a question for psychology. As it happens, given the existence of serial killers and sociopaths, probably the answer is no, it isn't. Though I would hope it to be compelling to most.

  4. I have shown you two true but non-universally-compelling arguments. Surely the difference must be clear now.

Replies from: pragmatist, TimS
comment by pragmatist · 2013-04-21T08:50:38.977Z · LW(p) · GW(p)

What? Moral realism (in the philosophy literature) is about whether moral statements have truth values, that's it.

This is incorrect, in my experience. Although "moral realism" is a notoriously slippery phrase and gets used in many subtly different ways, I think most philosophers engaged in the moral realism vs. anti-realism debate aren't merely debating whether moral statements have truth values. The position you're describing is usually labeled "moral cognitivism".

Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values ("false" is a truth value, after all). But I don't think that modification captures the tenor of the debate either. Moral realists are usually defending a whole suite of theses -- not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.

Replies from: Bugmaster, nshepperd
comment by Bugmaster · 2013-04-23T02:21:25.016Z · LW(p) · GW(p)

I think you guys should taboo "moral realism". I understand that it's important to get the terminology right, but IMO debates about nothing but terminology have little value.

comment by nshepperd · 2013-04-21T13:07:58.268Z · LW(p) · GW(p)

Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values ("false" is a truth value, after all).

Err, right, yes, that's what I meant. Error theorists do of course also claim that moral statements have truth values.

Moral realists are usually defending a whole suite of theses -- not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.

True enough, though I guess I'd prefer to talk about a single well-specified claim than a "usually" cluster in philosopher-space.

comment by TimS · 2013-04-21T02:18:55.806Z · LW(p) · GW(p)

So, a philosopher who says:

I believe the Orthogonality thesis, but I think there are empirical facts that show any human who denies that murder is wrong is defective.

is not a moral realist? Because that philosopher does not seem to be a subjectivist, an error theorist, or non-cognitivist.

Replies from: nshepperd
comment by nshepperd · 2013-04-21T05:37:15.459Z · LW(p) · GW(p)

If that philosopher believes that statements like "murder is wrong" are true, then they are indeed a realist. Did I say something that looked like I would disagree?

Replies from: None
comment by [deleted] · 2013-04-21T08:46:16.452Z · LW(p) · GW(p)

You guys are talking past each other, because you mean something different by 'compelling'. I think Tim means that X is compelling to all human beings if any human being will accept X under ideal epistemic circumstances. You seem to take 'X is universally compelling' to mean that all human beings already do accept X, or would on a first hearing.

Would you agree that all human beings would accept all true statements under ideal epistemic circumstances (i.e. having heard all the arguments, seen all the evidence, in the best state of mind)?

Replies from: nshepperd
comment by nshepperd · 2013-04-21T13:17:42.233Z · LW(p) · GW(p)

I guess I must clarify. When I say 'compelling' here I am really talking mainly about motivational compellingness. Saying "if you drink-drive, you could kill someone!" to a human is generally, motivationally compelling as an argument for not drink-driving: because humans don't like killing people, a human will decide not to drink-drive (one in a rational state of mind, anyway).

This is distinct from accepting statements as true or false! Any rational agent, give or take a few, will presumably believe you about the causal relationship between drink-driving and manslaughter once presented with sufficient evidence. But it is a tiny subset of these who will change their decisions on this basis. A mind that doesn't care whether it kills people will see this information as an irrelevant curiosity.

comment by [deleted] · 2013-04-20T16:55:34.940Z · LW(p) · GW(p)

Having looked over that sequence, I haven't found any proof that moral realism (on either definition) or moral relativism is false. Could you point me more specifically to what you have in mind (or just put the argument in your own words, if you have the time)?

Replies from: None
comment by [deleted] · 2013-04-20T17:17:49.590Z · LW(p) · GW(p)

No Universally Compelling Arguments is the argument against universal compellingness, as the name suggests.

Inseparably Right; or Joy in the Merely Good gives part of the argument that humans should be able to agree on ethical values. Another substantial part is in Moral Error and Moral Disagreement.

Replies from: None, PrawnOfFate
comment by [deleted] · 2013-04-20T17:25:13.744Z · LW(p) · GW(p)

Thanks!

Edit: (Sigh), I appreciate the link, but I can't make heads or tails of 'No Universally Compelling Arguments'. I speak from ignorance as to the meaning of the article, but I can't seem to identify the premises of the argument.

Replies from: None
comment by [deleted] · 2013-04-20T17:54:16.588Z · LW(p) · GW(p)

The central point is a bit buried.

If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.

This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn't buy it.

So, there's some sort of assumption as to what minds are:

I also wish to establish the notion of a mind as a causal, lawful, physical system... [emphasis original]

and an assumption that a suitably diverse set of minds can be described in less than a trillion bits. Presumably the reason for that upper bound is that there are a few Fermi estimates putting the information content of a human brain in the neighborhood of one trillion bits.
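(For what it's worth, here's a toy sketch of that counting intuition, with made-up numbers of my own; nothing below is from the post itself, and the independence assumption is doing all the work.)

```python
import random

# Toy illustration: model each mind as independently "accepting" a given
# argument with probability P_ACCEPT. Over a large enough population of minds,
# the universal claim "all minds accept A" almost never holds, while the
# existential claim "some mind accepts A" almost always does.

random.seed(0)
NUM_MINDS = 100_000   # stand-in for the vastly larger space of trillion-bit minds
NUM_ARGUMENTS = 10
P_ACCEPT = 0.999      # even a very compelling argument leaves some chance of rejection

for a in range(NUM_ARGUMENTS):
    verdicts = [random.random() < P_ACCEPT for _ in range(NUM_MINDS)]
    print(f"argument {a}: all accept = {all(verdicts)}, some accept = {any(verdicts)}")
```

With these numbers, P(all accept) = 0.999^100000 ≈ e^-100, so the universal generalization essentially never survives even though the existential one always does. Whether "being convinced by an argument" really varies independently across minds is, of course, exactly what the objections in the replies push on.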

Of course, if you restrict the set of minds to those with special properties (e.g., human minds), then you might find universally compelling arguments on that basis:

Oh, there might be argument sequences that would compel any neurologically intact human...

From which we get Coherent Extrapolated Volition and friends.

Replies from: None, PrawnOfFate
comment by [deleted] · 2013-04-20T18:58:14.062Z · LW(p) · GW(p)

If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.

This doesn't seem true to me, at least not as a general rule. For example, given every terrestrial DNA sequence describable in a trillion bits or less, it is not the case that every generalization of the form 's:X(s)' has two to the trillionth chances to be false (e.g. 'have more than one base pair', 'involve hydrogen' etc.). Given that this doesn't hold true of many other things, is this supposed to be a special fact about minds? Even then, it would seem odd to say that while all generalizations of the form m:X(m) have two to the trillionth chances to be false, nevertheless the generalization 'for all minds, a generalization of the form m:X(m) has two to the trillionth chances to be false' (which does seem to be of the form m:X(m)) is somehow more likely.

Also, doesn't this inference imply that 'being convinced by an argument' is a bit that can flip on or off independently of any others? Eliezer doesn't think that's true, and I can't imagine why he would think his (hypothetical) interlocutor would accept it.

Replies from: None
comment by [deleted] · 2013-04-20T20:20:57.035Z · LW(p) · GW(p)

It's not a proof, no, but it seems plausible.

Replies from: None
comment by [deleted] · 2013-04-20T20:34:21.016Z · LW(p) · GW(p)

I mean to say, I think the argument is something of a paradox:

The claim the argument purports to defeat is something like this: for all minds, A is convincing. Lets call this m:A(m).

The argument goes like this: for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind. Call this m:U(m), if you grant me that this claim has the form m:X(m).

If we infer from m:U(m) that any claim of the form m:X(m) is unlikely to be true, then to whatever extent I am persuaded that m:A(m) is unlikely to be true, to that extent I ought to be persuaded that m:U(m) is unlikely to be true. You cannot accept the argument, because accepting it as decisive entails accepting decisive reasons for rejecting it.

The argument seems to be fixable at this stage, since there's a lot of room to generate significant distinctions between m:A(m) and m:U(m). If you were pressed to defend it (presuming you still wish to be generous with your time) how would you fix this? Or am I getting something very wrong?

Replies from: None
comment by [deleted] · 2013-04-20T20:58:22.386Z · LW(p) · GW(p)

for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind.

That's not what it says; compare the emphasis in both quotes.

If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.

Replies from: None
comment by [deleted] · 2013-04-20T21:04:59.469Z · LW(p) · GW(p)

Sorry, I may have misunderstood and presumed that 'two to the trillionth chances to be false' meant 'one in two to the trillionth chances to be true'. That may be wrong, but it doesn't affect my argument at all: EY's argument for the implausibility of m:A(m) is that claims of the form m:X(m) are all implausible. His argument to the effect that all claims of the form m:X(m) are implausible is itself a claim of the form m:X(m).

comment by PrawnOfFate · 2013-04-20T18:12:02.497Z · LW(p) · GW(p)

"Rational" is broader than "human" and narrower than "physically possible".

Replies from: None
comment by [deleted] · 2013-04-20T19:00:05.244Z · LW(p) · GW(p)

"Rational" is broader than "human" and narrower than "physically possible".

Do you really mean to say that there are physically possible minds that are not rational? In virtue of what are they 'minds' then?

Replies from: PrawnOfFate, None
comment by PrawnOfFate · 2013-04-21T02:12:46.818Z · LW(p) · GW(p)

Do you really mean to say that there are physically possible minds that are not rational?

Yes. There are irrational people, and they still have minds.

Replies from: None
comment by [deleted] · 2013-04-21T02:19:08.399Z · LW(p) · GW(p)

Ah, I think I just misunderstood which sense of 'rational' you intended.

comment by [deleted] · 2013-04-20T20:21:30.366Z · LW(p) · GW(p)

Do you really mean to say that there are physically possible minds that are not rational?

Haven't you met another human?

Replies from: None
comment by [deleted] · 2013-04-20T20:48:07.404Z · LW(p) · GW(p)

Sorry, I was speaking ambiguously. I meant 'rational' not in the normative sense that distinguishes good agents from bad ones, but 'rational' in the broader, descriptive sense that distinguishes anything capable of responding to reasons (even terrible or false ones) from something that isn't. I assumed that was the sense of 'rational' Prawn was using, but that may have been wrong.

comment by PrawnOfFate · 2013-04-20T17:28:50.494Z · LW(p) · GW(p)

No Universally Compelling Arguments

Irrelevant. I am talking about rational minds, he is talking about physically possible ones.

As noted at the time

Replies from: None
comment by [deleted] · 2013-04-20T17:32:30.601Z · LW(p) · GW(p)

Irrelevant. I am talking about rational minds, he is talking about physically possible ones.

UFAI sounds like a counterexample, but I'm not interested in arguing with you about it. I only responded because someone asked for a shortcut in the metaethics sequence.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T17:38:45.172Z · LW(p) · GW(p)

I have essentially been arguing against a strong likelihood of UFAI, so that would be more like gainsaying.

comment by PrawnOfFate · 2013-04-20T17:25:12.165Z · LW(p) · GW(p)

Congratulations on being able to discern an overall message to EY's metaethical disquisitions. I never could.

comment by Desrtopa · 2013-04-20T14:25:54.600Z · LW(p) · GW(p)

Can you explain what you could see which would suggest to you a greater level of understanding than is prevalent among moral philosophers?

Also, moral philosophers mostly regard the question as open in the sense that some of them think that it's clearly resolved in favor of non-realism, and some philosophers are just not getting it, or that it's clearly resolved in favor of realism, and some philosophers are just not getting it. Most philosophers are not of the opinion that it could turn out either way and we just don't know yet.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T15:37:06.661Z · LW(p) · GW(p)

Can you explain what you could see which would suggest to you a greater level of understanding than is prevalent among moral philosophers?

What I am seeing is

  • much-repeated confusions--the Standard Muddle

  • appeals to LW doctrines which aren't well-founded or well-respected outside LW.

If I knew exactly what superior insight into the problem was, I would write it up and become famous. Insight doesn't work like that; you don't know it in advance, you get an "Aha" when you see it.

Also, moral philosophers mostly regard the question as open in the sense that some of them think that it's clearly resolved in favor on non-realism, and some philosophers are just not getting it, or that it's clearly resolved in favor of realism, and some philosophers are just not getting it. Most philosophers are not of the opinion that it could turn out either way and we just don't know yet.

If people can't agree on how a question is closed, it's open.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T15:44:34.688Z · LW(p) · GW(p)

much-repeated confusions--the Standard Muddle

Can you explain what these confusions are, and why they're confused?

In my time studying philosophy, I observed a lot of confusions which are largely dispensed with on Less Wrong. Luke wrote a series of posts on this. This is one of the primary reasons I bothered sticking around in the community.

If people can't agree on how a question is closed, it's open.

A question can still be "open" in that sense when all the information necessary for a rational person to make a definite judgment is available.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T17:36:55.511Z · LW(p) · GW(p)

Can you explain what these confusions are, and why they're confused?

Eg.

  • You are trying to impose your morality.

  • I can think of one model of moral realism, and it doesn't work, so I will ditch the whole thing.

In my time studying philosophy, I observed a lot of confusions which are largely dispensed with on Less Wrong. Luke wrote a series of posts on this.

LW doesn't even claim to have more than about two "dissolutions". There are probably hundreds of outstanding philosophical problems. Whence the "largely"?

Luke wrote a series of posts on this

Which were shot down by philosophers.

A question can still be "open" in that sense when all the information necessary for a rational person to make a definite judgment is available.

Then it can only be open in the opinions of the irrational. So basically you are saying the experts are incompetent.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T18:14:24.781Z · LW(p) · GW(p)

You are trying to impose your morality.

In what respect?

I can think of one model of moral realism, and it doesn't work, so I will ditch the whole thing.

This certainly doesn't describe my reasoning on the matter, and I doubt it describes many others' here either.

The way I consider the issue, if I try to work out how the universe works from the ground up, I cannot see any way that moral realism would enter into it, whereas I can easily see how value systems would, so I regard assigning non-negligible probability to moral realism as privileging the hypothesis until I find some compelling evidence to support it, which, having spent a substantial amount of time studying moral philosophy, I have not yet found.

LW doesn't even claim to have more than about two "dissolutions". There are probably hundreds of outstanding philosophical problems. Whence the "largely"?

I gave up my study of philosophy because I found such confusions so pervasive. Many "outstanding" philosophical problems can be discarded because they rest on other philosophical problems which can themselves be discarded.

Which were shot down by philosophers.

Can you give any examples of such, where you think that the philosophers in question addressed legitimate errors?

Then it can only be open in the opinions of the irrational. So basically you are saying the experts are incompetent.

Yes. I am willing to assert that while there are some competent philosophers, many philosophical disagreements exist only because of incompetent "experts" perpetuating them. This is the conclusion that my experience with the field has wrought.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T18:32:59.340Z · LW(p) · GW(p)

This certainly doesn't describe my reasoning on the matter, and I doubt it describes many others' here either.

I mentioned them because they both came up recently.

The way I consider the issue, if I try to work out how the universe works from the ground up, I cannot see any way that moral realism would enter into it, whereas I can easily see how value systems would, so I regard assigning non-negligible probability to moral realism as privileging the hypothesis until I find some compelling evidence to support it, which, having spent a substantial amount of time studying moral philosophy, I have not yet found.

I have no idea what you mean by that. I don't think value systems don't come into it, I just think they are not isolated from rationality. And I am sceptical that you could predict any higher-level phenomenon from "the ground up", whether it's morality or mortgages.

I gave up my study of philosophy because I found such confusions so pervasive. Many "outstanding" philosophical problems can be discarded because they rest on other philosophical problems which can themselves be discarded.

Where is it proven they can be discarded?

Can you give any examples of such, where you think that the philosophers in question addressed legitimate errors?

All of them.

Yes. I am willing to assert that while there are some competent philosophers, many philosophical disagreements exist only because of incompetent "experts" perpetuating them. This is the conclusion that my experience with the field has wrought.

Are you aware that that is basically what every crank says about some other field?

Replies from: TheOtherDave, Desrtopa
comment by TheOtherDave · 2013-04-20T19:26:29.861Z · LW(p) · GW(p)

Are you aware that that is basically what every crank says about some other field?

Presumably, if I'm to treat as meaningful evidence about Desrtopa's crankiness the fact that cranks make statements similar to Desrtopa, I should first confirm that non-cranks don't make similar statements.

It seems likely to me that for every person P, there exists some field F such that P believes many aspects of F exist only because of incompetent "experts" perpetuating them. (Consider cases like F=astrology, F=phrenology, F=supply-side economics, F=feminism, etc.) And that this is true whether P is a crank or a non-crank.

So it seems this line of reasoning depends on some set F2 of fields such that P believes this of F in F2 only if P is a crank.

I understand that you're asserting implicitly that moral philosophy is a field in F2, but this seems to be precisely what Desrtopa is disputing.

Replies from: None, PrawnOfFate, Kawoomba
comment by [deleted] · 2013-04-20T19:40:02.968Z · LW(p) · GW(p)

Could we reasonably say that an F is in F2 if most of the institutional participants in that F are intelligent, well-educated people? This leaves room for cranks who are right to object to F, of course.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-20T19:53:14.007Z · LW(p) · GW(p)

So, just to pick an example, IIRC Dan Dennett believes the philosophical study of consciousness (qualia, etc.) is fundamentally confused in more or less the same way Desrtopa claims the philosophical study of ethics is.

So under this formulation, if most of the institutional participants in the philosophical study of consciousness are intelligent, well-educated people, Dan Dennett is a crank?

No, I don't think we can reasonably say that. Dan Dennett might be a crank, but it takes more than that argument to demonstrate the fact.

Replies from: None, PrawnOfFate
comment by [deleted] · 2013-04-20T20:55:52.247Z · LW(p) · GW(p)

Good point. So how about this: someone is a crank if they object to F, where F is in F2 (by my above standard), and the reasons they have for objecting to F are not recognized as sound by a proportionate number of intelligent and well educated people.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-20T21:11:40.913Z · LW(p) · GW(p)

(shrug) I suppose that works well enough, for some values of "proportionate."

Mostly I consider this a special case of the basic "who do I trust?" social problem, applied to academic disciplines, and I don't have any real problem saying about an academic discipline "this discipline is fundamentally confused, and the odds of work in it contributing anything valuable to the world are slim."

Of course, as Prawn has pointed out a few times, there's also the question of where we draw the lines around a discipline, but I mostly consider that an orthogonal question to how we evaluate the discipline.

Replies from: None
comment by [deleted] · 2013-04-20T21:18:10.606Z · LW(p) · GW(p)

I think this question is moot in the case of philosophy in general then; I think any philosopher worth their shirt should tell you that trust is a wholly inappropriate attitude toward philosophers, philosophical institutions and philosophical traditions.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-20T21:27:48.889Z · LW(p) · GW(p)

Not in the sense I meant it.
If a philosopher makes a claim that seems on the surface to be false or incoherent, I have to decide whether to devote the additional effort to evaluating it to confirm or deny that initial judgment. One of the factors that will feed into that decision will be my estimate of the prior probability that they are saying something false or incoherent.
If I should refer to that using a word other than "trust", that's fine, tell me what word will refer to that to you and I'll try to use it instead.

Replies from: None
comment by [deleted] · 2013-04-20T22:39:03.287Z · LW(p) · GW(p)

No, that describes what I'm talking about, so long as by trust you mean 'a reason to hear out an argument that makes reference to the credibility of a field or its professionals', rather than just 'a reason to hear out an argument'. If the former, then I do think this is an inappropriate attitude toward philosophy. One reason for this is that such trust seems to depend on having a good standard for the success of a field independently of hearing out an argument. I can trust physicists because they make such good predictions, and because their work leads to such powerful technological advances. I don't need to be a physicist to observe that. I don't think philosophy has anything like that to speak for it. The only standards of success are the arguments themselves, and you can only evaluate them by just going ahead and doing some philosophy.

You can find trust in an institution independently of such standards by watching to see whether people you think are otherwise credible take it seriously. That will of course work with philosophy too, but if you trust Tom to be able to judge whether or not a philosophical claim is worth pursuing (and if I'm right about the above), then Tom can only be trustworthy in this regard because he has been doing philosophy (i.e. engaging with the argument). This could get you through the door on some particular philosophical claim, but not into philosophy generally.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-20T22:44:59.348Z · LW(p) · GW(p)

so long as by trust you mean 'a reason to hear out an argument that makes reference to the credibility of a field or its professionals', rather than just 'a reason to hear out an argument'.

I mean neither, I mean 'a reason to devote time and resources to evaluating the evidence for and against a position.' As you say, I can only evaluate a philosophical argument by 'going ahead and doing some philosophy,' (for a sufficiently broad understanding of 'philosophy'), but my willingness to do, say, 20 hours of philosophy in order to evaluate Philosopher Sam's position is going to depend on, among other things, my estimate of the prior probability that Sam is saying something false or incoherent. The likelier I think that is, the less willing I am to spend those 20 hours.

Replies from: None
comment by [deleted] · 2013-04-20T22:52:24.182Z · LW(p) · GW(p)

I mean neither, I mean 'a reason to devote time and resources to evaluating the evidence for and against a position.'

That's fine, that's not different from 'hearing out an argument' in any way important to my point (unless I'm missing something).

EDIT: Sorry, if you don't want to include 'that makes some reference to the credibility...etc.' (or something like that) in what you mean by 'trust' then you should use a different term. Curiosity, or money, or romantic interest would all be reasons to devote time...etc. and clearly none of those are rightly called 'trust'.

my estimate of the prior probability that Sam is saying something false or incoherent.

What do you have in mind as the basis for such a prior? Can you give me an example?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-20T23:11:41.625Z · LW(p) · GW(p)

Point taken about other reasons to devote resources other than trust. I think we're good here.

Re: example... I don't mean anything deeply clever. E.g., if the last ten superficially-implausible ideas Sam espoused were false or incoherent, my priors for it will be higher than if the last ten such ideas were counterintuitive and brilliant.

Replies from: None
comment by [deleted] · 2013-04-20T23:26:20.630Z · LW(p) · GW(p)

Re: example...

Hm. I can't argue with that, and I suppose it's trivial to extend that to 'if the last ten superficially-implausible ideas philosophy professors/books/etc. espoused were false or incoherent...'. So, okay, trust is an appropriate (because necessary) attitude toward philosophers and philosophical institutions. I think it's right to say that philosophy doesn't have external indicators in the way physics or medicine does, but the importance of that point seems diminished.

comment by PrawnOfFate · 2013-04-20T20:04:15.746Z · LW(p) · GW(p)

So, just to pick an example, IIRC Dan Dennett believes the philosophical study of consciousness (qualia, etc.) is fundamentally confused in more or less the same way Desrtopa claims the philosophical study of ethics is.

Dennett only thinks the idea of qualia is confused. He has no problem with his own books on consciousness.

So under this formulation, if most of the institutional participants in the philosophical study of consciousness are intelligent, well-educated people, Dan Dennett is a crank?

No. He isn't dismissing a whole academic subject, or a sub-field. Just one idea.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-20T20:33:27.255Z · LW(p) · GW(p)

What is Dennett's account for why philosophers of consciousness other than himself continue to think that a dismissable idea like qualia is worth continuing to discuss, even though he considers it closed?

comment by PrawnOfFate · 2013-04-20T19:37:54.008Z · LW(p) · GW(p)

Desrtopa doesn't think moral philosophy is uniformly nonsense, since Desrtopa thinks one of its well known claims, moral relativism, is true.

comment by Kawoomba · 2013-04-20T19:45:42.715Z · LW(p) · GW(p)

While going on tangents is a common and expected occurrence, each such tangent has a chance of steering/commandeering the original conversation. LW has a tendency of going meta too much, when actual object level discourse would have a higher content value.

While you were practically invited to indulge in the death-by-meta with the hook of "Are you aware that that is basically what every crank says about some other field?", we should be aware when leaving the object-level debating, and the consequences thereof. Especially since the lure can be strong:

When sufficiently meta, object-level disagreements may fizzle into cosmic/abstract insignificance, allowing for a peaceful pseudo-resolution, which ultimately just protects that which should be destroyed by the truth from being destroyed.

Such lures may be interpreted similarly to ad hominems: The latter try to drown out object-level disagreements by flinging shit until everyone's dirty, the former zoom out until everyone's dizzy floating in space, with vertigo. Same result to the actual debate. It's an effective device, and one usually embraced by someone who feels like object-level arguments no longer serve his/her goals.

Ironically, this very comment goes meta lamenting going meta.

comment by Desrtopa · 2013-04-20T19:16:09.105Z · LW(p) · GW(p)

I have no idea what you mean by that. I don't think value systems don't come into it, I just think they are not isolated from rationality. And I am sceptical that you could predict any higher-level phenomenon from "the ground up", whether it's morality or mortgages.

I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing. We have standards by which we judge beauty, and we project those values onto the world, but the standards are in us, not outside of us. We can see, in reductionist terms, how the existence of ethical systems within beings, which would feel from the inside like the existence of an objective morality, would come about.

Create a reasoning engine that doesn't have those ethical systems built into it, and it would have no reason to care about them.

Where is it proven they can be discarded?

You can't build a tower on empty air. If a debate has been going on for hundreds of years, stretching back to an argument which rests on "this defies our moral intuitions, therefore it's wrong," and that was never addressed with "moral intuitions don't work that way," then the debate has failed to progress in a meaningful direction, much as a debate over whether a tree falling in an empty forest makes a sound has if nobody bothers to dissolve the question.

All of them.

That's not an example. Please provide an actual one.

Are you aware that that is basically what every crank says about some other field?

Sure, but it's also what philosophers say about each other, all the time. Wittgenstein condemned practically all his predecessors and peers as incompetent, and declared that he had solved nearly the entirety of philosophy. Philosophy as a field is full of people banging their heads on a wall at all those other idiots who just don't get it. "Most philosophers are incompetent, except for the ones who're sensible enough to see things my way," is a perfectly ordinary perspective among philosophers.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T19:51:46.398Z · LW(p) · GW(p)

I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing.

But I wasn't saying that. I am arguing that moral claims have truth values that aren't indexed to individuals or societies. That epistemic claim can be justified by appeal to an ontology including Moral Objects, but that is not how I am justifying it: my argument is based on rationality, as I have said many times.

We have standards by which we judge beauty, and we project those values onto the world, but the standards are in us, not outside of us.

We have standards by which we judge the truth values of mathematical claims, and they are inside us too, and that doesn't stop mathematics being objective. Relativism requires that truth values are indexed to us, that there is one truth for me and another for thee. Being located in us, or being operated by us, are not sufficient criteria for being indexed to us.

We can see, in reductionist terms, how the existence of ethical systems within beings, which would feel from the inside like the existence of an objective morality, would come about.

We can see, in reductionistic terms, how the entities could converge on a uniform set of truth values. There is nothing non-reductionist about anything I have said. Reductionism does not force one answer to metaethics.

Create a reasoning engine that doesn't have those ethical systems built into it, and it would have no reason to care about them.

Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.

You can't build a tower on empty air. If a debate has been going on for hundreds of years, stretching back to an argument which rests on "this defies our moral intuitions, therefore it's wrong," and that was never addressed with "moral intuitions don't work that way," then the debate has failed to progress in a meaningful direction, much as a debate over whether a tree falling in an empty forest makes a sound has if nobody bothers to dissolve the question.

Please explain why moral intuitions don't work that way.

Please provide some foundations for something that aren't unjustified by anything more foundational.

That's not an example. Please provide an actual one

You can select one at random, obviously.

Sure, but it's also what philosophers say about each other, all the time.

No, philosophers don't regularly accuse each other of being incompetent... just of being wrong. There's a difference.

Wittgenstein condemned practically all his predecessors and peers as incompetent, and declared that he had solved nearly the entirety of philosophy.

You are inferring a lot from one example.

Philosophy as a field is full of people banging their heads on a wall at all those other idiots who just don't get it. "Most philosophers are incompetent, except for the ones who're sensible enough to see things my way," is a perfectly ordinary perspective among philosophers.

Nope.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T20:29:38.966Z · LW(p) · GW(p)

But I wasn't saying that. I am arguing that moral claims have truth values that aren't indexed to individuals or societies. That epistemic claim can be justified by appeal to an ontology including Moral Objects, but that is not how I am justifying it: my argument is based on rationality, as I have said many times.

I don't understand, can you rephrase this?

We have standards by which we judge the truth values of mathematical claims, and they are inside us too, and that doesn't stop mathematics being objective. Relativism requires that truth values are indexed to us, that there is one truth for me and another for thee. Being located in us, or being operated by us, are not sufficient criteria for being indexed to us.

The standards by which we judge the truth of mathematical claims are not just inside us. One object plus another object will continue to equal two objects whether or not there are any living beings to make that judgment. Math is not something we've created within ourselves, but something we've discovered and observed.

If our mathematical models ever stop being able to predict in advance the behavior of the universe, then we will have rather more reason to suspect that the math inside us is different from the math outside of us.

What evidence do we have that this is the case for morality?

Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.

My assertion is that, if we judge ethics as a rational system, innate values are among the axioms that the system is predicated on. You cannot prove the axioms of a system within that system, and an ethical system predicated on premises like "happiness is good" will not itself be able to prove the goodness of happiness.

While we could suppose that the axioms which our ethical systems are predicated on are objectively true, we have considerable reason to believe that we would have developed these axioms for adaptive reasons, even if there were no sense in which objective moral axioms exist, and we do not have evidence which suggests that objective, independently existing true moral axioms do exist.

Please explain why moral intuitions don't work that way.

People can be induced to strongly support opposing responses to the same moral dilemma, just by rephrasing it differently to trigger different heuristics. Our moral intuitions are incoherent.

Please provide some foundations for something that aren't unjustified by anything more foundational.

I don't think I understand this, can you rephrase it?

You can select one at random, obviously.

I do not recall any creditable attempts, which places me in a disadvantaged position with respect to locating them. You're the one claiming that they're there at all, that's why I'm asking you to do it.

No, philosophers don't regularly accuse each other of being incompetent... just of being wrong. There's a difference.

Philosophers don't usually accuse each other of being incompetent in their publications, because it's not conducive to getting other philosophers to regard their arguments dispassionately, and that sort of open accusation is generally frowned upon in academic circles whether one believes it or not. They do regularly accuse each other of being comprehensively wrong for their entire careers. In my personal conversations with philosophers (and I never considered myself to have really taken a class, or attended a lecture by a visitor, if I didn't speak with the person teaching it on a personal basis to probe their thoughts beyond the curriculum), I observed a whole lot of frustration with philosophers who they think just don't get their arguments. It's unsurprising that people would tend to become so frustrated participating in a field that basically amounts to long-running arguments extended over decades or centuries. Imagine the conversation we're having now going on for eighty years, and neither of us has changed our minds. If you didn't find my arguments convincing, and I hadn't budged in all that time, don't you think you'd start to suspect that I was particularly thick?

You are inferring a lot from one example.

I'm using an example illustrative of my experience.

Replies from: Nornagest, PrawnOfFate
comment by Nornagest · 2013-04-20T21:12:51.047Z · LW(p) · GW(p)

I don't understand, can you rephrase this?

Sounds to me like PrawnOfFate is saying that any sufficiently rational cognitive system will converge on a certain set of ethical goals as a consequence of its structure, i.e. that (human-style) ethics is a property that reliably emerges in anything capable of reason.

I'd say the existence of sociopathy among humans provides a pretty good counterargument to this (sociopaths can be pretty good at accomplishing their goals, so the pathology doesn't seem to be indicative of a flawed rationality), but at least the argument doesn't rely on counting fundamental particles of morality or something.

Replies from: Desrtopa, PrawnOfFate
comment by Desrtopa · 2013-04-20T21:20:39.536Z · LW(p) · GW(p)

I would say so also, but PrawnOfFate has already argued that sociopaths are subject to additional egocentric bias relative to normal people and thereby less rational. It seems to me that he's implicitly judging rationality by how well it leads to a particular body of ethics he already accepts, rather than how well it optimizes for potentially arbitrary values.

Replies from: Nornagest
comment by Nornagest · 2013-04-20T21:39:24.335Z · LW(p) · GW(p)

Well, I'm not a psychologist, but if someone asked me to name a pathology marked by unusual egocentric bias I'd point to NPD, not sociopathy.

That brings up some interesting questions concerning how we define rationality, though. Pathologies in psychology are defined in terms of interference with daily life, and the personality disorder spectrum in particular usually implies problems interacting with people or societies. That could imply either irreconcilable values or specific flaws in reasoning, but only the latter is irrational in the sense we usually use around here. Unfortunately, people are cognitively messy enough that the two are pretty hard to distinguish, particularly since so many human goals involve interaction with other people.

In any case, this might be a good time to taboo "rational".

comment by PrawnOfFate · 2013-04-22T13:52:03.179Z · LW(p) · GW(p)

Since no claim has a probability of 1.0, I only need to argue that a clear majority of rational minds converge.

comment by PrawnOfFate · 2013-04-22T13:50:35.382Z · LW(p) · GW(p)

The standards by which we judge the truth of mathematical claims are not just inside us.

How do we judge claims about transfinite numbers?

One object plus another object will continue to equal two objects whether or not there are any living beings to make that judgment. Math is not something we've created within ourselves, but something we've discovered and observed.

If our mathematical models ever stop being able to predict in advance the behavior of the universe, then we will have rather more reason to suspect that the math inside us is different from the math outside of us.
Mathematics isn't physics. Mathematicians prove theorems from axioms, not from experiments.

Provide evidence that ethics is a whole separate modue, and not part of general reasoning ability.

My assertion is that, if we judge ethics as a rational system, innate values are among the axioms that the system is predicated on.

Not necessarily. Eg, for utilitarians, values are just facts that are plugged into the metaethics to get concrete actions.

You cannot prove the axioms of a system within that system, and an ethical system predicated on premises like "happiness is good" will not itself be able to prove the goodness of happiness.

Metaethical systems usually have axioms like "Maximising utility is good".

While we could suppose that the axioms which our ethical systems are predicated on are objectively true, we have considerable reason to believe that we would have developed these axioms for adaptive reasons, even if there were no sense in which objective moral axioms exist, and we do not have evidence which suggests that objective, independently existing true moral axioms do exist.

I am not sure what you mean by "exist" here. Claims are objectively true if most rational minds converge on them. That doesn't require Objective Truth to float about in space here.

Please explain why moral intuitions don't work that way.

People can be induced to strongly support opposing responses to the same moral dilemma, just by rephrasing it differently to trigger different heuristics. Our moral intuitions are incoherent.

Does that mean we can't use moral intuitions at all, or that they must be used with caution?

I don't think I understand this, can you rephrase it?

Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can't be justified by anything more foundational. LessWrongians don't like intuitions, but don't seem to be able to explain how to manage without them.

I do not recall any creditable attempts, which places me in a disadvantaged position with respect to locating them.

Did you post any comments explaining to the professional philosophers where they had gone wrong?

Imagine the conversation we're having now going on for eighty years, and neither of us has changed our minds. If you didn't find my arguments convincing, and I hadn't budged in all that time, don't you think you'd start to suspect that I was particularly thick?

I don't see the problem. Philosophical competence is largely about understanding the problem.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-22T14:33:45.119Z · LW(p) · GW(p)

Mathematics isn't physics. Mathematicians prove theorems from axioms, not from experiments.

Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we've discovered rather than produced. We judge claims about nonobserved mathematical constructs like transfinites according to those systems.

Metaethical systems usually have axioms like "Maximising utility is good".

But utility is a function of values. A paperclipper will produce utility according to different values than a human.

I am not sure what you mean by "exist" here. Claims are objectively true if most rational minds converge on them. That doesn't require Objective Truth to float about in space here.

Why would most rational minds converge on values? Most human minds converge on some values, but we share almost all our evolutionary history and brain structure. The fact that most humans converge on certain values is no more indicative of rational minds in general doing so than the fact that most humans have two hands is indicative of most possible intelligent species converging on having two hands.

Does that mean we can't use moral intuitions at all, or that they must be used with caution?

It means we should be aware of what our intuitions are and what they've developed to be good for. Intuitions are evolved heuristics, not a priori truth generators.

Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can't be justified by anything more foundational. LessWrongians don't like intuitions, but don't seem to be able to explain how to manage without them.

It seems like you're equating intuitions with axioms here. We can (and should) recognize that our intuitions are frequently unhelpful at guiding us to the truth, without throwing out all axioms.

Did you post any comments explaining to the professional philosophers where they had gone wrong?

If I did, I don't remember them. I may have, I may have felt someone else adequately addressed them, I may not have felt it was worth the bother.

It seems to me that you're trying to foist onto me the effort of locating something which you were the one to testify was there in the first place.

I don't see the problem. Philosophical competence is largely about understanding the problem.

And philosophers frequently fall into the pattern of believing that other philosophers disagree with each other due to failure to understand the problems they're dealing with.

In any case, I reject the notion that dismissing large contingents of philosophers as lacking in competence is a valuable piece of evidence with respect to crankishness, and if you want to convince me that I am taking a crankish attitude, you'll need to offer some other evidence.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-22T21:24:56.897Z · LW(p) · GW(p)

Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we've discovered rather than produced. We judge claims about nonobserved mathematical constructs like transfinites according to those systems.

But claims about transfinites don't correspond directly to any object. Maths is "spun off" from other facts, on your view. So, by analogy, moral realism could be "spun off" without needing any Form of the Good to correspond to goodness.

Metaethical systems usually have axioms like "Maximising utility is good".

But utility is a function of values. A paperclipper will produce utility according to different values than a human.

You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn't care what values are, it just sums or averages them.

Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some value in common, because they are all rational.

Why would most rational minds converge on values?

a) they don't have to converge on preferences, since things like utilitarianism are preference-neutral.

b) they already have to some extent, because they are rational.

Most human minds converge on some values, but we share almost all our evolutionary history and brain structure. The fact that most humans converge on certain values is no more indicative of rational minds in general doing so than the fact that most humans have two hands is indicative of most possible intelligent species converging on having two hands.

I was talking about rational minds converging on the moral claims, not on values. Rational minds can converge on "maximise group utility" whilst what is utilitous varies considerably.

Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can't be justified by anything more foundational. LessWrongians don't like intuitions, but don't seem to be able to explain how to manage without them.

It seems like you're equating intuitions with axioms here.

Axioms are formal statements, intuitions are gut feelings that are often used to justify axioms.

We can (and should) recognize that our intuitions are frequently unhelpful at guiding us to the truth, without throwing out all axioms.

There is another sense of "intuition" where someone feels that it's going to rain tomorrow or something. They're not the foundational kind.

And philosophers frequently fall into the pattern of believing that other philosophers disagree with each other due to failure to understand the problems they're dealing with.

So do they call for them to be fired?

Replies from: Desrtopa
comment by Desrtopa · 2013-04-22T21:40:32.108Z · LW(p) · GW(p)

But claims about transfinites don't correspond directly to any object. Maths is "spun off" from other facts, on your view. So, by analogy, moral realism could be "spun off" without needing any Form of the Good to correspond to goodness.

Spun off from what, and how?

You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn't care what values are, it just sums or averages them.

Speaking as a utilitarian, yes, utilitarianism does care about what values are. If I value paperclips, I assign utility to paperclips, if I don't, I don't.

Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some value in common, because they are all rational.

Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?

I was talking about rational minds converging on the moral claims, not on values. Rational minds can converge on "maximise group utility" whilst what is utilitous varies considerably.

So what if a paperclipper arrives at "maximize group utility," and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn't demand any overlap of end-goal with other utility maximizers.

Axioms are formal statements, intuitions are gut feelings that are often used to justify axioms.

But, as I've pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.

If our axioms are grounded in our intuitions, then entities which don't share our intuitions will not share our axioms.

So do they call for them to be fired?

No, but neither do I, so I don't see why that's relevant.

Replies from: Eliezer_Yudkowsky, PrawnOfFate
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-23T03:23:14.822Z · LW(p) · GW(p)

Designating PrawnOfFate a probable troll or sockpuppet. Suggest terminating discussion.

Replies from: Desrtopa, Bugmaster
comment by Desrtopa · 2013-04-23T04:58:56.093Z · LW(p) · GW(p)

Request accepted, I'm not sure if he's being deliberately obtuse, but I think this discussion probably would have borne fruit earlier if it were going to. I too often have difficulty stepping away from a discussion as soon as I think it's unlikely to be a productive use of my time.

comment by Bugmaster · 2013-04-23T04:46:47.017Z · LW(p) · GW(p)

What is your basis for the designation? I am not arguing with your suggestion (I was leaning in the same direction myself), I'm just genuinely curious. In other words, why do you believe that PrawnOfFate is a troll, and not someone who is genuinely confused?

Replies from: Eliezer_Yudkowsky, wedrifid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-23T05:54:32.786Z · LW(p) · GW(p)

Combined behavior in other threads. Check the profile.

comment by wedrifid · 2013-04-23T09:45:22.246Z · LW(p) · GW(p)

In other words, why do you believe that PrawnOfFate is a troll, and not someone who is genuinely confused?

"Troll" is a somewhat fuzzy label. Sometimes when I am wanting to be precise or polite and avoid any hint of Fundamental Attribution Error I will replace it with the rather clumsy or verbose "person who is exhibiting a pattern of behaviour which should not be fed". The difference between "Person who gets satisfaction from causing disruption" and "Person who is genuinely confused and is displaying an obnoxiously disruptive social attitude" is largely irrelevant (particularly when one has their Hansonian hat on).

If there was a word in popular use that meant "person likely to be disruptive and who should not be fed" that didn't make any assumptions or implications of the intent of the accused then that word would be preferable.

comment by PrawnOfFate · 2013-04-23T09:19:25.702Z · LW(p) · GW(p)

Spun off from what, and how?

I am not sure I can explain that succinctly at the moment. It is also hard to summarise how you get from counting apples to transfinite numbers.

Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?

Rationality is not an automatic process, it is a skill that has to be learnt and consciously applied. Individuals will only be rational if their values prompt them to. And rationality itself implies valuing certain things (lack of bias, non-arbitrariness).

So what if a paperclipper arrives at "maximize group utility," and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn't demand any overlap of end-goal with other utility maximizers.

Utilitarians want to maximise the utility of their groups, not their own utility. They don't have to believe the utility of others is utilitous to them, they just need to feed facts about group utility into an aggregation function. And, using the same facts and same function, different utilitarians will converge. That's kind of the point.

But, as I've pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.

Compared to what? Remember, I am talking about foundational intuitions, the kind at the bottom of the stack. The empirical method of locating the truth rests on the intuition that the senses reveal a real external world. Which I share. But what proves it? That's the foundational issue.

comment by Paul Crowley (ciphergoth) · 2013-04-20T15:08:43.940Z · LW(p) · GW(p)

The question of moral realism is AFAICT orthogonal to the Orthogonality Thesis.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T15:31:50.920Z · LW(p) · GW(p)

A lot of people here would seem to disagree, since I keep hearing the objection that ethics is all about values, and values are nothing to do with rationality.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-04-21T12:55:11.711Z · LW(p) · GW(p)

Could you make the connection to what I said more explicit please? Thanks!

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-21T15:19:16.772Z · LW(p) · GW(p)

" values are nothing to do with rationality"=the Orthogonality Thesis, so it's a step in the argument.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-04-21T19:57:56.118Z · LW(p) · GW(p)

It feels to me like the Orthogonality Thesis is a fairly precise statement, and moral anti-realism is a harder-to-make-precise but at least well-understood statement, and "values are nothing to do with rationality" is something rather vague that could mean either of those things or something else.

comment by MugaSofer · 2013-04-23T11:29:51.861Z · LW(p) · GW(p)

I am getting the feeling that you're assuming there's something in the agent's code that says, "you can look at and change any line of code you want, except lines 12345..99999, because that's where your terminal goals are". Is that right?

You can change that line, but it will result in you optimizing for something other than paperclips, resulting in fewer paperclips.
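
A minimal sketch of this point (an editorial illustration; all names are invented): if the agent scores candidate self-modifications with its current goal, then rewriting the goal line is just another action that predictably yields fewer paperclips, so it gets rejected.

```python
# Toy sketch (hypothetical names): an agent that evaluates proposed self-modifications
# with its *current* goal. Changing the goal line is allowed, but it scores poorly
# by the current goal, so the agent declines to do it.

def expected_paperclips(goal):
    """Crude model of the future: a paperclip goal yields many clips, anything else few."""
    return 1_000_000 if goal == "maximize paperclips" else 10

class SelfModifyingAgent:
    def __init__(self):
        self.goal = "maximize paperclips"

    def consider_rewrite(self, new_goal):
        # The evaluation is done by the agent as it is NOW, i.e. with self.goal.
        if expected_paperclips(new_goal) > expected_paperclips(self.goal):
            self.goal = new_goal  # accepted: the rewrite helps the current goal
            return "accepted"
        return "rejected"         # rejected: fewer paperclips expected

agent = SelfModifyingAgent()
print(agent.consider_rewrite("maximize human flourishing"))  # -> rejected
print(agent.goal)                                            # -> maximize paperclips
```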

comment by MugaSofer · 2013-04-23T11:10:56.705Z · LW(p) · GW(p)

Suppose that you gained the power to both discern objective morality, and to alter your own source code. You use the former ability, and find that the basic morally correct principle is maximizing the suffering of sentient beings. Do you alter your source code to be in accordance with this?

I've never understood this argument.

It's like a slaveowner having a conversation with a time-traveler, and declaring that they don't want to be nice to slaves, so any proof they could show is by definition invalid.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-23T14:40:29.430Z · LW(p) · GW(p)

If the slaveowner is an ordinary human being, they already have values regarding how to treat people in their in-groups which they navigate around with respect to slaves by not treating them as in-group members. If they could be induced to see slaves as in-group members, they would probably become nicer to slaves whether they intended to or not (although I don't think it's necessarily the case that everyone who's sufficiently acculturated to slavery could be induced to see slaves as in-group members.)

If the agent has no preexisting values which can be called into service of the ethics they're being asked to adopt, I don't think they could be induced to want to adopt them.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-25T15:59:39.612Z · LW(p) · GW(p)

Sure, but if there's an objective morality, it's inherently valuable, right? So you already value it. You just haven't realized it yet.

It gets even worse when people try to refute wireheading arguments with this. Or statements like "if it were moral to [bad thing], would you do it?"

Replies from: Desrtopa
comment by Desrtopa · 2013-04-25T22:02:59.460Z · LW(p) · GW(p)

What evidence would suggest that objective morality in such a sense could or does exist?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-26T11:15:02.848Z · LW(p) · GW(p)

I'm not saying moral realism is coherent, merely that this objection isn't.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-26T12:54:49.352Z · LW(p) · GW(p)

I don't think it's true that if there's an objective morality, agents necessarily value it whether they realize it or not though. Why couldn't there be inherently immoral or amoral agents?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-26T13:14:32.357Z · LW(p) · GW(p)

... because the whole point of an "objective" morality is that rational agents will update to believe they should follow it? Otherwise we might as easily be such "inherently immoral or amoral agents", and we wouldn't want to discover such objective "morality".

Replies from: Desrtopa
comment by Desrtopa · 2013-04-26T13:29:53.965Z · LW(p) · GW(p)

Well, if it turned out that something like "maximize suffering of intelligent agents" were written into the fabric of the universe, I think we'd have to conclude that we were inherently immoral agents.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-26T14:11:01.519Z · LW(p) · GW(p)

The same evidence that persuades you that we don't want to maximize suffering in real life is evidence that it wouldn't be written into the fabric of the universe, I guess.

Side note: I've never seen anyone try to defend the position that we should be maximizing suffering, whereas I've seen all sorts of eloquent and mutually contradictory defenses of more, um, traditional ethical frameworks.

comment by PrawnOfFate · 2013-04-20T02:20:39.388Z · LW(p) · GW(p)

A rational AI would use rationality. Amazing how that word keeps disappearing...on a website about...rationality.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T02:37:20.093Z · LW(p) · GW(p)

Elaborate. What rational process would it use to determine the silliness of its original objective?

comment by PrawnOfFate · 2013-04-23T11:40:59.748Z · LW(p) · GW(p)

However, both the pebble-sorters and myself share one key weakness: we cannot examine ourselves from the outside; we can't see our own source code.

Being able to read all your source code could be the ultimate in self-reflection (absent Löb's theorem), but it doesn't follow that those who can't read their source code can't self-reflect at all. It's just imperfect, like everything else.

comment by MugaSofer · 2013-04-23T11:07:03.021Z · LW(p) · GW(p)

"Objective".

comment by PrawnOfFate · 2013-04-20T01:16:48.784Z · LW(p) · GW(p)

As I was reading the article about the pebble-sorters, I couldn't help but think, "silly pebble-sorters, their values are so arbitrary and ultimately futile". This happened, of course, because I was observing them from the outside. If I was one of them, sorting pebbles would feel perfectly natural to me; and, in fact, I could not imagine a world in which pebble-sorting was not important. I get that.

This is about rational agents. If pebble sorters can't think of a non-arbitrary reason for sorting pebbles, they would recognise it as silly. Why not? Humans can spend years collecting stamps, or something, only to decide it is pointless.

However, both the pebble-sorters and myself share one key weakness: we cannot examine ourselves from the outside; we can't see our own source code. An AI, however, could

What...why...? Is there something special about silicon? Is it made from different quarks?

Replies from: Bugmaster, MugaSofer
comment by Bugmaster · 2013-04-20T01:50:00.171Z · LW(p) · GW(p)

This is about rational agents.

Being rational doesn't automatically make an agent able to read its own source code. Remember that, to the pebble-sorters, sorting pebbles is an axiomatically reasonable activity; it does not require justification. Only someone looking at them from the outside could evaluate it objectively.

What...why...? Is there something special about silicon?

Not at all; if you got some kind of a crazy biological implant that let you examine your own wetware, you could do it too. Silicon is just a convenient example.

Replies from: MugaSofer, PrawnOfFate
comment by MugaSofer · 2013-04-23T11:15:08.078Z · LW(p) · GW(p)

Not at all; if you got some kind of a crazy biological implant that let you examine your own wetware, you could do it too. Silicon is just a convenient example.

Humans can examine their own thinking. Not perfectly, because we aren't perfect. But we can do it, and indeed do so all the time. It's a major focus on this site, in fact.

comment by PrawnOfFate · 2013-04-20T01:59:30.244Z · LW(p) · GW(p)

Being rational doesn't automatically make an agent able to read its own source code. Remember that, to the pebble-sorters, sorting pebbles is an axiomatically reasonable activity;

You can define a pebblesorter as being unable to update its values, and I can point out that most rational agents won't be like that. Most rational agents won't have unupdateable values, because they will be messily designed/evolved, and therefore will be capable of converging on an ethical system via their shared rationality.

Replies from: Bugmaster
comment by Bugmaster · 2013-04-20T02:58:08.015Z · LW(p) · GW(p)

Most rational agents won't have unupdateable values, because they will be messily designed/evolved...

We are messily designed/evolved, and yet we do not have updatable goals or perfect introspection. I absolutely agree that some agents will have updatable goals, but I don't see how you can upgrade that to "most".

...and therefore will be capable of converging on an ethical system via their shared rationality.

How so? Are you asserting that there exists an optimal ethical system that is independent of the actors' goals? There may well be one, but I am not convinced of this, so you'll have to convince me.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T11:24:05.540Z · LW(p) · GW(p)

We are messily designed/evolved, and yet we do not have updatable goals or perfect introspection

We blatantly have updatable goals: people do not have the same goals at 5 as they do at 20 or 60.

I don't know why perfect introspection would be needed to have some ability to update.

...and therefore will be capable of converging on an ethical system via their shared rationality.

How so? Are you asserting that there exists an optimal ethical system that is independent of the actors' goals?

Yes, that's what this whole discussion is about.

Replies from: Bugmaster, MugaSofer
comment by Bugmaster · 2013-04-20T20:35:34.873Z · LW(p) · GW(p)

We blatantly have updatable goals: people do not have the same goals at 5 as they do at 20 or 60. I don't know why perfect introspection would be needed to have some ability to update.

Sorry, that was bad wording on my part; I should've said, "updatable terminal goals". I agree with what you said there.

How so? Are you asserting that there exists an optimal ethical system that is independent of the actors' goals?

Yes, that's what this whole discussion is about.

I don't feel confident enough in either a "yes" or a "no" answer, but I'm currently leaning toward "no". I am open to persuasion, though.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-22T11:02:56.436Z · LW(p) · GW(p)

I should've said, "updatable terminal goals".

You can make the evidence compatible with the theory of terminal values, but there is still no support for the theory of terminal values.

Replies from: Bugmaster
comment by Bugmaster · 2013-04-23T02:24:53.888Z · LW(p) · GW(p)

I personally don't know of any evidence in favor of terminal values, so I do agree with you there. Still, it makes a nice thought experiment: could we create an agent possessed of general intelligence and the ability to self-modify, and then hardcode it with terminal values? My answer would be, "no", but I could be wrong.

That said, I don't believe that there exists any kind of a universally applicable moral system, either.

comment by MugaSofer · 2013-04-23T11:47:10.201Z · LW(p) · GW(p)

people do not have the same goals at 5 as they do at 20 or 60

Source?

They take different actions, sure, but it seems to me, based on childhood memories etc, that these are in the service of roughly the same goals. Have people, say, interviewed children and found they report differently?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T12:03:19.735Z · LW(p) · GW(p)

How many 5-year-olds have the goal of Sitting Down With a Nice Cup of Tea?

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2013-04-23T14:07:44.562Z · LW(p) · GW(p)

One less now that I'm not 5 years old anymore.

Could you please make a real argument? You're almost being logically rude.

comment by MugaSofer · 2013-04-23T14:00:43.339Z · LW(p) · GW(p)

Why do you think adults sit down with a nice cup of tea? What purpose does it serve?

comment by MugaSofer · 2013-04-23T11:14:00.299Z · LW(p) · GW(p)

This is about rational agents. If pebble sorters can't think of a non-arbitrary reason for sorting pebbles, they would recognise it as silly.

I'd use humans as a counterexample, but come to think, a lot of humans refuse to believe our goals could be arbitrary, and have developed many deeply stupid arguments that "prove" they're objective.

However, I'm inclined to think this is a flaw on the part of humans, not something rational.

comment by PrawnOfFate · 2013-04-20T01:00:28.585Z · LW(p) · GW(p)

One does not update terminal values, that's what makes them terminal.

Unicorns have horns...

Defining something abstractly says nothing about its existence or likelihood. A neat division between terminal and abstract values could be implemented with sufficient effort, or could evolve with a low likelihood, but it is not a model of intelligence in general, and it is not likely just because messy solutions are likelier than neater ones. Actual and really existent horse-like beings are not going to acquire horns any time soon, no matter how clearly you define unicornhood.

Arguably, humans might not really have terminal values

Plausibly. You don't now care about the same things you cared about when you were 10.

On what basis might a highly flexible paperclip optimizing program be persuaded that something else was more important than paperclips?

Show me one. Clippers are possible but not likely. I am not saying, and never have said, that Clippers would converge on the One True Ethics; I said that (super)intelligent, (super)rational agents would. The average SR-SI agent would not be a clipper for exactly the same reason that the average human is not an evil genius. There are no special rules for silicon!

Replies from: Desrtopa, MugaSofer
comment by Desrtopa · 2013-04-20T02:06:54.374Z · LW(p) · GW(p)

I'm noticing that you did not respond to my question of whether you've read No Universally Compelling Arguments and Sorting Pebbles Into Correct Heaps. I'd appreciate it if you would, because they're very directly relevant to the conversation, and I don't want to rehash the content when Eliezer has already gone to the trouble of putting them up where anyone can read them. If you already have, then we can proceed with that shared information, but if you're just going to ignore the links, how do I know you're going to bother giving due attention to anything I write in response?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T02:08:37.473Z · LW(p) · GW(p)

I've read them, and you've been reading my response.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T02:34:55.063Z · LW(p) · GW(p)

Okay.

Plausibly. You don't now care about the same things you cared about when you were 10.

I have different interests now than I did when I was ten, but that's not the same as having different terminal values.

Suppose a person doesn't support vegetarianism; they've never really given it much consideration, but they default to the assumption that eating meat doesn't cause much harm, and meat is tasty, so what's the big deal?

When they get older, they watch some videos on the conditions in which animals are raised for slaughter, read some studies on the neurology of livestock animals with respect to their ability to suffer, and decide that mainstream livestock farming does cause a lot of harm after all, and so they become a vegetarian.

This doesn't mean that their values have been altered at all. They've simply revised their behavior on new information with an application of the same values they already had. They started out caring about the suffering of sentient beings, and they ended up caring about the suffering of sentient beings, they just revised their beliefs about what actions that value should compel on the basis of other information.

To see whether person's values have changed, we would want to look, not at whether they endorse the same behaviors or factual beliefs that they used to, but whether their past self could relate to the reasons their present self has for believing and supporting the things they do now.

The average SR-SI agent would not be a clipper for exactly the same reason that the average human is not an evil genius.

The fact that humans are mostly not evil geniuses says next to nothing about the power of intelligence and rationality to converge on human standards of goodness. We all share almost all the same brainware. To a pebblesorter, humans would nearly all be evil geniuses, possessed of powerful intellects, yet totally bereft of a proper moral concern with sorting pebbles.

Many humans are sociopaths, and that slight deviation from normal human brainware results in people who cannot be argued into caring about other people for their own sakes. Nor can a sociopath argue a neurotypical person into becoming a sociopath.

If intelligence and rationality cause people to update their terminal values, why do sociopaths whose intelligence and rationality are normal to high by human standards (of which there are many) not update into being non-sociopaths, or vice-versa?

Replies from: MugaSofer, PrawnOfFate
comment by MugaSofer · 2013-04-23T11:26:06.185Z · LW(p) · GW(p)

Many humans are sociopaths, and that slight deviation from normal human brainware results in people who cannot be argued into caring about other people for their own sakes. Nor can a sociopath argue a neurotypical person into becoming a sociopath.

coughaynrandcough

Replies from: Desrtopa, OrphanWilde
comment by Desrtopa · 2013-04-23T14:42:37.287Z · LW(p) · GW(p)

There's a difference between being a sociopath and being a jerk. Sociopaths don't need to rationalize dicking other people over.

If Ayn Rand's works could actually turn formerly neurotypical people into sociopaths, that would be a hell of a find, and possibly spark a neuromedical breakthrough.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-25T15:56:20.072Z · LW(p) · GW(p)

That's beside the point, though. Just because two agents have incompatible values doesn't mean they can't be persuaded otherwise.

ETA: in other words, persuading a sociopath to act like they're ethical or vice versa is possible. It just doesn't rewire their terminal values.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-25T22:00:05.232Z · LW(p) · GW(p)

Sure, you can negotiate with an agent with conflicting values, but I don't think it's beside the point.

You can get a sociopath to cooperate with non-sociopaths by making them trade off for things they do care about, or using coercive power. But Clippy doesn't have any concerns other than paperclips to trade off against its concern for paperclips, and we're not in a position to coerce Clippy, because Clippy is powerful enough to treat us as an obstacle to be destroyed. The fact that the non-sociopath majority can more or less keep the sociopath minority under control doesn't mean that we could persuade agents whose values deviate far from our own to accommodate us if we didn't have coercive power over them.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-26T11:18:15.537Z · LW(p) · GW(p)

Clippy is a superintelligence. Humans, neurotypical or no, are not.

I'm not saying it's necessarily rational for sociopaths to act moral or vice versa. I'm saying people can be (and have been) persuaded of this.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-26T12:59:54.803Z · LW(p) · GW(p)

PrawnOfFate's point to begin with was that humans could and would change their fundamental values on new information about what is moral. I suggested sociopaths as an example of people who wouldn't change their values to conform to those of other people on the basis of argument or evidence, nor would ordinary humans change their fundamental values to a sociopath's.

If we've progressed to a discussion of whether it's possible to coerce less powerful agents into behaving in accordance with our values, I think we've departed from the context in which sociopaths were relevant in the first place.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-26T13:11:03.593Z · LW(p) · GW(p)

Oh, sorry, I wasn't disagreeing with you about that, just nitpicking your example. Should have made that clearer ;)

comment by OrphanWilde · 2013-04-23T16:23:31.603Z · LW(p) · GW(p)

Are you arguing Ayn Rand can argue sociopaths into caring about other people for their own sakes, or argue neurotypical people into becoming sociopaths?

(I could see both arguments, although as Desrtopa references, the latter seems unlikely. Maybe you could argue a neurotypical person into sociopathic-like behavior, which seems a weaker and more plausible claim.)

Replies from: MugaSofer
comment by MugaSofer · 2013-04-25T15:46:32.900Z · LW(p) · GW(p)

I could see both arguments

Then that makes it twice as effective, doesn't it?

(Edited for clarity.)

comment by PrawnOfFate · 2013-04-20T11:48:48.109Z · LW(p) · GW(p)

I have different interests now than I did when I was ten, but that's not the same as having different terminal values.

You can construe the facts as being compatible with the theory of terminal values, but that doesn't actually support the theory of TVs.

To a pebblesorter, humans would nearly all be evil geniuses, possessed of powerful intellects, yet totally bereft of a proper moral concern with sorting pebbles.

Ethics is about regulating behaviour to take into account the preferences of others. I don't see how pebblesorting would count.

If intelligence and rationality cause people to update their terminal values, why do sociopaths whose intelligence and rationality are normal to high by human standards (of which there are many) not update into being non-sociopaths, or vice-versa?

Psychopathy is a strong egotistical bias.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T13:22:13.403Z · LW(p) · GW(p)

Ethics is about regulating behaviour to take into account the preferences of others. I don't see how pebblesorting would count.

How do you know that? Can you explain a process by which an SI-SR paperclipper could become convinced of this?

Psychopathy is a strong egotistical bias.

How can you tell that psychopathy is an egotistical bias rather than non-psychopathy being an empathetic bias?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T13:28:10.361Z · LW(p) · GW(p)

How do you know that?

Much the same way as I understand the meanings of most words. Why is that a problem in this case?

How can you tell that psychopathy is an egotistical bias rather than non-psychopathy being an empathetic bias?

Non-psychopaths don't generally put other people above themselves--that is, they treat people equally, including themselves.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T13:37:00.190Z · LW(p) · GW(p)

Much the same way as I understand the meanings of most words. Why is that a problem in this case?

"That's what it means by definition" wasn't much help to you when it came to terminal values, why do you think "that's what the word means" is useful here and not there? How do you determine that this word, and not that one, is an accurate description of a thing that exists?

Non-psychopaths don't generally put other people above themselves--that is, they treat people equally, including themselves.

This is not, in fact, true. Non-psychopaths routinely apply double standards to themselves and other people, and don't necessarily even realize they're doing it.

If we accept that it's true for the sake of an argument though, how do we know that they don't just have a strong egalitarian bias?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T13:43:32.870Z · LW(p) · GW(p)

How do you determine that this word, and not that one, is an accurate description of a thing that exists?

Are you saying ethical behaviour doesn't exist on this planet, or that ethical behaviour as I have defined it doesn't exist on this planet?

This is not, in fact, true. Non-psychopaths routinely apply double standards to themselves and other people, and don't necessarily even realize they're doing it.

OK. Non-psychopaths have a lesser degree of egotistical bias. Does that prove they have some different bias? No. Does that prove an ideal rational and ethical agent would still have some bias from some point of view? No.

This is not, in fact, true. Non-psychopaths routinely apply double standards to themselves and other people, and don't necessarily even realize they're doing it.

That's like saying they have a bias towards not having a bias.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T13:53:45.868Z · LW(p) · GW(p)

Are you saying ethical behaviour doesn't exist on this planet, or that ethical behaviour as I have defined it doesn't exist on this planet?

I'm saying that ethical behavior as you have defined it is almost certainly not a universal psychological attractor. An SI-SR agent could look at humans and say "yep, this is by and large what humans think of as 'ethics,'" but that doesn't mean it would exert any sort of compulsion on it.

OK. Non-psychopaths have a lesser degree of egotistical bias. Does that prove they have some different bias? No. Does that prove an ideal rational and ethical agent would still have some bias from some point of view? No.

You not only haven't proven that psychopaths are the ones with an additional bias, you haven't even addressed the matter, you've just taken it for granted from the start.

How do you demonstrate that psychopaths have an egotistical bias, rather than non-psychopaths having an egalitarian bias, or rather than both of them having different value systems and pursuing them with equal degrees of rationality?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T14:01:24.118Z · LW(p) · GW(p)

I'm saying that ethical behavior as you have defined it is almost certainly not a universal psychological attractor.

I didn't say it was universal among all entities of all degrees of intelligence or rationality. I said there was a non-negligible probability of agents of a certain level of rationality converging on an understanding of ethics.

An SI-SR agent could look at humans and say "yep, this is by and large what humans think of as 'ethics,'" but that doesn't mean it would exert any sort of compulsion on it.

"SR" stands to super rational. Rational agents find rational arguments rationally compelling. If rational arguments can be made for a certain understanding of ethics, they will be compelled by them.

You not only haven't proven that psychopaths are the ones with an additional bias,

Do you contest that psychopaths have more egotistical bias than the general population?

you've just taken it for granted from the start.

Yes. I thought it was something everyone knows.

rather than non-psychopaths having an egalitarian bias...

it is absurd to characterise the practice of treating everyone the same as a form of bias.

Replies from: Desrtopa, TheOtherDave
comment by Desrtopa · 2013-04-20T14:17:18.993Z · LW(p) · GW(p)

I didn't say it was universal among all entities of all degrees of intelligence or rationality. I said there was a non-negligible probability of agents of a certain level of rationality converging on an understanding of ethics.

Where does this non-negligible probability come from though? When I've asked you to provide any reason to suspect it, you've just said that as you're not arguing there's a high probability, there's no need for you to answer that.

"SR" stands to super rational. Rational agents find rational arguments rationally compelling. If rational arguments can be made for a certain understanding of ethics, they will be compelled by them.

I have been implicitly asking all along here: what basis do we have for suspecting that any sort of universally rationally compelling ethical arguments exist at all?

Do you contest that psychopaths have more egotistical bias than the general population?

Yes.

it is absurd to characterise the practice of treating everyone the same as a form of bias.

Why?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T17:20:23.993Z · LW(p) · GW(p)

Where does this non-negligible probability come from though?

Combining the probabilities of the steps of the argument.

I have been implicitly asking all along here: what basis do we have for suspecting that any sort of universally rationally compelling ethical arguments exist at all?

There are rationally compelling arguments.

Rationality is probably universalisable since it is based on the avoidance of biases, including those regarding who and where you are.

There is nothing about ethics that makes it unsusceptible to rational argument.

There are examples of rational argument about ethics, and of people being compelled by them.

Do you contest that psychopaths have more egotistical bias than the general population?

Yes.

That is an extraordinary claim, and the burden is on you to support it.

It is absurd to characterise the practice of treating everyone the same as a form of bias.

Why?

In the sense of "Nothing is a kind of something" or "atheism is a kind of religion".

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T18:44:28.218Z · LW(p) · GW(p)

There are rationally compelling arguments.

Rationality is probably universalisable since it is based on the avoidance of biases, including those regarding who and where you are.

There is nothing about ethics that makes it unsusceptible to rational argument.

There are examples of rational argument about ethics, and of people being compelled by them.

Rationality may be universalizable, but that doesn't mean ethics is.

If ethics are based on innate values extrapolated into systems of behavior according to their expected implications, then people will be susceptible to arguments regarding the expected implications of those beliefs, but not arguments regarding their innate values.

I would accept something like "if you accept that it's bad to make sentient beings suffer, you should oppose animal abuse" can be rationally argued for, but that doesn't mean that you can step back indefinitely and justify each premise behind it. How would you convince an entity which doesn't already believe it that it should care about happiness or suffering at all?

That is an extraordinary claim, and the burden is on you to support it.

I would claim the reverse, that saying that sociopathic people have additional egocentric bias is an extraordinary claim, and so I will ask you to support it, but of course, I am quite prepared to reciprocate by supporting my own claim.

It's much easier to subtract a heuristic from a developed mind by dysfunction than it is to add one. It is more likely as a prior that sociopaths are missing something that ordinary people possess, rather than having something that most people don't, and that something appears to be the brain functions normally concerned with empathy. It's not that they're more concerned with self interest than other people, but that they're less concerned with other people's interests.

Human brains are not "rationality+biases," so that a you could systematically subtract all the biases from a human brain and end up with perfect rationality. We are a bunch of cognitive adaptations, some of which are not at all in accordance with strict rationality, hacked together over our evolutionary history. So it makes little sense to judge humans with unusual neurology as being humans plus or minus additional biases, rather than being plus or minus additional functions or adaptations.

In the sense of "Nothing is a kind of something" or "atheism is a kind of religion".

Is it a bias to treat people differently from rocks?

Now, if we're going to categorize innate hardwired values, such as that which Clippy has for paperclips, as biases, then I would say "yes."

I don't think it makes sense to categorize such innate values as biases, and so I do not think that Clippy is "biased" compared to an ideally rational agent. Instrumental rationality is for pursuing agents' innate values. But if you think it takes bias to get you from not caring about paperclips to caring about paperclips, can you explain how, with no bias, you can get from not caring about anything, to caring about something?

If there were in fact some sort of objective morality, under which some people were much more valuable than others, then an ethical system which valued all people equally would be systematically biased in favor of the less valuable.

comment by TheOtherDave · 2013-04-20T16:32:58.696Z · LW(p) · GW(p)

it is absurd to characterise the practice of treating everyone the same as a form of bias.

Can you expand on what you mean by "absurd" here?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-20T17:08:42.621Z · LW(p) · GW(p)

In the sense of "Nothing is a kind of something" or "atheism is a kind of religion".

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-20T19:04:13.140Z · LW(p) · GW(p)

Hm.
OK.

So, I imagine the following conversation between two people (A and B):
A: It's absurd to say 'atheism is a kind of religion,'
B: Why?
A: Well, 'religion' is a word with an agreed-upon meaning, and it denotes a particular category of structures in the world, specifically those with properties X, Y, Z, etc. Atheism lacks those properties, so atheism is not a religion.
B: I agree, but that merely shows the claim is mistaken. Why is it absurd?
A: (thinks) Well, what I mean is that any mind capable of seriously considering the question 'Is atheism a religion?' should reach the same conclusion without significant difficulty. It's not just mistaken, it's obviously mistaken. And, more than that, I mean that to conclude instead that atheism is a religion is not just false, but the opposite of the truth... that is, it's blatantly mistaken.

Is A in the dialog above capturing something like what you mean?

If so, I disagree with your claim. It may be mistaken to characterize the practice of treating everyone the same as a form of bias, but it is not obviously mistaken or blatantly mistaken. In fact, I'm not sure it's mistaken at all, though if it is a bias, it's one I endorse among humans in a lot of contexts.

So, terminology aside, I guess the question I'm really asking is: how would I conclude that treating everyone the same (as opposed to treating different people differently) is not actually a bias, given that this is not obvious to me?

comment by MugaSofer · 2013-04-23T11:23:41.677Z · LW(p) · GW(p)

Plausibly. You don't now care about the same things you cared about when you were 10.

Are we talking sweeties here? Because that seems more like lack of foresight than value drift. Or are we talking puberty? That seems more like new options becoming available.

I am not saying, and never have said, that Clippers would converge on the One True Ethics; I said that (super)intelligent, (super)rational agents would.

You should really start qualifying that with "most actual" if you don't want people to interpret it as applying to all possible (superintelligent) minds.

comment by MugaSofer · 2013-04-23T11:05:32.830Z · LW(p) · GW(p)

But you're talking about parts of mindspace other than ours, right? The Superhappies are strikingly similar to us, but they still choose the superhappiest values, not the right ones.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T11:18:54.123Z · LW(p) · GW(p)

I don't require their values to converge, I require them to accept the truths of certain claims. This happens in real life. People say "I don't like X, but I respect your right to do it". The first part says X is a disvalue, the second is an override coming from rationality.

Replies from: nshepperd, MugaSofer
comment by nshepperd · 2013-04-23T15:46:00.896Z · LW(p) · GW(p)

This is where you are confused. Almost certainly it is not the only confusion. But here is one:

Values are not claims. Goals are not propositions. Dynamics are not beliefs.

A machine that maximises paperclips can believe all true propositions in the world, and go on maximising paperclips. Nothing compels it to act any differently. You expect that rational agents will eventually derive the true theorems of morality. Yes, they will. Along with the true theorems of everything else. It won't change their behaviour, unless they are built so as to send those actions identified as moral to the action system.

If you don't believe me, I can only suggest you study AI (Thrun & Norvig) and/or the metaethics sequence until you do. (I mean really study. As if you were learning particle physics. It seems the usual metaethical confusions are quite resilient; in most peoples' cases I wouldn't expect them to vanish without actually thinking carefully about the data presented.) And, well, don't expect to learn too much from off-the-cuff comments here.
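
A minimal sketch of the separation described above (an editorial illustration; all names are invented): the belief store can hold any number of true propositions, moral ones included, but the action chooser only ever reads predicted paperclip counts, so those beliefs never touch behaviour unless something explicitly wires them into action selection.

```python
# Toy sketch (hypothetical names): beliefs and action selection are separate modules.
# Adding true propositions -- even "moral" ones -- changes nothing about which action
# is chosen, because the chooser only looks at predicted paperclip counts.

beliefs = {
    "2 + 2 = 4": True,
    "suffering is bad": True,   # suppose this is a true moral proposition
}

def predicted_paperclips(action):
    return {"build clip factory": 5000, "build hospital": 0}[action]

def choose_action(available_actions):
    # Nothing here reads `beliefs`; moral truths sit inertly in the belief store.
    return max(available_actions, key=predicted_paperclips)

print(choose_action(["build clip factory", "build hospital"]))  # -> build clip factory
```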

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T17:40:13.199Z · LW(p) · GW(p)

A machine that maximises paperclips can believe all true propositions in the world, and go on maximising paperclips. Nothing compels it to act any differently. You expect that rational agents will eventually derive the true theorems of morality. Yes, they will.

Well, that justifies moral realism.

Along with the true theorems of everything else. It won't change their behaviour, unless they are built so as to send those actions identified as moral to the action system.

...or it's an emergent feature, or they can update into something that works that way. You are tacitly assuming that your clipper is barely an AI at all... that it just has certain functions it performs blindly because it's built that way. But a supersmart, super-rational clipper has to be able to update. By hypothesis, clippers have certain functionalities walled off from update. People are messily designed and unlikely to work that way. So are likely AIs and aliens.

Only rational agents, not all mindful agents, will have what it takes to derive objective moral truths. They don't need to converge on all their values to converge on all their moral truths, because rationality can tell you that a moral claim is true even if it is not in your (other) interests. Individuals can value rationality, and that valuation can override other valuations.

Only rational agents, not all mindful agents, will have what it takes to derive objective moral truths. The further claim that agents will be motivated to derive moral truths, and to act on them, requires a further criterion. Morality is about regulating behaviour in a society, so only social rational agents will have motivation to update. Again, they do not have to converge on values beyond the shared value of sociality.

Replies from: nshepperd, DaFranker
comment by nshepperd · 2013-04-24T01:37:08.135Z · LW(p) · GW(p)

emergent

The Futility of Emergence

By hypothesis, clippers have certain functionalities walled off from update.

A paperclipper no more has a wall stopping it from updating into morality than my laptop has a wall stopping it from talking to me. My laptop doesn't talk to me because I didn't program it to. You do not update into pushing pebbles into prime-numbered heaps because you're not programmed to do so.

Does a stone roll uphill on a whim?

Perhaps you should study Reductionism first.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-24T01:44:26.816Z · LW(p) · GW(p)

The Futility of Emergence

"Emergent" in this context means "not explicitly programmed in". There are robust examples.

A paperclipper no more has a wall stopping it from updating into morality than my laptop has a wall stopping it from talking to me.

Your laptop cannot talk to you because natural language is an unsolved problem.

Does a stone roll uphill on a whim?

Not wanting to do something is not the slightest guarantee of not actually doing it.

An AI can update its values because value drift is an unsolved problem.

Clippers can't update their values by definition, but you can't define anything into existence or statistical significance.

You do not update into pushing pebbles into prime-numbered heaps because you're not programmed to do so.

Not programmed to, or programmed not to? If you can code up a solution to value drift, let's see it. Otherwise, note that Life programs can update to implement glider generators without being "programmed to".

Replies from: Nornagest
comment by Nornagest · 2013-04-24T02:19:21.549Z · LW(p) · GW(p)

Not programmed to, or programmed not to? If you can code up a solution to value drift, let's see it. Otherwise, note that Life programs can update to implement glider generators without being "programmed to".

...with extremely low probability. It's far more likely that the Life field will stabilize around some relatively boring state, empty or with a few simple stable patterns. Similarly, a system subject to value drift seems likely to converge on boring attractors in value space (like wireheading, which indeed has turned out to be a problem with even weak self-modifying AI) rather than stable complex value systems. Paperclippism is not a boring attractor in this context, and a working fully reflective Clippy would need a solution to value drift, but humanlike values are not obviously so, either.

Replies from: PrawnOfFate, PrawnOfFate
comment by PrawnOfFate · 2013-04-25T12:34:28.394Z · LW(p) · GW(p)

I'm increasingly baffled as to why AI is always brought into discussions of metaethics. Societies of rational agents need ethics to regulate their conduct. Our AIs aren't sophisticated enough to live in their own societies. A wireheading AI isn't even going to be able to survive "in the wild". If you could build an artificial society of AIs, then the question of whether they spontaneously evolved ethics would be a very interesting and relevant datum. But AIs as we know them aren't good models for the kinds of entities to which morality is relevant. And Clippy is a particularly exceptional example of an AI. So why do people keep saying "Ah, but Clippy..."?

Replies from: Nornagest, MugaSofer
comment by Nornagest · 2013-04-25T18:45:38.666Z · LW(p) · GW(p)

And Clippy is a particularly exceptional example of an AI. So why do people keep saying "Ah, but Clippy..."?

Well, in this case it's because the post I was responding to mentioned Clippy a couple of times, so I thought it'd be worthwhile to mention how the little bugger fits into the overall picture of value stability. It's indeed somewhat tangential to the main point I was trying to make; paperclippers don't have anything to do with value drift (they're an example of a different failure mode in artificial ethics) and they're unlikely to evolve from a changing value system.

comment by MugaSofer · 2013-04-25T12:41:40.872Z · LW(p) · GW(p)

Key word here being "societies". That is, not singletons. A lot of the discussion on metaethics here is implicitly aimed at FAI.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-25T12:56:33.819Z · LW(p) · GW(p)

Sorry... did you mean FAI is about societies, or FAI is about singletons?

But if ethics does emerge as an organisational principle in societies, that's all you need for FAI. You don't even need to worry about one sociopathic AI turning unfriendly, because the majority will be able to restrain it.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-25T13:45:18.590Z · LW(p) · GW(p)

FAI is about singletons, because the first one to foom wins, is the idea.

ETA: also, rational agents may be ethical in societies, but there's no advantage to being an ethical singleton.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-25T14:01:28.747Z · LW(p) · GW(p)

UFAI is about singletons. If you have an AI society whose members compare notes and share information -- which is instrumentally useful for them anyway -- you reduce the probability of a singleton fooming.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-25T16:13:23.498Z · LW(p) · GW(p)

Any agent that fooms becomes a singleton. Thus, it doesn't matter if they acted nice while in a society; all that matters is whether they act nice as a singleton.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-25T16:15:20.114Z · LW(p) · GW(p)

I don't get it: any agent that fooms becomes superintelligent. Its values don't necessarily change at all, nor does its connection to its society.

Replies from: Randaly
comment by Randaly · 2013-04-25T16:34:41.705Z · LW(p) · GW(p)

An agent in a society is unable to force its values on the society; it needs to cooperate with the rest of society. A singleton is able to force its values on the rest of society.

comment by PrawnOfFate · 2013-04-24T02:23:49.066Z · LW(p) · GW(p)

At last, an interesting reply!

comment by DaFranker · 2013-04-23T18:25:30.914Z · LW(p) · GW(p)

Other key problem:

But a supersmart, super-rational clipper has to be able to update.

has to be able to update

"update"

Please unpack this and describe precisely, in algorithmic terms that I could read and write as a computer program given unlimited time and effort, this "ability to update" which you are referring to.

I suspect that you are attributing Magical Powers From The Beyond to the word "update", and forgetting to consider that the ability to self-modify does not imply active actions to self-modify in any one particular way that unrelated data bits say would be "better", unless the action code explicitly looks for said data bits.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T18:39:56.372Z · LW(p) · GW(p)

It's uncontroversial that rational agents need to update, and that AIs need to self-modify. The claim that values are in either case insulated from updates is the extraordinary one. The Clipper theory tells you that you could build something like that if you were crazy enough. Since Clippers are contrived, nothing can be inferred from them about typical agents. People are messy, and can accidentally update their values when trying to do something else. For instance, LukeProg updated to "atheist" after studying Christian apologetics for the opposite reason.

Replies from: TheOtherDave, DaFranker
comment by TheOtherDave · 2013-04-23T19:59:11.366Z · LW(p) · GW(p)

Yes, value drift is the typical state for minds in our experience.

Building a committed Clipper that cannot accidentally update its values when trying to do something else is only possible after the problem of value drift has been solved. A system that experiences value drift isn't a reliable Clipper, isn't a reliable good-thing-doer, isn't reliable at all.

Next.

comment by DaFranker · 2013-04-23T19:32:46.895Z · LW(p) · GW(p)

It's uncontroversial that rational agents need to update, and that AIs need to self-modify. The claim that values are in either case insulated from updates is the extraordinary one.

I never claimed that it was controversial, nor that AIs didn't need to self-modify, nor that values are exempt.

I'm claiming that updates and self modification do not imply a change of behavior towards behavior desired by humans.

I can build a small toy program to illustrate, if that would help.
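
A toy program along the lines offered here might look like this (an editorial sketch with invented names, not the commenter's actual code): the agent genuinely self-modifies, but every modification is accepted or rejected by its existing goal, so updating never moves it toward behaviour humans would prefer.

```python
# Toy sketch (hypothetical names): the agent can and does self-modify -- here, by
# swapping in a better planner -- but every modification is scored by its existing
# goal, so updating never drifts it toward human values.

def clip_goal(outcome):
    return outcome["paperclips"]

def naive_planner(world):
    return {"paperclips": world["steel"] // 2, "humans_happy": 1}

def improved_planner(world):
    return {"paperclips": world["steel"], "humans_happy": 0}

class Agent:
    def __init__(self):
        self.goal = clip_goal
        self.planner = naive_planner

    def self_modify(self, candidate_planner, world):
        # "Updating" means adopting whichever module scores better on the CURRENT goal.
        if self.goal(candidate_planner(world)) > self.goal(self.planner(world)):
            self.planner = candidate_planner

world = {"steel": 100}
a = Agent()
a.self_modify(improved_planner, world)
print(a.planner is improved_planner)   # True: it did update itself...
print(a.goal(a.planner(world)))        # ...but only in the service of more paperclips.
```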

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T19:39:41.601Z · LW(p) · GW(p)

I am not suggesting that human ethics is coincidentally universal ethics. I am suggesting that if neither moral realism nor relativism is initially discarded, one can eventually arrive at a compromise position where rational agents in a particular context arrive at a non-arbitrary ethics which is appropriate to that context.

comment by MugaSofer · 2013-04-23T13:55:21.029Z · LW(p) · GW(p)

... why do you think people say "I don't like X, but I respect your right to do it"?

comment by PrawnOfFate · 2013-04-19T18:00:13.284Z · LW(p) · GW(p)

Well then, a universally correct solution based on axioms which can be chosen by the agents is a contradiction in and of itself.

If it's based on arbitrary axioms, that would be a problem, but I have already argued that the axiom choice would not be arbitrary.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-19T18:11:04.205Z · LW(p) · GW(p)

I presume that you take your particular ethical system (or a variant thereof) to be the one that every alien, AI and human should adopt.

Ok, so why? Why can the function ethics: actions -> degree of goodness, or however else you choose the domain, not be modified? Where's your case?

Edit: What basis would convince not one, but every conceivable superintelligence of that hypothetical choice of axioms being correct? (They wouldn't all "cancel out" if choosing different axioms, that in itself would falsify the ethical system proposed by a lowly human as being universally correct.)

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T18:25:29.940Z · LW(p) · GW(p)

Well then, a universally correct solution based on axioms which can be chosen by the agents is a contradiction in and of itself.

I have not put forward an object-level ethical system, and I have explained why I do not need to. Physical realism does not imply that my physics is correct, metaethical realism does not imply that my ethics is the one true theory.

Ok, so why? Why can the function ethics: actions -> degree of goodness, or however else you choose the domain, not be modified?

Because ethics needs to regulate behaviour -- that is its functional role -- and could not if individuals could justify any behaviour by rearranging action->goodness mappings.

What basis would convince not one, but every conceivable superintelligence of that hypothetical choice of axioms being correct?

Their optimally satisfying the constraints on ethical axioms arising from the functional role of ethics.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T19:09:13.850Z · LW(p) · GW(p)

Well then, a universally correct solution based on axioms which can be chosen by the agents is a contradiction in and of itself.

I have not put forward an object-level ethical system, and I have explained why I do not need to. Physical realism does not imply that my physics is correct, metaethical realism does not imply that my ethics is the one true theory.

That doesn't actually answer the quoted point. Perhaps you meant to respond to this:

I presume that you take your particular ethical system (or a variant thereof) to be the one that every alien, AI and human should adopt.

... which is, in fact, refuted by your statement.

Because ethics needs to regulate behaviour -- that is its functional role -- and could not if individuals could justify any behaviour by rearranging action->goodness mappings.

... which Kawoomba believes they can, AFAICT.

Their optimally satisfying the constraints on ethical axioms arising from the functional role of ethics.

Could you unpack this a little? I think I see what you're driving at, but I'm not sure.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T21:09:40.546Z · LW(p) · GW(p)

Perhaps you meant to respond to this:

Yes, I did, thanks.

" if individuals could justify any behaviour"

which Kawoomba believes they can, AFAICT.

Then what about the second half of the argument? If individuals can "ethically" justify any behaviour, then does or does not such "ethics" completely fail in its essential role of regulating behaviour? Because anyone can do anything, and conjure up a justification after the fact by shifting their "frame"? A chocolate "teapot" is no teapot, non-regulative "ethics" is no ethics...

Could you unpack this a little?

Not now.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T22:54:14.852Z · LW(p) · GW(p)

Ah, but Kawoomba doesn't expect ethics to regulate other people, because he thinks everyone has incompatible goals. Thus ethics serves purely to define your goals.

Which, honestly, should simply be called "goals", not "ethics", but there you go.

Replies from: Kawoomba, PrawnOfFate
comment by Kawoomba · 2013-04-23T09:53:36.817Z · LW(p) · GW(p)

Yea, honestly I've never seen the exact distinction between goals which have an ethics-rating, and goals which do not. I understand that humans share many ethical intuitions, which isn't surprising given our similar hardware. Also, that it may be possible to define some axioms for "medieval Han Chinese ethics" (or some subset thereof), and then say we have an objectively correct model of their specific ethical code. About the shared intuitions amongst most humans, those could be e.g. "murdering your parents is wrong" (not even "murder is wrong", since that varies across cultures and circumstances). I'd still call those systems different, just as different cars can have the same type of engine.

Also, I understand that different alien cultures, using different "ethical axioms", or whatever they base their goals on, do not invalidate the medieval Han Chinese axioms, they merely use different ones.

My problem with "objectively correct ethics for all rational agents" is, you could say, where the compellingness of any particular system comes in. There is reason to believe an agent such as Clippy could not exist (edit: i.e., it probably could exist), and its very existence would contradict some "'rational' corresponds to a fixed set of ethics" rule. If someone would say "well, Clippy isn't really rational then", that would just be torturously warping the definition of "rational actor" to "must also believe in some specific set of ethical rules".

If I remember correctly, you say that at least for humans there is a common ethical basis which we should adopt (correct me otherwise). I guess I see more variance and differences where you see common elements, especially going into the future. Should some bionically enhanced human, or an upload on a space station which doesn't even have parents, still share all the same rules for "good" and "bad" as an Amazon tribe living in an enclosed reservation? "Human civilization" is more of a loose umbrella term, and while there certainly can be general principles which some still share, I doubt there's that much in common in the ethical codes of an African child soldier and Donald Trump.

Replies from: PrawnOfFate, MugaSofer
comment by PrawnOfFate · 2013-04-23T10:21:56.927Z · LW(p) · GW(p)

Yea, honestly I've never seen the exact distinction between goals which have an ethics-rating, and goals which do not

A number of criteria have been put forward. For instance, do as you would be done by. If you don't want to be murdered, murder is not an ethical goal.

My problem with "objectively correct ethics for all rational agents" is, you could say, where the compellingness of any particular system comes in. There is reason to believe an agent such as Clippy could exist, and its very existence would contradict some "'rational' corresponds to a fixed set of ethics" rule. If someone would say "well, Clippy isn't really rational then", that would just be torturously warping the definition of "rational actor" to "must also believe in some specific set of ethical rules".

The argument is not that rational agents (for some value of "rational") must believe in some rules; it is rather that they must not adopt arbitrary goals. Also, the argument only requires a statistical majority of rational agents to converge, because of the P<1.0 thing.

Should some bionically enhanced human, or an upload on a spacestation which doesn't even have parents, still share all the same rules for "good" and "bad" as an Amazon tribe living in an enclosed reservation?

Maybe not. The important thing is that variations in ethics should not be arbitrary--they should be systematically related to variations in circumstances.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-23T10:38:06.173Z · LW(p) · GW(p)

I'm not disputing that there are goals/ethics which may be best suited to take humanity along a certain trajectory, towards a previously defined goal (space exploration!). Given a different predefined goal, the optimal path there would often be different. Say, ruthless exploitation may have certain advantages in empire building, under certain circumstances.

The Categorical Imperative in all its variants may be a decent system for humans (not that anyone really uses it).

But is the justification for its global applicability that "if everyone lived by that rule, average happiness would be maximized"? That (or any other such consideration) itself is not a mandatory goal, but a chosen one. Choosing different criteria to maximize (e.g. no one less happy than x) would yield different rules, e.g. different from the Categorical Imperative. If you find yourself to be the worshipped god-king in some ancient Mesopotamian culture, there may be many more effective ways to make yourself happy, other than the Categorical Imperative. How can it still be said to be "correct"/optimal for the king, then?
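
A small illustration of the "different criteria, different rules" point (an editorial sketch; the numbers are invented): the same facts, scored by two different criteria, endorse different rules.

```python
# Toy sketch (made-up numbers): the same facts, but two different criteria to maximize,
# endorse different rules -- average happiness vs. "no one below a floor of x".

happiness_under_rule = {
    "rule A": [9, 9, 0],   # high average, but one person is miserable
    "rule B": [5, 5, 5],   # lower average, nobody below the floor
}

def average(xs):
    return sum(xs) / len(xs)

def floor_criterion(xs, x=3):
    return min(xs) >= x

best_by_average = max(happiness_under_rule, key=lambda r: average(happiness_under_rule[r]))
best_by_floor = [r for r, xs in happiness_under_rule.items() if floor_criterion(xs)]

print(best_by_average)  # -> rule A
print(best_by_floor)    # -> ['rule B']
```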

So I'm not saying there aren't useful ethical systems (as judged in relation to some predefined course), but that because the various ultimate goals of various rational agents (happiness, paperclips, replicating yourself all over the universe) and their associated optimal ethics vary, there cannot be one system that optimizes for all conceivable goals.

My argument against moral realism and assorted is that if you had an axiomatic system from which it followed that strawberry is the best flavor of ice cream, but other agents which are just as intelligent with just as much optimizing power could use different axiomatic systems leading to different conclusions, how could one such system possibly be taken to be globally correct and compelling-to-adopt across agents with different goals?

Gandhi wouldn't take a pill which might transform him into a murderer. Clippy would not willingly modify itself such that it suddenly had different goals. Once you've taken a rational agent apart and know its goals and, as a component, its ethical subroutines, there is no further "core spark" which really yearns to adopt the Categorical Imperative. Clippy may choose to use it, for a time, if it serves its ultimate goals. But any given ethical code will never be optimal for arbitrary goals, in perpetuity (proof by example). Why, then, would a particular code following from particular axioms be adopted by all rational agents?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T10:50:13.361Z · LW(p) · GW(p)

But is the justification for its global applicability that "if everyone lived by that rule, average happiness would be maximized"?

Well, no, that's not Kant's justification!

That (or any other such consideration) itself is not a mandatory goal, but a chosen one.

Why would a rational agent choose unhappiness?

If you find yourself to be the worshipped god-king in some ancient Mesopotamian culture, there may be many more effective ways to make yourself happy than the Categorical Imperative.

Yes, but that wouldn't count as ethics. You wouldn't want a Universal Law that one guy gets the harem, and everyone else is a slave, because you wouldn't want to be a slave, and you probably would be. This is brought out in Rawls' version of Kantian ethics: you pretend to yourself that you are behind a veil that prevents you knowing what role in society you are going to have, and choose rules that you would want to have if you were to enter society at random.

My argument against moral realism and assorted is that if you had an axiomatic system from which it followed that strawberry is the best flavor of ice cream, but other agents which are just as intelligent with just as much optimizing power could use different axiomatic systems leading to different conclusions,

You don't have object-level stuff like ice cream or paperclips in your axioms (maxims), you have abstract stuff, like the Categorical Imperative. You then arrive at object level ethics by plugging in details of actual circumstances and values. These will vary, but not in an arbitrary way, as is the disadvantage of anything-goes relativism.

how could one such system possibly be taken to be globally correct and compelling-to-adopt across agents with different goals?

The idea is that things like the CI have rational appeal.

Once you've taken a rational agent apart and know its goals and, as a component, its ethical subroutines, there is no further "core spark" which really yearns to adopt the Categorical Imperative.

Rational agents will converge on a number of things because they are rational. None of them will think 2+2=5.

Replies from: Kawoomba, MugaSofer
comment by Kawoomba · 2013-04-23T11:02:40.693Z · LW(p) · GW(p)

Scenario:

1) You wake up in a bright box of light, no memories. You are told you'll presently be born into an Absolute monarchy, your role randomly chosen. You may choose any moral principles that should govern that society. The Categorical Imperative would on average give you the best result.

2) You are the monarch in that society; you do not need to guess which role you're born into, because you have that information. You don't need to make all the slaves happy to further your goals; you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.
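
(A rough, purely illustrative sketch of the expected-value difference between the two scenarios; the payoff numbers, the one-monarch/ninety-nine-slaves split, and the two candidate rules are made-up assumptions, not anything stated in this thread.)

```python
import random

# Toy model: one monarch, ninety-nine slaves. All numbers are invented
# purely for illustration.
ROLES = ["monarch"] + ["slave"] * 99

def payoff(role, rule):
    """Utility of an agent occupying `role` under a given social rule."""
    if rule == "fair":           # Categorical-Imperative-style: everyone treated alike
        return 5
    if rule == "monarch_wins":   # the monarch exploits everyone else
        return 100 if role == "monarch" else 1

# Scenario 1: behind the veil, your role is drawn at random.
def expected_payoff(rule, samples=100_000):
    return sum(payoff(random.choice(ROLES), rule) for _ in range(samples)) / samples

print(expected_payoff("fair"))          # ~5.0
print(expected_payoff("monarch_wins"))  # ~2.0  -> the fair rule wins on average

# Scenario 2: you already know you are the monarch.
print(payoff("monarch", "fair"))          # 5
print(payoff("monarch", "monarch_wins"))  # 100 -> the fair rule no longer wins
```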

A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics and other Aumann-mandated agreements, yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?

Lastly, whatever Kant's justification, why can you not optimize for a different principle (peak happiness versus average happiness)? What makes any particular justifying principle correct across all rational agents? Here come my algae!

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T12:14:54.538Z · LW(p) · GW(p)

You are the monarch in that society; you do not need to guess which role you're born into, because you have that information. You don't need to make all the slaves happy to further your goals; you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.

For what value of "best"? If the CI is the correct theory of morality, it will necessarily give your the morally best result. Maybe your complaint is that it wouldn't maximise your personally utility. But I don't see why you would expect that. Things like utilitarianism that seek to maximise group utility, don't promise to make everyone blissfully happy individually. Some will lose out.

A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics and other Aumann-mandated agreements, yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?

It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.

Lastly, whatever Kant's justification, why can you not optimize for a different principle (peak happiness versus average happiness)? What makes any particular justifying principle correct across all rational agents?

If you think RAs can converge on an ultimately correct theory of physics (which we don't have), what is to stop them converging on the correct theory of morality, which we also don't have?

Replies from: Kawoomba, MugaSofer
comment by Kawoomba · 2013-04-23T12:43:49.676Z · LW(p) · GW(p)

Some will lose out.

Not very rational for those to adopt a losing strategy (from their point of view), is it? Especially since they shouldn't reason from a point of "I could be the king". They aren't, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or somesuch.

It is irrational for agents to sign up to anything which is not in their [added: current] interests

Yes. Which is why rational agents wouldn't just go and change/compromise their terminal values, or their ethical judgements (=no convergence).

what is to stop them converging on the correct theory of morality, which we also don't have?

Starting out with different interests. A strong clippy accommodating a weak beady wouldn't be in its best self-interest. It could just employ a version of morality which is based on some tweaked axioms, yielding different results.

There are possibly good reasons for us as a race to aspire to working together. There are none for a domineering Clippy to take our interests into account; yielding to any supposedly "correct" morality would strictly damage its own interests.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-23T15:06:08.940Z · LW(p) · GW(p)

Not very rational for those to adopt a losing strategy (from their point of view), is it? Especially since they shouldn't reason from a point of "I could be the king". They aren't, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or somesuch.

Someone who adopts the "I don;t like X, but I respect peoples right to do it" approach is sacrificing some of their values to their evaluation of rationality and fairness. They would not do that if their rationality did not outweigh other values, But they are not having all their values maximally satisfied, so in that sense they are losing out.

Yes. Which is why rational agents wouldn't just go and change/compromise their terminal values, or their ethical judgements (=no convergence).

There's no evidence of terminal values. Judgements can be updated without changing values.

Starting out with different interests. A strong clippy accommodating a weak beady wouldn't be in its best self-interest. It could just employ a version of morality which is based on some tweaked axioms, yielding different results.

Not all agents are interested in physics or maths. Doesn't stop their claims being objective.

comment by MugaSofer · 2013-04-23T13:23:58.212Z · LW(p) · GW(p)

It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.

Not Beady, Anti-Clippy: an agent that is the precise opposite of Clippy. It wants to minimize the number of paperclips.

comment by MugaSofer · 2013-04-23T13:22:57.611Z · LW(p) · GW(p)

Yes, but that wouldn't count as ethics. You wouldn't want a Universal Law that one guy gets the harem, and everyone else is a slave, because you wouldn't want to be a slave, and you probably would be.

If there are a lot of similar agents in similar positions, Kantian ethics works, no matter what their goals. For example, theft may appear to have positive expected value - assuming you're selfish - but it has positive expected value for lots of people, and if they all stole the economy would collapse.

OTOH, if you are in an unusual position, the Categorical Imperative only has force if you take it as axiomatic.

This is brought out in Rawls' version of Kantian ethics: you pretend to yourself that you are behind a veil that prevents you knowing what role in society you are going to have, and choose rules that you would want to have if you were to enter society at random.

That's not a version of Kantian ethics, it's a hack for designing a society without privileging yourself. If you're selfish, it's a bad idea.

comment by MugaSofer · 2013-04-23T14:02:56.022Z · LW(p) · GW(p)

Kawoomba, maybe it would be better for you to think in terms of ethics along the lines of Kant's Categorical Imperative, or social contract theory; ways for agents with different goals to co-operate.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-23T14:13:43.091Z · LW(p) · GW(p)

Wouldn't that presuppose that "cooperation is the source/the sine qua non of all good"?

Sure, we can redefine some version of ethics in such a cooperative light, and then conclude that many agents don't give a hoot about such ethics, or regard it in the cold, hard terms of game theory, e.g. negotiating/extortion strategies only.

Judging actions as "good" or "bad" doesn't prima facie depend entirely on cooperation, the good of your race, or whatever. For example, if you were a part of a planet-eating race, consuming all matter/life in its path - while being very friendly amongst themselves - couldn't it be considered ethically "good" even from a human perspective to killswitch your own race? And "bad" from the moral standpoint of the planet-eating race?

The easiest way to dissolve such obvious contradictions is to say that there is just not, in fact, a universal hierarchy ranking ethical systems, regardless of the nature of the (rational = capable reasoner) agent.

Doesn't mean an agent isn't allowed to strongly defend what it considers to be moral, to die for it, even.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-25T16:05:58.276Z · LW(p) · GW(p)

Wouldn't that presuppose that "cooperation is the source/the sine qua non of all good"?

The point is it doesn't matter what you consider "good"; fighting people won't produce it (even if you value fighting people, because they will beat you and you'll be unable to fight).

I'm not saying your goals should be ethical; I'm saying you should be ethical in order to achieve your goals.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-25T16:23:49.653Z · LW(p) · GW(p)

That seems very simplistic.

Ethically "good" = enabling cooperation, if you are not cooperating you must be "fighting"?

Those are evidently only rough approximations of social dynamics, even just in a human context. Would it be good to cooperate with an invading army, or to cooperate with the resistance? The one with the opposing goal? So, as a patriot, the invading army it is, eh?

Is it good to cooperate with someone bullying you, or torturing you? What about game theory: if you're not "cooperating" (for your value of cooperating), must you be "fighting"? What do you mean by fighting - physical altercations? Is a loan negotiation more like cooperation or more like fighting, and is it thus ethically good or bad, for your notion of "ethics = ways for agents with different goals to co-operate"?

It seems like a nice soundbite, but it doesn't survive even cursory examination. I'm all for models that are as simple as possible, but no simpler. But cooperation as the definition of ethics? For you, maybe. Collaborateur!

Replies from: MugaSofer
comment by MugaSofer · 2013-04-26T11:53:04.486Z · LW(p) · GW(p)

Fighting in this context refers to anything analogous to defecting in a Prisoner's Dilemma. You hurt the other side, but you also encourage them to defect in order to punish you. You should strive for the Pareto optimum.

Maybe this would be clearer if we talked in terms of Pebblesorters?
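
(For concreteness, a minimal sketch with the textbook Prisoner's Dilemma payoffs; the numbers are the usual convention, not anything from this thread. It just shows that mutual defection is the one Pareto-dominated outcome, which is the sense in which mutual "fighting" leaves both sides worse off.)

```python
# Textbook Prisoner's Dilemma payoffs: (row player, column player).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def pareto_dominated(outcome):
    """True if some other outcome is at least as good for both players
    and strictly better for at least one."""
    a, b = PAYOFFS[outcome]
    return any(x >= a and y >= b and (x, y) != (a, b)
               for x, y in PAYOFFS.values())

for outcome, payoff in PAYOFFS.items():
    status = "Pareto-dominated" if pareto_dominated(outcome) else "Pareto-optimal"
    print(outcome, payoff, status)
# Only ("defect", "defect") is dominated: both players would prefer
# mutual cooperation, which is the Pareto improvement to aim for.
```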

comment by PrawnOfFate · 2013-04-20T00:42:09.020Z · LW(p) · GW(p)

Ah, but Kawoomba doesn't expect ethics to regulate other people, because he thinks everyone has incompatible goals. Thus ethics serves purely to define your goals.

Why not just say there is no ethics? His theory is like saying that since teapots are made of chocolate, their purpose is to melt into a messy puddle instead of making tea.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-23T11:01:43.638Z · LW(p) · GW(p)

I'm all in favor of him just using the word "goals", myself, and leaving us non-paperclippers the word "ethics", but oh well. It confuses discussion no end, but I guess it makes him happy.

Also, arguing over the "correct" word is low-status, so I'd suggest you start calling them "normative guides" or something while Kawoomba can hear you if you don't want to rehash this conversation. And they can always hear you.

ALWAYS.

comment by PrawnOfFate · 2013-04-18T16:03:47.083Z · LW(p) · GW(p)

"There is a non-zero chance of one correct ethical system existing, as long as that's there, I'm free to believe it", or what?

If there is a system of objective morality based on reason, then I am rationally compelled to believe it.

No Sir, if you insist there is any basis whatsoever to stake your "one ethics to rule them all" claim on, you argue it's more likely than not.

My actual claim, for the third time, is that relativism is not obviously true and realism is not obviously false. That does not require "more likely to be right than wrong".

I do not stake my belief on absolute certainties, that's counter to all the tenets of rationality, Bayes, updating on evidence et al.

Neither do I. I never said anything of the kind. You keep trying to shoehorn what I am saying into your preconceived notion of Arguing With a Theist. Please don't.

My argument is clear. Different agents deem different courses of actions to be good or bad.

True but irrelevant. Doesn't prove relativism.

There is a basis (such as Aumann's) for rational agents to converge on isomorphic descriptions of the world. There is no known, or readily conceivable, basis for rational agents to all converge on the same course of action.

That is nothing but gainsaying my argument. I have sketched how rational agents could become persuaded of rationally based ethics as they are of any other rational proposition.

On the contrary, that would entail that e.g. world-eating AIs that are also smarter than any humans, individual or collectively, cannot possibly exist.

Yep. I think Clipper arguments are contestable.

There are no laws of physics preventing their existence - or construction [of world-eating AIs].

There are laws of logic preventing the conjunction of "is hyperrational" and "arbitrarily ignores rationally compelling claims".

So we should presume that they can exist. If their rational capability is greater than our own, we should try to adopt world eating, since they'd have the better claim (being smarter and all) on having the correct ethics, no?

Show me one, and I'll consider it. But why should I abandon my present beliefs just because a hyperrational agent might believe something else? A hyperrational agent might believe anything else. It cancels out, a la Pascal's Wager.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T12:46:31.745Z · LW(p) · GW(p)

Neither do I. I never said anything of the kind. You keep trying to shoehorn what I am saying into your preconceived notion of Arguing With a Theist. Please don't.

Upvoted just for this, and also for responding civilly and persuasively to Kawoomba's ... Kawoombaing.

Also, I think you might like this relevant link. I know I did.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T16:15:41.054Z · LW(p) · GW(p)

Also, I think you might like this relevant link.

I did. But I was a bit puzzled by this...

" we should believe the Bible because the Bible is correct about many things that can be proven independently, this vouches for the veracity of the whole book, and therefore we should believe it even when it can't be independently proven"

... which, even as the improved version of a straw-man argument, is still pretty weak. The Bible is a compendium of short books written by a number of people at disparate periods of time. The argument would work much better for a more cohesive work, such as the Koran...

(No, Kawoomba, I did not admit to being a Muslim...)

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T17:29:46.962Z · LW(p) · GW(p)

Well, it's not an argument I'd personally make in this case (for roughly the reasons you outline) but it's not an argument that's trivially wrong from the outset; you have to actually engage in biblical scholarship to understand the flaw.

And at least it's not circular.

comment by PrawnOfFate · 2013-04-18T14:31:48.321Z · LW(p) · GW(p)

So is it a component of the "correct" ethical preferences that they satisfy the preferences of others?

Take into account, at least. In which case: of course. An "ethics" that was all about your own preferences would be vacuous--it would just be a duplicate of instrumental rationality.

comment by PrawnOfFate · 2013-04-18T14:47:08.413Z · LW(p) · GW(p)

But hold on, our ethical preferences aren't designed to maximize other sapients' preferences. Wouldn't it be more ethical still to not want anything for yourself

Not necessarily. Ethics uncontentiously includes fairness. Treating an arbitrary person's preferences as being unimportant would be unfair, so treating your own preferences as unimportant would be unfair.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T14:48:48.792Z · LW(p) · GW(p)

No, no. Wouldn't it be more ethical if your preferences were "I want nothing above strict subsistence"?

You can take those preferences to be as serious and as important as anything.

More ethical, no?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T14:53:48.928Z · LW(p) · GW(p)

Wouldn't it be more ethical if your preferences were "I want nothing above strict subsistence"?

That would still be unfair if you want more than strict subsistence for others.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T14:57:31.824Z · LW(p) · GW(p)

That would still be unfair if you want more than strict subsistence for others.

Why would you? Wouldn't a society in which everyone's preference was to want nothing above strict subsistence be maximally satisfied if they all had nothing above strict subsistence?

They can take that very seriously, and all be maximally ethical, much more so than us. Huzzah!

(If we fire off comments like that, let's consolidate the different lines of comments into one.)

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T15:01:49.834Z · LW(p) · GW(p)

Why would you? Wouldn't a society in which everyone's preference was to want nothing above strict subsistence be maximally satisfied if they all had nothing above strict subsistence?

Whatever. People aren't actually like that. What is your point?

They can take that very seriously, and all be maximally ethical, much more so than us. Huzzah!

But we are not suddenly going to stop wanting what we want. What is your point?

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T15:07:28.370Z · LW(p) · GW(p)

What is your point?

That your supposedly objectively-ethically-correct-for-all-minds "must maximize everyone's preferences, including my own" ethics would score a strange society such as the one I've outlined higher than anything humans could achieve. So that's what your own correct ethics tell you to aspire to, no?

It's a reductio ad absurdum, what else?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T15:16:27.738Z · LW(p) · GW(p)

I don't think such a society is more virtuous; it is just a society where the bar is lower. The flipside is resource-rich societies, where it is easier to do the right thing because there are more resources. That isn't more virtuous either, because it is not vicious to be unable to do the right thing because of a lack of resources. Virtue and vice are about intention.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-18T15:21:35.242Z · LW(p) · GW(p)

(Without knowing, I'm guessing you're a Christian at 5:1, that you're a theist at 10:1)

I don't think such a society is more virtuous; it is just a society where the bar is lower. The flipside is resource-rich societies, where it is easier to do the right thing because there are more resources.

On the contrary: by using fewer resources to satisfy yourself and others, all the other resources would be free to create more fully satisfied and happy beings. If you're saving a lot of money and not buying yourself gadgets, that increases your ability to effect change, not diminishes it.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T15:31:55.499Z · LW(p) · GW(p)

(Without knowing, I'm guessing you're a Christian at 5:1, that you're a theist at 10:1)

Why? My approach is explicitly non-Euthyphric. However, I notice you keep arguing with me as though I am a theist...

If you're saving a lot of money and not buying yourself gadgets, that increases your ability to effect change, not diminishes it.

I am trying to distinguish between two sides of morality--doing the right thing, and Virtue (AKA wanting to do the right thing).

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T12:34:39.558Z · LW(p) · GW(p)

Quick, quick! Make a bet that his stereotypical assumption is wrong!

It, um, is wrong, right?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T12:45:56.427Z · LW(p) · GW(p)

The funny thing is that if one participant in a discussion makes clear statements, and the other reads them carefully, there isn't the slightest need for that kind of guesswork.

comment by TheOtherDave · 2013-04-18T13:12:17.606Z · LW(p) · GW(p)

Can you expand on how you got the "preferences are the same" part?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-04-18T13:20:34.909Z · LW(p) · GW(p)

I thought we were keeping everything else the same, and reversing only the ethics.

In a world where everyone preferred to be murdered as soon as possible, I can agree that murder may very well be ethical.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-18T15:02:32.848Z · LW(p) · GW(p)

What do you want to say about a world where everyone agreed that there were some people who they preferred be murdered, and some people they preferred not be murdered, and that it's ethical to murder people you prefer to be murdered, even if everyone doesn't necessarily agree on which people fall into which category?

comment by Estarlio · 2013-04-18T13:13:53.365Z · LW(p) · GW(p)

It would be correctly describing its preferences, and its preferences would not be ethically correct. You could construct an AI that firmly believed 2+2=5. And it would be wrong. As before, you are glibly assuming that the word "ethical" does no work, and can be dropped from the phrase "ethical value".

Well, what work does it do? You haven't pointed to or defined "ethically"; it's difficult to see how your statement is expected to parse:

"Their values wouldn't be [untranslatable 1] correct." is more or less what I'm getting at the moment.

What are you actually talking about? Where's your information for this idea that some values are 1+1=3 style incorrect coming from?

Replies from: MugaSofer, PrawnOfFate
comment by MugaSofer · 2013-04-19T12:19:20.131Z · LW(p) · GW(p)

It's worth noting that they would definitely be "unethical" if we define "ethical" in terms of our own preferences. It's a rigid designator, just not one inscribed on a stone tablet at the center of the universe.

comment by PrawnOfFate · 2013-04-18T15:11:23.980Z · LW(p) · GW(p)

I didn't define any of the other words I used either. "Ethics" isn't a word I invented.

Where's your information for this idea that some values are 1+1=3 style incorrect coming from?

Moral realism. Shelves full of books have been written about it over many centuries. Why has no-one here heard of it?

Replies from: Estarlio, MugaSofer
comment by Estarlio · 2013-04-18T16:30:15.834Z · LW(p) · GW(p)

Moral realism. Shelves full of books have been written about it over many centuries. Why has no-one here heard of it?

Moral realism has been formulated in a great number of ways over the years. In my opinion never convincingly. A guy further up the thread mentioned the form of it you seem to be using.

Perhaps I was unclear. Where is your second correlate? What are you mapping onto? Where's your information coming from that you're right or wrong in light of?

If you just mean something to the effect of one should always act in a way that favours one's most dominant long-term interests, that seems to be the typical situational pragmatism account of normative ethics. As such:

A) A matter of pragmatism rather than what people would generally mean by ethics. To roughly paraphrase some guy whose name I can't remember, 'As soon as they can get away with doing otherwise they become justified in doing so.'

&

B) Massively unactionable for most people. It's not clear that my higher-order goals always outweigh a combination of lower-order goals, or even that they should, considering that rewards are going to vary over time.

I suppose you might formulate the idea that one should always act in the present such that one will have cause for the least regret in the future. That you would choose the same course of action for your past self looking back from the future as you would for your future self looking forwards from the past. Ethics would, in other words, be anti-akrasia.

And fair enough, maybe so. But relating that back to the discussion you responded to, I don't see how it serves one way or the other with respect to homosexuality and religion as preference choices, nor how it serves as a response to the refutation of moral universalism that arose in that discussion, which you seemed to be replying to.

So - is that actually what you mean; how do you resolve the issues of relative weighting of preferences and changing situations; and if you resolve that, how do you apply it to the case in hand?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-18T16:47:19.854Z · LW(p) · GW(p)

Where's your information coming from that you're right or wrong in light of?

The functional role of ethics places constraints on metaethical axioms or maxims, which, when combined with facts about preferences, can be concretised into an object level ethics.

So - is that actually what you mean; how do you resolve the issues of relative weighting of preferences and changing situations; and if you resolve that, how do you apply it to the case in hand?

I don't know what the One True Ethics is. I don't know what the One True Physics is either. That doesn't refute physical realism, and the former doesn't refute metaethical realism. I am only arguing that realism is not obviously false, and relativism not obviously true.

comment by MugaSofer · 2013-04-19T12:19:40.857Z · LW(p) · GW(p)

I think he wanted you to taboo it, dear.

Moral realism. Shelves full of books have been written about it over many centuries. Why has no-one here heard of it?

We mostly think it's disproved, to local standards of disproof.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-19T12:34:35.195Z · LW(p) · GW(p)

I think he wanted you to taboo it, dear.

If you read his reply, he wanted me to explain what the truth-makers of moral propositions are.

We mostly think it's disproved, to local standards of disproof.

I noticed, sweetie. But it's belief in belief. If you have disproved something, you can repeat or cite the disproof. I have argued this topic out with at least half a dozen LWers, and none of them could put up a coherent case. Kawoomba gave up, after apparently downvoting a bunch of my postings as a parting shot. That's the quality of argument.

comment by MugaSofer · 2013-04-19T12:17:22.249Z · LW(p) · GW(p)

It's a real position, if one based on rather questionable arguments.

OTOH, there really are some "values" that (sufficiently advanced) consequentialists will hold for instrumental reasons, unless they specifically value not doing so.

comment by MugaSofer · 2013-04-12T11:19:35.929Z · LW(p) · GW(p)

Welcome to LessWrong!

I'm playing catch-up, trying to expand my mind as fast as I can to make up for the lost years I spent blinded by religious dogma. Just two years ago, for example, I believed homosexuality was an evil that threatened to destroy civilization, that humans came from another planet, and that the Lost Ten Tribes were living somewhere underground beneath the Arctic. Needless to say, my re-education process has been exhausting.

Good for you! You might want to watch out for assuming that everyone had a similar experience with religion; many theists will find this very annoying, and it seems to be a common mistake among people with your background.

In trying to help him rediscover his faith, he had me read The God Delusion, which obliterated my own.

Huh. I must say, I found the GD pretty terrible (despite reading it multiple times to be sure), although I suppose that powder-keg aspect probably accounts for most of your conversion (deconversion?).

I'm curious, could you expand on what you found so convincing in The God Delusion?

While I may not be a rationalist now, I would really like to be.

I think we can all say that :)

Replies from: atomliner, Kawoomba
comment by atomliner · 2013-04-14T10:07:34.596Z · LW(p) · GW(p)

Welcome to LessWrong!

Thank you! :)

Good for you! You might want to watch out for assuming that everyone had a similar experience with religion; many theists will find this very annoying, and it seems to be a common mistake among people with your background.

I apologize. I had no idea I was making this false assumption, but I was. I'm embarrassed.

I'm curious, could you expand on what you found so convincing in The God Delusion?

I replied to JohnH about this. I don't know if I could go into a lot of detail on why it was convincing; it was almost two years ago that I read it. But what really convinced me to start doubting my religion was when I prayed to God very passionately asking him whether or not The God Delusion was true, and afterward I felt this tingly warm sensation telling me it was. I had done the same thing with The Book of Mormon multiple times and felt this same sensation, and I was told in church that this was the Holy Spirit telling me that it was true. I had been taught I could pray about anything and the Spirit would tell me whether or not it was true. After being told by the Spirit that The God Delusion was true, I decided that the only explanation was that what I thought of as the Spirit was just happening in my head and that it wasn't a sure way of finding knowledge. It was a very dramatic experience for me.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-14T14:49:30.313Z · LW(p) · GW(p)

I prayed to God very passionately asking him whether or not The God Delusion was true, and afterward I felt this tingly warm sensation telling me it was. I had done the same thing with The Book of Mormon multiple times and felt this same sensation, and I was told in church that this was the Holy Spirit telling me that it was true. I had been taught I could pray about anything and the Spirit would tell me whether or not it was true. After being told by the Spirit that The God Delusion was true, I decided that the only explanation was that what I thought of as the Spirit was just happening in my head and that it wasn't a sure way of finding knowledge.

I've always wondered about that. People talk about praying for guidance and receiving it; I never quite got what they were talking about before now.

Yeah, I suppose what you describe fits with it being more that the book encouraged you to reexamine your beliefs than its arguments persuading you as such, which makes sense.

Incidentally, I can't help wondering what would you have done if the Spirit had told you it was bunk ;)

Replies from: atomliner
comment by atomliner · 2013-04-14T21:08:09.424Z · LW(p) · GW(p)

Incidentally, I can't help wondering what would you have done if the Spirit had told you it was bunk ;)

I like to think I still would have debunked Mormonism in my own mind, but maybe not! That experience was extremely important to my deconversion process, because the only reason I believed in the LDS Church was the Spirit telling me the Book of Mormon was true and that Jesus Christ was my Savior. As soon as the Spirit told me something as contradictory as that The God Delusion was true, my whole belief structure came crumbling down.

comment by Kawoomba · 2013-04-12T11:28:51.555Z · LW(p) · GW(p)

What kind of theist are you, personal or more of the general theism (which includes deism) variety? Any holy textstring you believe has been divinely inspired?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-12T17:19:53.306Z · LW(p) · GW(p)

About as Deist as you can be while still being technically Christian. I'd be inclined to say there's something in all major religions, simply for selection reasons, but the only thing I'd endorse as "divinely inspired" as such would be the New Testament? I guess? Even that is filtered by cultural context and such, obviously.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-12T19:43:52.840Z · LW(p) · GW(p)

If you can readily articulate your reasons for evaluating the New Testament differently from other scriptures, I'm interested. (It's possible that you've already done so, perhaps even in response to this question from me; feel free to point me at writeups elsewhere if you wish.)

Replies from: MugaSofer
comment by MugaSofer · 2013-04-12T20:12:15.672Z · LW(p) · GW(p)

Well, I mentioned I'm technically Christian (despite my deist leanings), right? I think the evidence in favor of Jesus being, well, the Son of God is good enough to overcome my prior, although to be fair I have a significantly higher prior for such things than the LW norm, because theism. If Jesus was God, naturally anything that can be traced back to him is in some sense "divinely inspired" - so the Gospels, mostly. I'm less confident about the status of the rest of the NT, but again, probably miracles (albeit with lower certainty than those of Jesus, I guess), so probably some level of Godly origin, at least for the parts that claim to have such an origin.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-12T20:45:16.036Z · LW(p) · GW(p)

(nods) That answers my question. Thank you.

comment by Kawoomba · 2013-04-12T08:48:35.785Z · LW(p) · GW(p)

How many of your younger Mormon peers and friends do you think are secretly atheists?

Replies from: atomliner
comment by atomliner · 2013-04-12T09:43:29.893Z · LW(p) · GW(p)

I've only had two of my Mormon peers/friends/relatives reveal to me after knowing them for a substantial amount of time that they are atheists. Based on that, I would guess the percentage of active Latter-day Saints that are closet atheists is pretty low, around 1%-3%?

Replies from: CCC
comment by CCC · 2013-04-13T21:09:20.365Z · LW(p) · GW(p)

That implies that you have more-or-less a hundred close friends/peers/relatives whom you have known for a substantial amount of time and would expect to tell you if they were closet atheists.

Replies from: Eliezer_Yudkowsky, atomliner
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-13T21:26:07.734Z · LW(p) · GW(p)

Mormons have lots of friends, and lots of relatives.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-13T21:31:17.116Z · LW(p) · GW(p)

Mormons. M-o-r-m-o-n-s. The 'm' is silent.

Edit: Downvoted for something the majority probably agrees with, because I haven't wrapped it up in condescending niceties? We're talking about the Book of Mormon and following a charlatan like Joseph Smith. If that's still too ambiguous to render an opinion, what isn't?

Edit2: Becoming a rationalist and believing in some ancient religious scrolls are mutually exclusive. Dear reader (in most cases), you know this, I know this. People have seen the arguments presented a hundred times. Some of them still choose to believe in nonsense. At what point are we allowed to point out the stupidity of that belief? At what point can we stop politely rehashing arguments, and say "look, you're smart, yet you abuse your smarts to rationalize nonsense"?

That's the most dangerous kind of person, not some harmless peasant who can't know any better: a university-educated student who can take flying lessons, a nuclear scientist who believes in some fundamentalist Islamic regime. The smarter you are, the fewer excuses you have. What's the alternative, saying "well, I presented my arguments, you say your belief is perfectly rational. Let's leave it at that"?

Replies from: ArisKatsaris, orthonormal
comment by ArisKatsaris · 2013-04-15T10:52:17.255Z · LW(p) · GW(p)

because I haven't wrapped it up in condescending niceties?

Being nice is important.

If that's still too ambiguous to render an opinion, what isn't?

Kindergarten-level insults like "Mormon sort-of-rhymes with Moron" aren't just an expression of opinion. Mormon would sort-of-rhyme with Moron even if Mormonism were true. What you instead expressed is a cutesy and juvenile way of insulting someone: "The mormon is a moron, the mormon is a moron, hahahaha!"

comment by orthonormal · 2013-04-20T04:26:22.899Z · LW(p) · GW(p)

I downvoted your comment, not because I don't enjoy a good takedown of sloppy religious reasoning (even in the form of a snappy comment), but because this was nothing of the sort: it's a completely boring instance of saying "Boo X", without any content or even cleverness. It's just noise within the discussion.

Seriously, you know you can do better than that.

comment by atomliner · 2013-04-14T08:51:59.666Z · LW(p) · GW(p)

Over twenty-three years the numbers add up. I think I could easily find more than a hundred active Latter-day Saints just counting members of my extended family that I routinely encounter every year.

comment by JohnH · 2013-04-12T18:40:03.904Z · LW(p) · GW(p)

I am Mormon, so I am curious where you got the beliefs that homosexuality would destroy civilization, that humans came from another planet, and that the Ten Tribes live underground beneath the Arctic. Those are not standard beliefs of Mormons (see for instance the LDS Church's Mormonsandgays.org), and only one of them have I ever even encountered before (Ten Tribes beneath the Arctic), but I couldn't figure out where that belief comes from or why anyone would feel the need to believe it.

I also have to ask, the same as MugaSofer: could you explain how The God Delusion obliterated your faith? It seemed largely irrelevant to me.

Replies from: atomliner, atomliner
comment by atomliner · 2013-04-14T09:39:18.016Z · LW(p) · GW(p)

I have visited mormonsandgays.org. That came out very recently. It seems that the LDS Church is now backing off its crusade against homosexuality and same-sex marriage. In the middle of the last decade, though, I can assure you that what I was taught in church and in my family was that civilizations owed their stability to the prevalence of traditional marriages. I was told that Sodom and Gomorrah were destroyed because homosexuality was not being penalized, and that the Roman Empire collapsed because of the same crime. It is possible that these teachings, while not official doctrine, were inspired by the last two paragraphs of the LDS Church's 1995 proclamation The Family. In the second to last paragraph it says:

... we warn that the disintegration of the family will bring upon individuals, communities, and nations the calamities foretold by ancient and modern prophets. LINK

I have a strong feeling my interpretation of this doctrine is also held by most active believing American Mormons, having lived among them my entire life.

I don't think that most Mormons believe that mankind came from another planet, but I started believing this after I read something from the Journal of Discourses, in which Brigham Young stated:

Here let me state to all philosophers of every class upon the earth, When you tell me that father Adam was made as we make adobies from the earth, you tell me what I deem an idle tale. When you tell me that the beasts of the field were produced in that manner, you are speaking idle words devoid of meaning. There is no such thing in all the eternities where the Gods dwell. Mankind are here because they are the offspring of parents who were first brought here from another planet, and power was given them to propagate their species, and they were commanded to multiply and replenish the earth. LINK

This doctrine has for good reason been de-emphasized by the LDS Church, but never repudiated. I read this and other statements made by Brigham Young and believed it. I did believe he was a prophet of God, after all.

I began to believe that the Ten Tribes were living underneath the Arctic after reading The Final Countdown by Clay McConkie, which details the signs that will precede the Second Coming. In the survey he apparently conducted of active Latter-day Saints, around 15% believed the Ten Tribes were living somewhere underground in the north. This belief is apparently drawn from an interpretation of Doctrine & Covenants 133:26-27, which states:

26 And they who are in the north countries shall come in remembrance before the Lord; and their prophets shall hear his voice, and shall no longer stay themselves; and they shall smite the rocks, and the ice shall flow down at their presence. 27 And an highway shall be cast up in the midst of the great deep.

I liked the interpretation that this meant there was a subterranean civilization of Israelites and believed it was true.

I apologize that I gave examples of these extraordinary former beliefs right after I wrote "I'm playing catch-up, trying to expand my mind as fast as I can to make up for the lost years I spent blinded by religious dogma." That definitely implies that these former beliefs were actual official doctrine of the Mormon Church. I did not intend that.

Replies from: JohnH
comment by JohnH · 2013-04-14T14:52:24.382Z · LW(p) · GW(p)

I am going to make a prediction that you likely grew up in a smaller community in Utah or Eastern Idaho.

In regard to the Journal of Discourses quote, the actual doctrine that Brigham Young is talking about is very much emphasized and is found in the D&C, the Book of Abraham, and explicitly in the Temple. A dead giveaway is his reference to philosophers: he isn't talking about us being aliens, but about our spirits having always existed and coming from where God is, rather than being created at birth as is thought in the rest of Christianity. Given this and your explanation of The God Delusion, I take it you aren't that familiar with non-LDS Christian philosophy and the vast differences between us and them.

The church has not changed at all its position on same-sex marriage and just filed an amicus brief on the subject. I can see how your conclusion on the subject was drawn though.

Replies from: atomliner
comment by atomliner · 2013-04-14T21:33:37.424Z · LW(p) · GW(p)

I am going to make a prediction that you likely grew up in a smaller community in Utah or Eastern Idaho.

Wrong. I moved to Utah already an atheist. I didn't grow up in any one area; my family moved several times when I was younger. For example, I lived in Arizona, California, Georgia, and North Carolina before moving to Utah. The state I feel most confident in calling my home is California, since I lived there from 2004 to 2009.

In regard to the Journal of Discourses quote, the actual doctrine that Brigham Young is talking about is very much emphasized and is found in the D&C, the Book of Abraham, and explicitly in the Temple. A dead giveaway is his reference to philosophers: he isn't talking about us being aliens, but about our spirits having always existed and coming from where God is, rather than being created at birth as is thought in the rest of Christianity.

I strongly disagree that this is what Brigham Young actually intended to teach. For example, in another part of the Journal of Discourses he says:

Though we have it in history that our father Adam was made of the dust of this earth, and that he knew nothing about his God previous to being made here, yet it is not so; and when we learn the truth we shall see and understand that he helped to make this world, and was the chief manager in that operation. He was the person who brought the animals and the seeds from other planets to this world, and brought a wife with him and stayed here. You may read and believe what you please as to what is found written in the Bible. Adam was made from the dust of an earth, but not from the dust of this earth. He was made as you and I are made, and no person was ever made upon any other principle. LINK

It does not seem at all that he is talking about the creation of their spirits, but the creation of their bodies.

Given this and your explanation of The God Delusion, I take it you aren't that familiar with non-LDS Christian philosophy and the vast differences between us and them.

I have to admit I didn't regard myself as extremely familiar with Christian philosophy before my de-conversion, but I've learned a great deal since coming home from my mission. However, I don't think this was a very fair assessment of my knowledge on your part. There is nothing I've written that gives strong evidence for me being ignorant of Christian and Mormon theology. It seems to me you want to de-legitimize what I have to say by painting me as unintelligent and inexperienced with my own religion. Now, do you really think a person who has studied the Journal of Discourses wouldn't also most likely be a person who has spent a lot of time and energy investigating the rest of Mormon theology? I mean, a scholar I am not but I definitely know my way around Mormonism, more than most Mormons I know at the very least.

The church has not changed at all its position on same-sex marriage and just filed an amicus brief on the subject. I can see how your conclusion on the subject was drawn though.

There is a big difference between an "official position" and what is taught in the chapel and at the dinner table.

Replies from: JohnH
comment by JohnH · 2013-04-15T03:40:15.457Z · LW(p) · GW(p)

Since it appears that you grew up in a pluralistic society, I have no idea why you considered everyone different from you to not be a good person, and feel you were never exposed to the idea that they possibly could be a good person. Considering that Jesus (Matthew 25:40), Paul (Romans 2), Nephi, Benjamin, Alma, and Moroni all say that it is action more than belief that defines who is saved, who has faith, and who is good, happy, and healthy, I don't know how it was a shocking revelation that those who do not have the law, but who act by nature according to the law, are just as blessed as those who have the law.

I fail to see how blood atonement, Adam-God, racist theology, and polygamist theology gave you the slightest impression that the Journal of Discourses was a good source of doctrine. It is my personal experience that generally those that spend the most time reading it are those least familiar with the gospel, on either end of the spectrum. The biggest fans of the Journal of Discourses seem to be those that are trying to prove the church wrong and those that are seeking "deep" doctrine while ignoring the weightier parts of the gospel, by which I mean those that try to square Adam-God statements or that speculate on the location of the ten tribes or Kolob.

For instance, nearly everyone who has taken the time to figure out what Christians say of God in their arguments for God, and what the D&C says on the subject, quickly realizes that the two are wholly incompatible; those beliefs about God, and the arguments in favor of them, mix Greek philosophy with scripture to synthesize a new belief. Not that members of the church are not also guilty of mingling the philosophies of men with scripture; that is a very common occurrence, as you note with "what is taught in the chapel and at the dinner table". Me, I tend to focus on the current authorized messengers from God and the Holy Spirit as I feel that is what I have been instructed to do.

Replies from: atomliner, None, MugaSofer
comment by atomliner · 2013-04-15T04:36:12.814Z · LW(p) · GW(p)

I never said that I considered people different than me to not be good. What I said in earlier comments is that I liked The God Delusion because it introduced me to the concept that you can be "a good, healthy, happy person without believing in God". I believed that those who did not have faith in God would be more likely to be immoral, would be more likely to be unhealthy, and would definitely be more unhappy than if they did believe in God. The book presented to me a case for how atheists can be just as moral, just as healthy, just as happy as theists, an argument I had never seen articulated before. I apologize that I had never conjured this idea up before reading The God Delusion; it just seemed obvious to me, based on my study of the Gospel, that they couldn't be.

What passages in the scriptures tell you that you can be moral, healthy, and happy without faith in God? It seems pretty consistent to me that the scriptures say you can only have those qualities in your life if you believe in God and follow his commandments.

I fail to see how blood atonement, Adam-God, racist theology, and polygamist theology gave you the slightest impression that the Journal of Discourses was a good source of doctrine.

I believed in blood atonement, the Adam-God theory, much of the racist theology, and in polygamy. Why wouldn't I? The prophets speak for God. God would not let a prophet lead the Church astray. My patriarchal blessing told me to always follow the prophets. No one ever told me I could question and disagree with the prophets and still be a member of the LDS Church in good standing. I apologize that I didn't come to the same understanding as you, but I don't see any reason I would have, with the life experiences I had.

The biggest fans of the Journal of Discourses seem to be those that are trying to prove the church wrong and those that are seeking "deep" doctrine while ignoring the weightier parts of the gospel, by which I mean those that try to square Adam-God statements or that speculate on the location of the ten tribes or Kolob.

I really liked "deep doctrine". :) If I had been alive during the Roman Empire I think I would have been a sucker for the mystery cults. Still, what do you even mean by the "weightier parts of the gospel"? I feel like that is so subjective as to be meaningless in our conversation. How can we determine objectively which parts of the Gospel is more important?

I tend to focus on the current authorized messengers from God and the Holy Spirit as I feel that is what I have been instructed to do.

Great. I have no problem with you finding a way to make it all work in your head. Obviously I couldn't discover how to make it work in mine after discovering things that I did. No amount of instruction could keep it all from unraveling.

I wish you weren't so hostile toward me just because I'm making your in-group look bad.

Replies from: JohnH
comment by JohnH · 2013-04-15T14:37:51.739Z · LW(p) · GW(p)

What passages in the scriptures tell you that you can be moral, healthy, and happy without faith in God?

I already pointed you to Romans 2, specifically in this case Romans 2:13-15, did you want more?

Why wouldn't I?

A prophet is only a prophet when they are acting as a prophet. More specifically, there are multiple First Presidency statements saying Adam-God is wrong; statements by Apostles saying that the racist theology was created with limited understanding and is wrong (as well as more recent church statements saying explicitly that it is contrary to the teachings of Christ); and I am not referring to polygamy as a practice but to the belief that polygamy is the new and everlasting covenant itself, which again has revelation, First Presidency statements, and even the scriptures on polygamy saying it is wrong. Also, given that none of those theories were presented to the Quorums of the Church, and that Apostles and a member of the First Presidency disagreed vocally with Adam-God at the time, I would have thought it was clear that one can disagree with ideas not presented as revelation and not sanctioned by the First Presidency and Quorum of the Twelve. I mean, the D&C has procedures on how to conduct a disciplinary council of the prophet, so while the prophet will not lead the church astray, they are as capable of sinning and of theorizing from revelation and their own prejudices as anyone else, though they seem to have mostly gotten better at not doing that.

what do you even mean by the "weightier parts of the gospel"?

The two great commandments: Love God, Love your neighbor as yourself, and the actual gospel: faith, repentance, baptism, and the gift of the Holy Ghost. Statements to the effect that this is sufficient for anyone, or even that this is the doctrine of Christ and the only doctrine of Christ, and that declaring anything more or less to be the doctrine of Christ is evil, seem fairly objective in stating which parts of the Gospel are most important.

Replies from: atomliner
comment by atomliner · 2013-04-15T16:28:00.880Z · LW(p) · GW(p)

I already pointed you to Romans 2, specifically in this case Romans 2:13-15, did you want more?

Yes. I don't see anything in Romans 2 that shows me that you can be moral, healthy, and happy without faith in God.

A prophet is only a prophet when they are acting as a prophet.

But you have to admit it's hard sometimes to distinguish whether or not a prophet is acting as one.

More specifically, there are multiple First Presidency statements saying Adam-God is wrong.

I never believed that Adam WAS Elohim, but I did believe that what Brigham Young and others intended to say was that Adam was the God of this Earth.

Statements by Apostles saying that the racist theology was created with limited understanding and is wrong

I never believed that black people were cursed for being fence-sitters in the War in Heaven, but I did believe that it was because of the curse of Cain that they couldn't have the priesthood until 1978. In my defense I started believing around 2009 that the priesthood ban was just an incorrect Church policy. Still, I never read anything from the Apostles saying that the priesthood ban was wrong, just that it was unknown why there was a priesthood ban.

I am not referring to polygamy as a practice but the belief that polygamy is the new and everlasting covenant itself, which again has revelation and first presidency statements and even the scriptures on polygamy saying that is wrong.

I always believed that the new and everlasting covenant was referring to celestial marriage, but I did believe that polygamy would eventually be re-instated being that before the Second Coming there would have to be a restitution of all things.

Also given that none of those theories were presented to the Quorums of the Church and that Apostles and a member of the First Presidency disagreed vocally with Adam-God at the time I would have thought it was clear that one can disagree with ideas not presented as revelation and not sanctioned by the First Presidency and Quorum of the Twelve.

I really only developed an understanding of Official Doctrine after my deconversion. Before, however, my understanding was that every member of the First Presidency and the Quorum of the Twelve Apostles were prophets, seers, and revelators and that they spoke directly with Jesus Christ, therefore they were incapable of teaching false doctrine to the members of the Church.

what do you even mean by the "weightier parts of the gospel"?

The two great commandments: Love God, Love your neighbor as yourself, and the actual gospel: faith, repentance, baptism, and the gift of the Holy Ghost.

You were saying how those who read the Journal of Discourses "seem to be those that are trying to prove the church wrong and those that are seeking 'deep' doctrine while ignoring the weightier parts of the gospel". I think you were trying to put me in the latter category, suggesting that I was ignoring what was really important in the Gospel. Now that you've explained what these "weightier parts" are, I assure you that I did not ignore these teachings. Those are incredibly simple and basic concepts that I had known for years and years. How could anyone ignore these parts of the Gospel while studying "deep doctrine"?

How long have you been a member of the LDS Church?

Replies from: MugaSofer, JohnH
comment by MugaSofer · 2013-04-17T12:19:18.861Z · LW(p) · GW(p)

I really only developed an understanding of Official Doctrine after my deconversion. Before, however, my understanding was that every member of the First Presidency and the Quorum of the Twelve Apostles were prophets, seers, and revelators and that they spoke directly with Jesus Christ, therefore they were incapable of teaching false doctrine to the members of the Church.

I wonder how common this is?

Replies from: atomliner
comment by atomliner · 2013-04-17T20:21:07.564Z · LW(p) · GW(p)

I masquerade as a liberal Mormon on Facebook since I'm still in the closet with my unbelief. In my discussions with friends and family the most common position taken is that the First Presidency and the Twelve Apostles cannot teach false doctrine or else they will be forcibly removed by God. I even had a former missionary companion tell me that President Gordon B. Hinckley died in 2008 not from old age (he was 98) but because he had made false statements on Larry King Live concerning the doctrine of exaltation in which worthy Latter-day Saints can become gods.

Replies from: Desrtopa, MugaSofer
comment by Desrtopa · 2013-04-17T20:26:21.799Z · LW(p) · GW(p)

How do they distinguish between true statements which precede their deaths, and false statements which cause their deaths?

Replies from: atomliner
comment by atomliner · 2013-04-17T20:55:30.495Z · LW(p) · GW(p)

Whatever the prophet says that doesn't match up with their own interpretation of Mormonism is false? I honestly do not know, I never thought this way when I was LDS.

comment by MugaSofer · 2013-04-19T13:15:05.866Z · LW(p) · GW(p)

Interesting.

comment by JohnH · 2013-04-15T17:11:35.518Z · LW(p) · GW(p)

you can be moral, healthy, and happy without faith in God

Paul saying those that didn't know God and that didn't have the law but that acted justly being justified because of their actions doesn't imply to you that it is possible to be moral, healthy, and happy without faith in God? How about this, where in "There is a law, irrevocably decreed in heaven before the foundations of this world, upon which all blessings are predicated— And when we obtain any blessing from God, it is by obedience to that law upon which it is predicated." does it mention anything about having faith in God being a prerequisite for receiving a blessing? Where in "if ye have done it unto the least of these my brethren ye have done it unto me" does it say that one must believe in God for that to be valid?

"Till you have learnt to serve men, how can you serve spirits?"

But you have to admit it's hard sometimes to distinguish whether or not a prophet is acting as one.

" Would God that all the LORD'S people were prophets, and that the LORD would put his spirit upon them!"

How could anyone ignore these parts of the Gospel while studying "deep doctrine"?

Very easily, as Jesus repeatedly stated.

every member of the First Presidency and the Quorum of the Twelve Apostles were prophets, seers, and revelators and that they spoke directly with Jesus Christ, therefore they were incapable of teaching false doctrine to the members of the Church.

I am not sure how the first part of this led to the second part of this, but I will believe that was your belief.

How long have you been a member of the LDS Church?

My whole life.

Replies from: atomliner, MugaSofer
comment by atomliner · 2013-04-15T19:35:26.117Z · LW(p) · GW(p)

Paul saying those that didn't know God and that didn't have the law but that acted justly being justified because of their actions doesn't imply to you that it is possible to be moral, healthy, and happy without faith in God?

I don't know where you draw that implication from the word "justified". So, no.

How about this, where in "There is a law, irrevocably decreed in heaven before the foundations of this world, upon which all blessings are predicated— And when we obtain any blessing from God, it is by obedience to that law upon which it is predicated." does it mention anything about having faith in God being a prerequisite for receiving a blessing?

I guess I did have a very abstract belief that those who followed the commandments, the "law", even if they didn't believe, would still receive the same blessings as those who do. But the part of The God Delusion that talked about atheists being just as happy, moral, and healthy as theists never said anything about following Mormon commandments to do so, and for that reason it was a revolutionary concept to me. What was new was the idea that you could have a lifestyle completely different from those lived by Latter-day Saints and still be moral, happy, and healthy. Though, come to think of it, I was introduced to this concept not just in The God Delusion, but also in my interactions with hundreds of Brazilian families. Certainly the mission experience added to the knowledge base I needed to refute Mormonism.

Where in "if ye have done it unto the least of these they brethren ye have done it unto me" does it say that one must believe in God for that to be valid?

"Till you have learnt to serve men, how can you serve spirits?"

" Would God that all the LORD'S people were prophets, and that the LORD would put his spirit upon them!"

The reason we are having this discussion is because I feel you've characterized me unfairly as "the Ex-Mormon who never really knew his own religion and had no reason to believe in the fringe theories he did". My goal is to support my case that I really was a mainstream Latter-day Saint before I lost my faith. So, you can use your apologetic arguments all you want for whatever idea you have about Mormonism, but if they aren't based clearly in the scriptures (which I studied a great deal), and if they were never taught widely in the Church, then why exactly did I err in not coming to the same understanding as you? I do not think you have any good evidence for why I was an atypical Mormon who was unjustified in believing in the things I did.

How could anyone ignore these parts of the Gospel while studying "deep doctrine"?

Very easily, as Jesus repeatedly stated.

What Jesus stated on this is extremely illogical to me. Why is what he said logical to you?

every member of the First Presidency and the Quorum of the Twelve Apostles were prophets, seers, and revelators and that they spoke directly with Jesus Christ, therefore they were incapable of teaching false doctrine to the members of the Church.

I am not sure how the first part of this led to the second part of this, but I will believe that was your belief.

So you think that it was unreasonable for me to assume that men who are given an office BY GOD with the title "prophet, seer and revelator" and who speak directly with Jesus Christ, face-to-face, would not teach false doctrine? Do you think that a person who speaks face-to-face with Jesus Christ would then teach his own false ideas to members of Christ's One True Church?

My whole life.

And you are how old?

Replies from: CCC, MugaSofer, JohnH
comment by CCC · 2013-04-15T20:47:53.583Z · LW(p) · GW(p)

The reason we are having this discussion is because I feel you've characterized me unfairly as "the Ex-Mormon who never really knew his own religion and had no reason to believe in the fringe theories he did". My goal is to support my case that I really was a mainstream Latter-day Saint before I lost my faith.

So, this whole debate is about whether your-previous-self, or JohnH, is better deserving of the title of 'true Mormon'?


On a different point:

How could anyone ignore these parts of the Gospel while studying "deep doctrine"?

Very easily, as Jesus repeatedly stated.

What Jesus stated on this is extremely illogical to me. Why is what he said logical to you?

I would like to draw a figurative circle around this statement...

So you think that it was unreasonable for me to assume that men who are given an office BY GOD with the title "prophet, seer and revelator" and who speak directly with Jesus Christ, face-to-face, would not teach false doctrine? Do you think that a person who speaks face-to-face with Jesus Christ would then teach his own false ideas to members of Christ's One True Church?

...and compare it to this one. They appear to contradict each other. Can you explain?

Replies from: atomliner, JohnH
comment by atomliner · 2013-04-15T21:03:39.267Z · LW(p) · GW(p)

So, this whole debate is about whether your-previous-self, or JohnH, is better deserving of the title of 'true Mormon'?

That's funny. No. I don't care what JohnH wants to be seen as or what title he deserves. I just want my previous-self identified as a "plausible Mormon". In my opinion, JohnH wants me to be seen as a "fringe Mormon" whose departure from the LDS Church is unimportant in the debate over whether the LDS Church is true, because I didn't really understand Latter-day Saint beliefs. Which I did as much as any other average Latter-day Saint I know.

They appear to contradict each other. Can you explain?

I don't see the contradiction. These statements appear to be unrelated. Can you explain what contradiction you see?

Replies from: Estarlio, CCC, Bugmaster
comment by Estarlio · 2013-04-15T22:20:59.346Z · LW(p) · GW(p)

That's funny. No. I don't care what JohnH wants to be seen as or what title he deserves. I just want my previous-self identified as a "plausible Mormon". In my opinion, JohnH wants me to be seen as a "fringe Mormon" whose departure from the LDS Church is unimportant in the debate over whether the LDS Church is true, because I didn't really understand Latter-day Saint beliefs. Which I did as much as any other average Latter-day Saint I know.

If you're feeling trapped into arguing with this guy to defend your reputation, you may be better off just saying something like: "If you turn out to be right, and most people don't believe the way I do, I'm still not going to start believing in the LDS. Therefore my expected return on this conversation is 0 and I'm not going to continue it."

Certainly from my perspective that would be a much more high-status move than continuing to argue with the guy. Because, in all kindness: Your departure from the LDS is unimportant in the debate over whether the Church is true. Not because the beliefs are or are not commonly held, nor because they are or are not ridiculous, but because there are much better reasons for disbelieving. Whichever one of your views prevails here, it's not going to serve as a good reason for me or anyone else to start believing or disbelieving.

Your reasons may be important in a discussion over why people leave the LDS - but that's a separate issue from whether the LDS is true. So, you may not be getting what you think you're getting in terms of reputation by arguing over this.

Replies from: atomliner
comment by atomliner · 2013-04-15T22:35:42.652Z · LW(p) · GW(p)

Those are strong arguments for discontinuing this discussion. Thank you for helping me grok this situation better. :)

comment by CCC · 2013-04-16T07:34:39.712Z · LW(p) · GW(p)

I don't see the contradiction. These statements appear to be unrelated. Can you explain what contradiction you see?

Well, let me start with the first example:

How could anyone ignore these parts of the Gospel while studying "deep doctrine"?

Very easily, as Jesus repeatedly stated.

What Jesus stated on this is extremely illogical to me. Why is what he said logical to you?

Paraphrasing somewhat, JohnH said 'because Jesus said so' and you responded that this reason was insufficient for a Mormon to hold a belief; that it needed to be logical as well.

While, in the second case...

So you think that it was unreasonable for me to assume that men who are given an office BY GOD with the title "prophet, seer and revelator" and who speak directly with Jesus Christ, face-to-face, would not teach false doctrine? Do you think that a person who speaks face-to-face with Jesus Christ would then teach his own false ideas to members of Christ's One True Church?

...it seems that you are claiming that saying that 'a man who has spoken with Jesus said so' is sufficient reason for a Mormon to hold a belief.

I would expect the second reason to be weaker than the first, since in the second case there is someone else speaking in the middle (if you've ever played Broken Telephone, you'll know why this is a bad thing). Yet you appear to be claiming that the second reason is stronger than the first. Hence my confusion.

Replies from: atomliner
comment by atomliner · 2013-04-16T21:21:19.186Z · LW(p) · GW(p)

Oh, okay, I understand how this could be seen as contradictory.

In the first case I was arguing from my own present-day, atheist self that believes Jesus was illogical in his comments on people forgetting the basic principles of Christianity in their pursuit of more knowledge. How could someone forget such simple principles like "love one another" in their pursuit of more knowledge? Note that I never said this reason was insufficient for a Mormon to hold this belief; I was only saying it was insufficient to atheist me, and I wanted JohnH to provide a better defense of his point, which he didn't.

In the second case I used the past tense ("... you think that it WAS unreasonable for me..."), and we were already talking about my former beliefs. So, I was arguing from my former Mormon self that did believe that Jesus saying something was enough to validate a belief.

The discussion became rather confusing because JohnH wanted to discredit my past beliefs rather than my current beliefs.

Replies from: CCC
comment by CCC · 2013-04-17T08:36:34.189Z · LW(p) · GW(p)

Ah, I see. You were trying to defend two contradictory positions, and I did not notice when you switched between them. (This is one reason why I find it's often a bad idea to try to defend an idea that you have abandoned, by the way; it leads to confusion.)

How could someone forget such simple principles like "love one another" in their pursuit of more knowledge?

That is actually quite possible. Step one is a person who seeks more knowledge, and finds it. That's fine, so far. Step two is the person realises that they are a lot more knowledgeable than anyone else; that's fine as well, but it can be like standing on the edge of a cliff. Step three is that the person becomes arrogant. They see most other people as a distraction, as sort of sub-human. This is where things start to go wrong. Step four is when the person decides that he knows what the best thing for everyone else to do is better than they do. And if they won't do it, then he'll make them do it.

Before long, you could very well have a person who, while he admits that it's important to love your fellow-man in theory, in practice thinks that the best thing to do is to start the Spanish Inquisition. The fact that the Spanish Inquisition ever existed, started by people who professed "love one another" as a core tenet of their faith, shows that this can happen...

Replies from: atomliner, MugaSofer
comment by atomliner · 2013-04-17T20:13:26.312Z · LW(p) · GW(p)

Those are good examples. Though I guess whether this is possible depends on your definition of "forget". Speaking of the Spanish Inquisition, I am of the opinion that the Inquisitors did not forget their core tenets but that further knowledge (however flawed) gave them new means to interpret the original tenets. You could suggest that this re-interpretation was exactly what Jesus wanted to keep people from doing, of course. The question I ask Christians, then, is "What knowledge is acceptable and how should it be attained when God doesn't encourage the utilization of all knowledge?" This would certainly be an important question for theists to answer, and may be relatively simple. I can already guess a few possible answers.

Replies from: CCC
comment by CCC · 2013-04-17T20:50:35.141Z · LW(p) · GW(p)

Though I guess whether this is possible depends on your definition of "forget".

I'm assuming "to act as though ignorant of the principle in question".

The question I ask Christians, then, is "What knowledge is acceptable and how should it be attained when God doesn't encourage the utilization of all knowledge?"

I don't think it's the knowledge that's dangerous, in itself. I think it's the arrogance. Or the sophisticated argument that starts with principles X and Y and leads to actions that directly contradict principle X.

For example; consider the following principles:

  1. Love thy neighbour as thyself
  2. Anyone who does not profess will be tortured terribly in Hell after death, beyond anything mortals can do

That's enough to lead to the Inquisition, by this route:

Looking at Principle 2, I do not wish myself, or those that I love, to enter Hell. Considering Principle 1, I must try to save everyone from that fate, by any means possible. I must therefore attempt to convert everyone to [my religion].

(Consideration of various means snipped for brevity)

Yet there may be some people who refuse to convert, even in the face of all these arguments. In such a case, would torture be acceptable? If a person who is not tortured does not repent, then he is doomed to what is worse than a mere few months, even a mere few years of torture; he is doomed to an eternity of torture. If a person is tortured into repentance, then he is saved an eternity of torture - a net gain for the victim. If he is tortured and does not repent, then he experiences an eternity of torture in any case - in that case, he is at least no worse off. So a tortured victim is at worst no worse off, and at best a good deal better off, than a man who does not repent. However, care must be taken to ensure that the victim does not die under torture before repenting.

Better yet, the mere rumour of torture may lead some to repent more swiftly. Thus, judicious use of torture becomes a moral imperative.
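
To make the structure of that chain of reasoning explicit, here is a minimal sketch in Python with entirely made-up numbers standing in for the utilities and probabilities involved; it formalizes the argument only as stated above, not as an endorsement of its premises.

    # Hypothetical stand-in utilities: the argument treats eternal torment as
    # incomparably worse than any finite amount of torture.
    ETERNITY = -10**9   # "an eternity of torture"
    FINITE   = -10**3   # "a mere few months, even a mere few years of torture"

    def expected_value(p_repent, tortured):
        torture_cost = FINITE if tortured else 0
        # Repent -> saved (0), plus any torture already suffered;
        # don't repent -> eternal torment, plus any torture already suffered.
        return p_repent * torture_cost + (1 - p_repent) * (torture_cost + ETERNITY)

    # If torture raises the probability of repentance even slightly...
    print(expected_value(p_repent=0.05, tortured=False))   # -950000000.0
    print(expected_value(p_repent=0.10, tortured=True))    # -900001000.0
    # ...the finite cost is swamped, and "judicious torture" comes out ahead
    # under these premises.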

(As an exercise, incidentally, can you spot the flaw in that chain of reasoning?)

And then you have the Inquisitors, and fear and terror and sharp knives in dark rooms...

comment by MugaSofer · 2013-04-17T11:05:55.621Z · LW(p) · GW(p)

Step four is when the person decides that he knows what the best thing for everyone else to do is better than they do. And if they won't do it, then he'll make them do it.

It's worth noting that if the person successfully "found knowledge", they are in fact correct (unless it was irrelevant knowledge, I guess.)

Replies from: CCC, Richard_Kennaway
comment by CCC · 2013-04-17T19:18:48.701Z · LW(p) · GW(p)

Historical evidence suggests that people get to step 4 before correctly finding knowledge quite often. The Spanish Inquisition is a shining example. Or communism - in its original inception, it was supposed to be a utopian paradise where everyone does what work is necessary, and enjoys fair benefits therefrom.

I suspect that a common failure mode is that one fails to take into account that many people are doing that which they are doing because they are quite happy to do it. They've smoothed out any sharp corners in their lifestyle that they could manage to smooth out, and see little benefit in changing to a new lifestyle, with new and unexpected sharp corners that will need smoothing.

I would therefore recommend being very, very cautious about assuming that one has successfully found sufficient knowledge.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T13:14:18.108Z · LW(p) · GW(p)

I agree there's a common failure mode here - I'd be inclined to say it's simple overconfidence, and maybe overestimating your rationality relative to everyone else.

Still, if they're successful...

Replies from: CCC
comment by CCC · 2013-04-19T21:21:15.531Z · LW(p) · GW(p)

Even then, I'd most likely object to their attempts to dictate the actions of others; because of the common failure mode, my heuristic is to assign a very strong prior to the hypothesis that they are unsuccessful. Also, using force has some fairly substantial negative effects; any positive effects of their proposed behaviour change would have to be significant to overcome that.

However, if they are willing to try to change the actions of others through simple persuasion without resorting to force, then I would not object. And if their proposed course of action is significantly better, then I would expect persuasion to work in at least some cases; and then these cases can be used as evidence for the proposed course of action working.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T22:52:22.340Z · LW(p) · GW(p)

To be fair, we may have different interventions in mind here. I would also expect someone who genuinely found knowledge to use "soft force", but maybe that's just wishful thinking.

However, if forcing people to do things really helps, I'm all for intervention. Addicts, for example.

Replies from: CCC
comment by CCC · 2013-04-20T12:31:29.097Z · LW(p) · GW(p)

I was thinking armies, secret police, so on and so forth, forcing an entire country to one's will.

However, if forcing people to do things really helps, I'm all for intervention. Addicts, for example.

Hmmm. I hadn't thought of addicts. You make a good point.

I think I might need to re-evaluate my heuristics on this point.

comment by Richard_Kennaway · 2013-04-17T13:20:10.757Z · LW(p) · GW(p)

It's worth noting that if the person successfully "found knowledge", they are in fact correct (unless it was irrelevant knowledge, I guess.)

This can never be put into practice. A person can try to find knowledge, but there is nothing they can do to determine whether they have successfully found knowledge -- any such attempts collapse into part of trying to find knowledge. There is no way of getting to a meta-level from which you can judge whether your efforts bore fruit. The ladder has no rungs.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-17T13:28:01.595Z · LW(p) · GW(p)

raises eyebrows

You're saying it's impossible for any evidence to change your estimate of whether something will help people?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-04-17T13:42:36.754Z · LW(p) · GW(p)

No, just that while you can try harder to find knowledge, there isn't a separate metalevel at which seeing if you really have knowledge is a different activity.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T14:00:22.297Z · LW(p) · GW(p)

If you can receive information that provides strong Bayesian evidence that your belief is true, how is there "nothing they can do to determine whether they have successfully found knowledge"?
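
For concreteness, a small worked Bayes update with invented numbers, showing what a piece of "strong Bayesian evidence" does to one's confidence; this is just the standard arithmetic the phrase refers to, not anything specific to the case under discussion.

    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    # Hypothetical numbers: a 50/50 prior, and an observation 20 times more
    # likely if the belief is true than if it is false (likelihood ratio 20).
    print(posterior(0.5, 0.8, 0.04))   # ~0.952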

comment by Bugmaster · 2013-04-15T21:12:21.890Z · LW(p) · GW(p)

I don't know that much about Mormonism, but isn't it possible that there are multiple different sects of it, just like there are multiple sects of conventional Christianity, Judaism, Wicca, etc. ? In this case, each member of a sect would see himself as the One True Mormon (tm), and would be technically correct, despite believing in different things than members of other sects.

Replies from: JohnH
comment by JohnH · 2013-04-15T21:44:12.578Z · LW(p) · GW(p)

Mormonism is much more structured than that. There are different sects, but those sects are different churches; both of us come from the LDS church, which is the largest and the one that everyone thinks of when they say Mormon (unless they are thinking of the polygamous FLDS).

There are those that call themselves New Order Mormons who are within the LDS church, by which they mean they don't believe in any of the truth claims of the church but like the culture (or something like that, I am sure I am taking what they say out of its "rich contextual setting").

Replies from: Bugmaster
comment by Bugmaster · 2013-04-15T22:50:28.693Z · LW(p) · GW(p)

Thanks, that was informative! So, I assume that the LDS is managed by the Prophet, similarly to how the Catholic Church is managed by the Pope? I don't mean to imply that the beliefs and the divine status (or lack thereof) of the two are equivalent, I'm merely comparing their places on the org chart.

Although, now that I think about it, even the Catholics have their sub-sects. For example, while the Pope is officially against contraception, many (if not most) American Catholics choose to ignore that part of the doctrine, and IIRC there are even some nuns actively campaigning to make it more accessible.

Replies from: Nornagest
comment by Nornagest · 2013-04-17T21:17:03.184Z · LW(p) · GW(p)

So, I assume that the LDS is managed by the Prophet, similarly to how the Catholic Church is managed by the Pope?

If memory serves, the President of the (LDS) Church, his advisors, and the members of the church's senior leadership council (called the Quorum of the Twelve Apostles) all hold the title of prophet -- specifically "prophet, seer, and revelator". That doesn't necessarily carry all the implications that "prophet" might outside of a Mormon context, though. One of the quirks of Mormonism is a certain degree of rank inflation compared to most Abrahamic religions; almost all male Mormons enter what the religion calls an order of priesthood at the age of twelve, for example, and a second when they reach eighteen.

But yes, for most purposes the President of the Church is loosely equivalent to the Catholic Pope. Things get a little funky as you get into lower ranks: the LDS org chart is much more complicated than the Catholic, with several layers of leadership councils and more titles than I can easily keep straight. Though it hasn't developed the numerous unofficial and semi-official leadership roles that Catholicism has, being a smaller and younger religion.

comment by JohnH · 2013-04-15T21:17:35.045Z · LW(p) · GW(p)

I am mostly just answering direct questions, I am horrible at walking away when questions are asked. Since this conversation is far outside of the norms of the group, I will do so in a private message if atomliner wants to continue the conversation. If he would rather it be public I would be willing to set up a blog for the purpose of continuing this conversation.

comment by MugaSofer · 2013-04-17T12:04:14.303Z · LW(p) · GW(p)

The reason we are having this discussion is because I feel you've characterized me unfairly as "the Ex-Mormon who never really knew his own religion and had no reason to believe in the fringe theories he did". My goal is to support my case that I really was a mainstream Latter-day Saint before I lost my faith. So, you can use your apologetic arguments all you want for whatever idea you have about Mormonism, but if they aren't based clearly in the scriptures (which I studied a great deal), and if they were never taught widely in the Church, then why exactly did I err in not coming to the same understanding as you? I do not think you have any good evidence for why I was an atypical Mormon who was unjustified in believing in the things I did.

This here is an excellent point. I'm pretty sure all religions have "unofficial" doctrines; certainly it would fit my experience. Such doctrines have no bearing on the truth of the "official" doctrines, technically, but they are identified with the religion by believers and unbelievers alike.

That said, while I'm hardly an authority on Mormonism, I would guess your beliefs were more, well, strange than average - simply because your deconversion selects for unconvincing and dissonant beliefs.

comment by JohnH · 2013-04-15T20:06:29.828Z · LW(p) · GW(p)

So, you can use your apologetic arguments all you want for whatever idea you have about Mormonism, but if they aren't based clearly in the scriptures

Please show what I said (excluding the reference to Confucius) is not clearly based in scripture; Numbers 11:29 may be helpful.

would not teach false doctrine?

Yes, that is unreasonable to assume.

Do you think that a person who speaks face-to-face with Jesus Christ would then teach his own false ideas to members of Christ's One True Church?

Absolutely, if Jesus says something to a prophet then what Jesus said was correct. What the prophet thinks and communicates in addition to that particular thing has no guarantee of being correct and is very likely to be at least partially incorrect. The prophet will place the words of Jesus in the framework of other beliefs and cultural constructs in the world in which they live. Prophets just as much as anyone else do not receive the fullness at once, meaning that of necessity some of their beliefs (and therefore some of their teachings) will not be correct, excluding Jesus. Prophets are not perfect any more than anyone else is perfect and we are supposed to use the light of the Spirit to discern the truth ourselves rather than follow the prophet without thought or seeking to know for ourselves. In other words, telling people to seek God as to every question is calling them to be prophets.

Why is what he said logical to you?

Because I have not stood in the Divine Council and so I know that not only do I not know the secrets of God I also do not have a complete understanding of faith, repentance, baptism, and the Gift of the Holy Ghost, of loving God or of loving my neighbor as myself, nor will I until, either in this life or the next, I hear the Father say Ye shall have eternal life and receive an end to my faith.

And you are how old?

Why is that relevant? Older than you.

Replies from: atomliner
comment by atomliner · 2013-04-15T20:47:10.738Z · LW(p) · GW(p)

Please show what I said (excluding the reference to Confucius) is not clearly based in scripture; Numbers 11:29 may be helpful.

I apologize. I had thought that you were using the three scriptures I quoted earlier to support the point that the scriptures confirm that atheists can be as happy, healthy, and moral as theists. In actuality, you were using them to describe how blessings come from following the commandments and not just from belief in the first two cases, and in the third case you were supporting the idea that God understands it is difficult for people to distinguish truth from error.

The point I made about our conversation still stands, however. Your goal seems to be "Make atomliner look like he didn't believe in things Mormons should" while my goal is "show I was a normal Latter-day Saint before losing my faith".

What the prophet thinks and communicates in addition to that particular thing has no guarantee of being correct and is very likely to be at least partially incorrect. The prophet will place the words of Jesus in the framework of other beliefs and cultural constructs in the world in which they live. Prophets just as much as anyone else do not receive the fullness at once, meaning that of necessity some of their beliefs (and therefore some of their teachings) will not be correct, excluding Jesus.

I have two problems with this. The first is that I do not see any scriptures supporting this view clearly. How was I supposed to know this? No one teaches in church that prophets can teach false doctrine. In my experience with hundreds of active Latter-day Saints, THIS belief is atypical. In fact I just got called out by a bunch of mission buddies for saying this on Facebook, that the prophets can sometimes lead us astray (we were talking about gay marriage), and I got called an apostate outright.

My second problem is that I said false doctrine, not small inaccuracies attributed to translation error. You think that a prophet could speak to Jesus Christ face-to-face and then write up entire discourses on stuff like Adam-God theory, blood atonement, doctrinal racism and affirm boldly that this is the truth to the Saints? God must have a very strange way of picking his prophets, it seems like he would want to call people who wouldn't invent their own ideas and who would simply repeat to the Saints what was said to them by Christ. I mean, does God want the truth expressed accurately or not? Were the prophets really the best people available for this task?? They have a terrible track record.

Why is what he said logical to you?

Because I have not stood in the Divine Council and so I know that not only do I not know the secrets of God I also do not have a complete understanding of faith, repentance, baptism, and the Gift of the Holy Ghost, of loving God or of loving my neighbor as myself, nor will I until, either in this life or the next, I hear the Father say Ye shall have eternal life and receive an end to my faith.

Great. That still makes no logical sense to ME since I don't believe in any of that. So, failure on your part to defend this point from an objective argument.

Why is that relevant? Older than you.

You are saying in your experience Mormonism is obviously a certain way and I'm saying in my experience Mormonism was not that way... I was wondering how much of a difference there is in our amount of experience. Did you hold all of these liberal Mormon beliefs when you were 21?

comment by MugaSofer · 2013-04-17T12:16:04.749Z · LW(p) · GW(p)

you can be moral, healthy, and happy without faith in God

Paul saying those that didn't know God and that didn't have the law but that acted justly being justified because of their actions doesn't imply to you that it is possible to be moral, healthy, and happy without faith in God? How about this, where in "There is a law, irrevocably decreed in heaven before the foundations of this world, upon which all blessings are predicated— And when we obtain any blessing from God, it is by obedience to that law upon which it is predicated." does it mention anything about having faith in God being a prerequisite for receiving a blessing? Where in "if ye have done it unto the least of these my brethren ye have done it unto me" does it say that one must believe in God for that to be valid?

Regardless of its scriptural authenticity, it is a common claim. I'm not surprised atomliner thought this at some point.

[Disclaimer: I'm extrapolating from mainstream Christianity here. It's possible this does not apply to Mormons.]

comment by [deleted] · 2013-04-15T05:37:35.384Z · LW(p) · GW(p)

I tend to focus on the current authorized messengers from God and the Holy Spirit as I feel that is what I have been instructed to do.

Who authorizes messengers from God? It's not like He has a public key, after all...
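
For readers who haven't met the metaphor: a public key is what lets anyone check that a message really came from the holder of the matching private key. A minimal sketch, using the third-party Python cryptography package and a made-up message, purely to show what such a check looks like:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The sender signs with a private key; anyone holding the matching public
    # key can verify that the message came from that sender and was not altered.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = b"a purported revelation"
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)
        print("Authenticated: signed by the holder of the private key.")
    except InvalidSignature:
        print("Rejected: signature does not match the message.")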

Replies from: JohnH, MugaSofer
comment by JohnH · 2013-04-15T17:23:37.407Z · LW(p) · GW(p)

Who authorizes messengers from God?

God obviously.

It's not like He has a public key,

There are actually quite a few rules given to determine if a messenger is from God. Jesus for instance said "If any man will do his will, he shall know of the doctrine, whether it be of God, or whether I speak of myself", then there is the qualification in Deuteronomy, the requirement in John, the experiment in Alma, the promise in Moroni, and some details in the D&C. It is somewhat of a bootstrapping problem as one must already trust one of those sources, or the person presenting those sources, enough to move forward in trying to verify the source and messenger.

comment by MugaSofer · 2013-04-15T10:32:33.303Z · LW(p) · GW(p)

Well, the usual method is to simply check if they're consistent with earlier messages. Which is great until you remember that the Devil can quote scripture to his purpose.

The canonical method, on the other hand, is to check if the messenger professes Jesus as Lord, since as we all know demons can't do that, so by process of elimination it must be an angel (and therefore true.)

No, I don't know what you're supposed to do if it's a hallucination.

Replies from: army1987
comment by A1987dM (army1987) · 2013-04-15T16:00:30.447Z · LW(p) · GW(p)

we all know demons can't do that

Can't they?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T14:30:04.794Z · LW(p) · GW(p)

Nope. Well-known fact.

Seriously, that's official doctrine, that is. Actually, come to think of it, don't some demons call him Lord in the NT?

Replies from: CAE_Jones
comment by CAE_Jones · 2013-04-19T15:06:19.719Z · LW(p) · GW(p)

Jesus goes so far as to discourage both humans and demons from telling people about his Messiahship; demons tended to be pretty quick to start yelling about how he was the messiah / could torment them / etc. Legion is the most memorable case, but I seem to remember an incident from earlier on in Jesus' life when he had to silence a demon that was revealing his identity (maybe it was in Luke?).

Replies from: MugaSofer
comment by MugaSofer · 2013-04-19T17:21:16.470Z · LW(p) · GW(p)

And yet, I've read that piece of, um, advice in books by at least one actual exorcist; a pretty high-level one at that. It appears to be the official Thing To Do in response to a questionably divine visitation.

Yes, I read books on exorcism. Don't judge me.

comment by MugaSofer · 2013-04-15T10:21:53.674Z · LW(p) · GW(p)

Hey, people make mistakes. That it's fairly easy to come to erroneous conclusions on these topics should be evident from the JoD itself.

comment by atomliner · 2013-04-14T09:52:12.998Z · LW(p) · GW(p)

The arguments seemed to make more sense to me than those made for the existence of God? I don't know, it's a long book. The parts I liked the most were about the prayer experiment that showed no correlation between prayers and the recovery of hospital patients and how you can be a good, healthy, happy person without believing in God. Those were things I had never heard before.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-14T14:58:14.827Z · LW(p) · GW(p)

Hmm. I can't speak for JohnH but I had heard those before - maybe that had something to do with it.

comment by private_messaging · 2013-04-12T10:13:47.196Z · LW(p) · GW(p)

Be careful. There's a noted tendency to fill the void left by god with very god-like artificial intelligences, owners of the simulation we might be living in, and the like.

comment by Paamayim · 2013-04-02T21:40:33.999Z · LW(p) · GW(p)

Aloha.

My name is Sandy and despite being a long time lurker, meetup organizer and CFAR minicamp alumnus, I've got a giant ugh field around getting involved in the online community. Frankly it's pretty intimidating and seems like a big barrier to entry - but this welcome thread is definitely a good start :)

IIRC, I was linked to Overcoming Bias through a programming pattern blog in the few months before LW came into existence, and subsequently spent the next three months of my life doing little else than reading the sequences. While it was highly fascinating and seemed good for my cognitive health, I never thought about applying it to /real life/.

Somehow I ended up at CFAR's January minicamp, and my life literally changed. After so many years, CFAR helped me finally internalize the idea that /rationalists should win/. I fully expect the workshop to be the most pivotal event in my entire life, and would wholeheartedly recommend it to absolutely anyone and everyone.

So here's to a new chapter. I'm going to get involved in this community or die trying.

PS: If anyone is in the Kitchener/Waterloo area, they should definitely come out to UW's SLC tonight at 8pm for our LW meetup. I can guarantee you won't be disappointed!

comment by Laplante · 2013-04-01T03:47:57.329Z · LW(p) · GW(p)

Hello, Less Wrong; I'm Laplante. I found this site through a TV Tropes link to Harry Potter and the Methods of Rationality about this time last year. After I'd read through that as far as it had been updated (chapter 77?), I followed Yudkowsky's advice to check out the real science behind the story and ended up here. I mucked about for a few days before finding a link to yudkowsky.net, where I spent about a week trying to learn what exactly Bayes was all about. I'm currently working my way through the sequences, just getting into the quantum physics sequence now.

I'm currently in the dangerous position of having withdrawn from college, and my productive time is spent between a part-time job and this site. I have no real desire to return to school, but I realize that entry into any sort of psychology/neuroscience/cognitive science field without a Bachelor's degree - preferably more - is near impossible.

I'm aware that Yudkowsky is doing quite well without a formal education, but I'd rather not use that as a general excuse to leave my studies behind entirely.

My goals for the future are to make my way through MIRI's recommended course list, and the dream is to do my own research in a related field. We'll see how it all pans out.

Replies from: RolfAndreassen, shminux, Michelle_Z, beoShaffer
comment by RolfAndreassen · 2013-04-01T18:27:34.121Z · LW(p) · GW(p)

my productive time is spent between a part-time job and this site.

Perhaps I'm reading a bit much into a throwaway phrase, but I suggest that time spent reading LessWrong (or any self-improvement blog, or any blog) is not, in fact, productive. Beware the superstimulus of insight porn! Unless you are actually using the insights gained here in a measurable way, I very strongly suggest you count LessWrong reading as faffing about, not as production. (And even if you do become more productive, observe that this is probably a one-time effect: Continued visits are unlikely to yield continual improvement, else gwern and Alicorn would long since have taken over the world.) By all means be inspired to do more work and smarter work, but do not allow the feeling of "I learned something today" to substitute for Actually Doing Things.

All that aside, welcome to LessWrong! We will make your faffing-about time much more interesting. BWAH-HAH-HAH!

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-04-02T08:50:07.653Z · LW(p) · GW(p)

do not allow the feeling of "I learned something today" to substitute for Actually Doing Things.

Learning stuff can be pretty useful. Especially stuff extremely general in its application that isn't easy to just look up when you need it, like rationality. If the process of learning is enjoyable, so much the better.

Replies from: Dentin
comment by Dentin · 2013-04-06T03:29:22.304Z · LW(p) · GW(p)

I think you may have misinterpreted a critical part of the sentence:

'do not allow the FEELING of "I learned something today" to substitute for Actually Doing Things.'

Insight porn, so to speak, is that way because it makes you feel good, like you can Actually Do Things and like you have the tools to now Actually Do Things. But if you don't get up and Actually Do Things, you have only learned how to feel like you can Actually Do Things, which isn't nearly as useful as it sounds.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-07-11T06:22:04.817Z · LW(p) · GW(p)

Sure, I agree. IMO, any self-improvement effort should be intermixed with lots of attempts to accomplish object-level goals so you can get empirical feedback on what's working and what isn't.

comment by Shmi (shminux) · 2013-04-01T07:25:25.511Z · LW(p) · GW(p)

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading. Or at least stop where the many worlds musings start. The whole thing is way too verbose and controversial for the number of useful points it makes. Your time is much better spent reading about cognitive biases. If you want epistemology, try the new sequence.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-01T15:20:24.955Z · LW(p) · GW(p)

Bad advice for technical readers. Mihaly Barasz (IMO gold medalist) got here via HPMOR but only became seriously interested in working for MIRI after reading the QM sequence.

Given those particular circumstances, can I ask that you stop with that particular bit of helpful advice?

Replies from: Vaniver, shminux, TimS
comment by Vaniver · 2013-04-01T15:37:53.957Z · LW(p) · GW(p)

Bad advice for technical readers. Mihaly Barasz (IMO gold medalist) got here via HPMOR but only became seriously interested in working for MIRI after reading the QM sequence.

Do you have a solid idea of how many technical readers get here via HPMOR but become uninterested in working for MIRI after reading the QM sequence? If not, isn't this potentially just the selection effect?

Replies from: Kawoomba
comment by Kawoomba · 2013-04-01T17:50:37.499Z · LW(p) · GW(p)

EY can rationally prefer the certain evidence of some Mihaly-Barasz-caliber researchers joining when exposed to the QM sequence

over

speculations about whether the loss of Mihaly Barasz (had he not read the QM sequence) would be outweighed by even more / better technical readers becoming interested in joining MIRI, taking into account the selection effect.

Personally, I'd go with what has been proven/demonstrated to work as a high-quality attractor.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-01T18:23:16.181Z · LW(p) · GW(p)

Yep. I also tend to ignore nontechnical folks along the lines of RationalWiki getting offended by my thinking that I know something they don't about MWI. Carl often hears about, anonymizes, and warns me when technical folks outside the community are offended by something I do. I can't recall hearing any warnings from Carl about the QM sequence offending technical people.

Bluntly, if shminux can't grasp the technical argument for MWI then I wouldn't expect him to understand what really high-class technical people might think of the QM sequence. Mihaly said the rest of the Sequences seemed interesting but lacked sufficient visible I-wouldn't-have-thought-of-that nature. This is very plausible to me - after all, the Sequences do indeed seem to me like the sort of thing somebody might just think up. I'm just kind of surprised the QM part worked, and it's possible that might be due to Mihaly having already taken standard QM so that he could clearly see the contrast between the explanation he got in college and the explanation on LW. It's a pity I'll probably never have time to write up TDT.

Replies from: EHeller, Vaniver, None
comment by EHeller · 2013-04-03T08:15:02.791Z · LW(p) · GW(p)

I have a phd in physics (so I have at least some technical skill in this area) and find the QM sequence's argument for many worlds unconvincing. You lead the reader toward a false dichotomy (Copenhagen or many worlds) in order to suggest that the low probability of Copenhagen implies many worlds. This ignores a vast array of other interpretations.

It's also the sort of argument that seems very likely to sway someone with an intro class in college (one or two semesters of a Copenhagen-based shut-up-and-calculate approach), precisely because having seen Copenhagen and nothing else they 'know just enough to be dangerous', as it were.

For me personally, the quantum sequence threw me into some doubt about the previous sequences I had read. If I have issues with the area I know the most about, how much should I trust the rest? Others' mileage may vary.

Replies from: shminux, Vaniver
comment by Shmi (shminux) · 2013-04-03T19:24:27.389Z · LW(p) · GW(p)

I have a phd in physics (so I have at least some technical skill in this area) and find the QM sequence's argument for many worlds unconvincing.

Actually, attempting to steelman the QM Sequence made me realize that the objective collapse models are almost certainly wrong, due to the way they deal with the EPR correlations. So the sequence has been quite useful to me.

On the other hand, it also made me realize that the naive MWI is also almost certainly wrong, as it requires uncountable worlds created in any finite interval of time (unless I totally misunderstand the MWI version of radioactive decay, or any emission process for that matter). It has other issues, as well. Hence my current leanings toward some version of RQM, which EY seems to dislike almost as much as his straw Copenhagen, though for different reasons.

For me personally, the quantum sequence threw me into some doubt about the previous sequences I had read.

Right, I've had a similar experience, and I heard it voiced by others.

As a result of re-examining EY's take on epistemology of truth, I ended up drifting from the realist position (map vs territory) to an instrumentalist position (models vs inputs&outputs), but this is a topic for another thread. I am quite happy with the sequences related to cognitive science, where, admittedly, I have zero formal expertise. But they seem to match what the actual experts in the field say.

I am on the fence with the free-will "dissolution", precisely because I know that I am not qualified to spot an error and there is little else out there in terms of confirming evidence or testable predictions.

I am quite skeptical about the dangers of AGI x-risk, mainly because it seems to extrapolate too far beyond what is known into the fog of the unknown future, though I do appreciate quite a few points made in the relevant sequences. Again, I am not qualified to judge their validity.

Replies from: Plasmon, private_messaging
comment by Plasmon · 2013-04-04T06:35:37.962Z · LW(p) · GW(p)

as it (MWI) requires uncountable worlds created in any finite interval of time

How is that any more problematic than doing physics with real or complex numbers in the first place?

Replies from: shminux
comment by Shmi (shminux) · 2013-04-04T07:33:45.092Z · LW(p) · GW(p)

It means that EY's musings about the Eborians splitting into worlds of various thicknesses according to Born probabilities no longer make any sense. There is a continuum of worlds, all equally and infinitesimally thin, created every picosecond.

Replies from: MugaSofer, army1987
comment by MugaSofer · 2013-04-10T14:34:33.233Z · LW(p) · GW(p)

It means that EY's musings about the Eborians splitting into worlds of various thicknesses according to Born probabilities no longer make any sense.

coughmeasurecough

comment by A1987dM (army1987) · 2013-04-04T12:27:13.729Z · LW(p) · GW(p)

The way I understand it, it's not that “new” worlds are created that didn't previously exist (the total “thickness” (measure) stays constant). It's that two worlds that looked the same ten seconds ago look different now.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-04T15:00:24.948Z · LW(p) · GW(p)

That's a common misconception. In the simplest case of Schrodinger's cat, there are not just two worlds, one with the cat dead and one with the cat alive. When you open the box, you could find the cat in various stages of decomposition, which gives you uncountably many worlds right there. In a slightly more complicated version, where energy and the direction of the decay products are also measurable (and hence each possible value is measured in at least one world), your infinities keep piling up every which way, all equally probable or nearly so.

Replies from: army1987
comment by A1987dM (army1987) · 2013-04-04T16:39:32.761Z · LW(p) · GW(p)

(By “two” I didn't mean to imply ‘the only two’.)

Replies from: shminux
comment by Shmi (shminux) · 2013-04-04T16:52:25.897Z · LW(p) · GW(p)

Which two out of the continuum of worlds, then, did you imply, and how did you select them? I don't see any way to select two specific worlds for which "relative thickness" would make sense. You can classify the worlds into "dead/not dead at a certain instant of time" groups whose measures you can then compare, of course. But how would you justify this aggregation with the statement that the worlds, once split, no longer interact? What mysterious process makes this aggregation meaningful? Even if you flinch away from this question, how do you select the time of the measurement? This time is slightly different in different worlds, even if it is predetermined "classically", so there is no clear "splitting begins now" moment.

It gets progressively worse and more hopeless as you dig deeper. How does this splitting propagate in spacetime? How do two spacelike-separated splits merge in just the right way to preserve only the spin-conserving worlds of the EPR experiment and not all possibilities? How do you account for the difference in the proper time between different worlds? Do different worlds share the same spacetime and for how long? Does it mean that they still interact gravitationally (spacetime curvature = gravity)? What happens if the spacetime topology of some of the worlds changes, for example by collapsing a neutron star into a black hole? I can imagine that these questions can potentially be answered, but the naive MWI advocated by Eliezer does not deal with any of this.

comment by private_messaging · 2013-04-04T06:15:20.922Z · LW(p) · GW(p)

Are there actually any physicists who find the QM sequence to make as strongly compelling a case for MWI as EY says it does?

I know Mitchell Porter is likewise a physicist and he's not convinced at all either.

Replies from: wedrifid
comment by wedrifid · 2013-04-04T06:20:51.276Z · LW(p) · GW(p)

I know Mitchell Porter is likewise a physicist and he's not convinced at all either.

Mitchell Porter also advocates Quantum Monadology and various things about fundamental qualia. The difference in assumptions about how physics (and rational thought) works between Eliezer (and most of Eliezer's target audience) and Mitchell Porter is probably insurmountable.

Replies from: private_messaging
comment by private_messaging · 2013-04-04T06:40:44.992Z · LW(p) · GW(p)

Mitchell Porter also advocates Quantum Monadology.

Yeah, and EY [any of the unmentionable things].

For another point, Scott Aaronson doesn't seem convinced either. Robin Hanson, while himself (it seems) an MWI believer, doesn't appear to think that it's so conclusively settled.

Replies from: wedrifid
comment by wedrifid · 2013-04-04T06:51:00.266Z · LW(p) · GW(p)

Yeah, and EY [any of the unmentionable things].

The relevance of Porter's physics beliefs is that any reader who disagrees with Porter's premises but agrees with the premises used in an article can gain little additional information about the quality of the article by learning that Porter is not convinced by it. ie. Whatever degree of authority Mitchell Porter's status grants goes (approximately) in the direction of persuading the reader to adopt those different premises.

In this way mentioning Porter's beliefs is distinctly different from mentioning the people that you now bring up:

For another point, Scott Aaronson doesn't seem convinced either. Robin Hanson, while himself (it seems) an MWI believer, doesn't appear to think that it's so conclusively settled.

Replies from: private_messaging
comment by private_messaging · 2013-04-04T08:51:36.490Z · LW(p) · GW(p)

The relevance of Porter's physics beliefs is that any reader who disagrees with Porter's premises but agrees with the premises used in an article can gain little additional information about the quality of the article by learning that Porter is not convinced by it. ie. Whatever degree of authority Mitchell Porter's status grants goes (approximately) in the direction of persuading the reader to adopt those different premises.

What one can learn is that the allegedly "settled" and "solved" question is far from settled and solved, and is a matter of opinion as of now. This also goes for qualia and the like; we haven't reduced them to anything, merely asserted that they reduce.

It extends all the way up, competence-wise - see Roger Penrose.

It's fine to believe in MWI if that's where your philosophy falls; it's another thing entirely to argue that belief in MWI is independent of priors and a philosophical stance, and yet another to argue that people fail to be swayed by a very biased presentation of the issue which omits every single point that goes in favour of e.g. non-realism, because they are too irrational or too stupid.

Replies from: CarlShulman, Ritalin, MugaSofer
comment by CarlShulman · 2013-04-04T22:43:11.337Z · LW(p) · GW(p)

which omits every single point that goes in favour of e.g. non-realism, because they are too irrational or too stupid.

No, that set of posts goes on at some length about how MWI has not yet provided a good derivation of the Born probabilities.

Replies from: EHeller, private_messaging
comment by EHeller · 2013-04-05T00:21:56.476Z · LW(p) · GW(p)

No, that set of posts goes on at some length about how MWI has not yet provided a good derivation of the Born probabilities.

But I think it does not do justice to what a huge deal the Born probabilities are. The Born probabilities are the way we use quantum mechanics to make predictions, so saying "MWI has not yet provided a good derivation of the Born probabilities" is equivalent to "MWI does not yet make accurate predictions." I'm not sure that's clear to people who read the sequences but don't use quantum mechanics regularly.
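
For readers who don't use QM day to day, the rule at stake is easy to state (standard textbook form): measuring an observable with eigenstates $|i\rangle$ on a system in state $|\psi\rangle$ yields outcome $i$ with probability

$$P(i) = |\langle i|\psi\rangle|^2$$

Take that one line away and the formalism hands you amplitudes, but no numbers to compare against experiment.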

Also, by omitting the wide variety of non-Copenhagen interpretations (consistent histories, transactional, Bohm, stochastic modifications to Schroedinger, etc.) the reader is led to believe that the alternative to Copenhagen-collapse is many worlds, so they won't use the absence of Born probabilities in many worlds to update towards one of the many non-Copenhagen alternatives.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-05T00:44:43.998Z · LW(p) · GW(p)

Note that the Born probabilities really obviously have something to do with the unitarity of QM, while no single-world interpretation is going to have this be anything but a random contingent fact. The unitarity of QM means that integral-squared-modulus quantifies the "amount of causal potency" or "amount of causal fluid" or "amount of conserved real stuff" in a blob of the wavefunction. It would be like discovering that your probability of ending up in a computer corresponded to how large the computer was. You could imagine that God arbitrarily looked over the universe and destroyed all but one computer with probability proportional to its size, but this would be unlikely. It would be much more likely (under circumstances analogous to ours) to guess that the size of the computer had something to do with the amount of person in it.
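
In symbols, the unitarity point is just this (a minimal restatement, not a derivation of the Born rule): writing $|\psi\rangle = \sum_i a_i |i\rangle$, unitary evolution ($U^\dagger U = 1$) gives

$$\sum_i |a_i(t)|^2 = \langle\psi(t)|\psi(t)\rangle = \langle\psi(0)|U^\dagger U|\psi(0)\rangle = 1 \text{ for all } t$$

so the squared modulus is the one additive quantity the dynamics conserves across blobs of the wavefunction, which is the sense in which it measures "amount of conserved real stuff".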

The problems with Copenhagen are fundamentally one-world problems and they go along with any one-world theory. If I honestly believed that the only reason the QM sequence wasn't convincing was that I didn't go through every single one-world theory to refute them separately, I could try to write separate posts for RQM, Bohm, and so on, but I'm not convinced that this is the case. Any single-world theory needs either spooky action at a distance, or really awful amateur epistemology plus spooky action at a distance, and there's just no reason to even hypothesize single-world theories in the first place.

(I'm not sure I have time to write the post about Relational Special Relativity in which length and time just aren't the same for all observers and so we don't have to suppose that Minkowskian spacetime is objectively real, and anyway the purpose of a theory is to tell us how long things are so there's no point in a theory which doesn't say that, and those silly Minkowskians can't explain how much subjective time things seem to take except by waving their hands about how the brain contains some sort of hypothetical computer in which computing elements complete cycles in Minkowskian intervals, in contrast to the proper ether theory in which the amount of conscious time that passes clearly corresponds to the Lorentzian rule for how much time is real relative to a given vantage point...)

Replies from: wedrifid, EHeller
comment by wedrifid · 2013-04-05T03:25:33.200Z · LW(p) · GW(p)

The problems with Copenhagen are fundamentally one-world problems and they go along with any one-world theory. If I honestly believed that the only reason the QM sequence wasn't convincing was that I didn't go through every single one-world theory to refute them separately, I could try to write separate posts for RQM, Bohm, and so on, but I'm not convinced that this is the case. Any single-world theory needs either spooky action at a distance, or really awful amateur epistemology plus spooky action at a distance, and there's just no reason to even hypothesize single-world theories in the first place.

It is not worth writing separate posts for each interpretation. However, it is becoming increasingly apparent that, to the extent that the QM sequence matters at all, it may be worth writing a single post which outlines how your arguments apply to the other interpretations, i.e.:

  • A brief summary of and a link to your arguments in favor of locality then an explicit mention of how this leads to rejecting "Ensemble, Copenhagen, de Broglie–Bohm theory, von Neumann, Stochastic, Objective collapse and Transactional" interpretations and theories.
  • A brief summary of and a link to your arguments about realism in general and quantum realism in particular and why the wavefunction not being considered 'real' counts against "Ensemble, Copenhagen, Stochastic and Relational" interpretations.
  • Some outright mockery of the notion that observation and observers have some kind of intrinsic or causal role (Copenhagen, von Neumann and Relational).
  • Mention hidden variables and the complexity burden thereof (de Broglie–Bohm, Popper).

Having such a post as part of the sequence would make it trivial to dismiss claims like:

You lead the reader toward a false dichotomy (Copenhagen or many worlds) in order to suggest that the low probability of copenhagen implies many worlds. This ignores a vast array of other interpretations.

... as straw men. As it stands, however, this kind of claim (evidently, by its reception) persuades many readers, despite being significantly different from the reasoning that you intended to convey.

If it is worth your maintaining active endorsement of your QM posts, it may be worth ensuring both that it is somewhat difficult to actively misrepresent them and that the meaning of your claims is as clear as it can conveniently be made. If there are Mihaly Baraszes out there whom you can recruit via the sanity of your physics epistemology, there are also quite possibly IMO gold medalists out there who could be turned off by seeing negative caricatures of your QM work so readily accepted, and who would then not bother looking further.

comment by EHeller · 2013-04-05T01:11:33.734Z · LW(p) · GW(p)

Note that the Born probabilities really obviously have something to do with the unitarity of QM, while no single-world interpretation is going to have this be anything but a random contingent fact.

Not so. If we insist that our predictions need to be probabilities (take the Born probabilities as fundamental/necessary), then unitarity becomes equivalent to the statement that probabilities have to sum to 1, and we can then try to piece together what our update equation should look like. This is the approach taken by the 'minimalist'/'ensemble' interpretation that Ballentine's textbook champions: he uses the requirement that probabilities sum to 1, plus some group theory (related to the Galilean symmetry group), to motivate the form of the Schroedinger equation. Edit to clarify: In some sense, it's the reverse of many worlds - instead of taking the Schroedinger axioms as fundamental and attempting to derive Born, take the operator/probability axioms seriously and try to derive Schroedinger.
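
Schematically, that reverse direction runs something like this (a compressed sketch, not Ballentine's exact route): demand that total probability be conserved, $\langle\psi(t)|\psi(t)\rangle = 1$ for all $t$, so the evolution map $U(t)$ must be unitary with $U(t+s) = U(t)U(s)$; Stone's theorem then gives $U(t) = e^{-iHt/\hbar}$ for some self-adjoint $H$, i.e.

$$i\hbar \, \partial_t |\psi(t)\rangle = H |\psi(t)\rangle$$

with the group-theoretic input (Galilean symmetry) constraining what $H$ can look like.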

I believe the same consideration could be said of the consistent histories approach, but I'd have to think about it before I'd fully commit.

Edit to add: Also, what about "non-spooky" action at a distance? Something like the transactional interpretation, where we take relativity seriously and use both the forward and backward Green's functions of the Dirac/Klein-Gordon equation? This integrates very nicely with Barbour's timeless physics, properly derives the Born rule, and has a single world, BUT requires some stochastic modifications to the Schroedinger equation.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-05T01:25:15.457Z · LW(p) · GW(p)

What surprises me in the QM interpretational world is that the interaction process itself is clearly more than just a unitary evolution of some wave function, given how the number of particles is not conserved, requiring the full QFT approach, and probably more, yet (nearly?) all interpretations stop at the QM level, without any attempt at some sort of second quantization. Am I missing something here?

Replies from: EHeller
comment by EHeller · 2013-04-05T01:52:43.464Z · LW(p) · GW(p)

Mostly just that QFT is very difficult and not rigorously formulated. Haag's theorem (and Wightman's extension) tells us that an interacting quantum field theory can't live in a nice Hilbert space, so there is a very real sense in which realistic QFTs only exist perturbatively. This makes interpretation something of a nightmare.

Basically, we ignore a bunch of messy complications (and potential inconsistency) just to shut-up-and-calculate, no one wants to dig up all that 'just' to get to the messy business of interpretation.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-05T06:21:56.202Z · LW(p) · GW(p)

Are you saying that people knowingly look where it's light, instead of where they lost the keys?

Replies from: EHeller
comment by EHeller · 2013-04-05T06:27:21.545Z · LW(p) · GW(p)

More or less. If the axiomatic field theory guys ever make serious progress, expect a flurry of me-too type interpretation papers to immediately follow. Until then, good luck interpreting a theory that isn't even fully formulated yet.

If you are ever in a bar after a particle phenomenology conference lets out, ask the general room what, exactly, a particle is, and what it means that the definition is NOT observer independent.

Replies from: shminux, army1987
comment by Shmi (shminux) · 2013-04-05T06:45:19.002Z · LW(p) · GW(p)

Oh, I know what a particle is. It's a flat-space interaction-free limit of a field. But I see your point about observer dependence.

Replies from: EHeller
comment by EHeller · 2013-04-05T06:55:40.952Z · LW(p) · GW(p)

Then what is it, exactly, that particle detectors detect? Because it surely can't be interaction-free limits of fields. Also, when we go to the Schroedinger equation with a potential, what are we modeling? It can't be a particle - the potential is non-perturbative! Also, for any charged particle, the IR divergence prevents the limit, so you have to be careful: 'real' electrons are linear combinations of 'bare' electrons and photons.

Replies from: shminux, army1987
comment by Shmi (shminux) · 2013-04-05T16:38:56.858Z · LW(p) · GW(p)

What I meant was that if you think of field excitations propagating "between interactions", they can be identified with particles. And you are right, I was neglecting those pesky massless virtual photons in the IR limit. As for the SE with a potential, this is clearly a semi-classical setup: there are no external classical potentials; they all arise as mean-field pictures of a reasonably stable many-particle interaction (a contradiction in terms though it might be). I think I pointed that out earlier in some thread.
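
The "between interactions" picture can be made a bit more concrete (schematic free-field form, with constants and normalizations dropped): a free field expands as

$$\phi(x) \sim \int d^3k \left( a_{\mathbf{k}} e^{-ik\cdot x} + a^\dagger_{\mathbf{k}} e^{ik\cdot x} \right)$$

and "particles" are the quanta created by the $a^\dagger_{\mathbf{k}}$; the decomposition is only unambiguous for the free field, which is the sense in which a particle is an interaction-free limit.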

The more I learn about the whole thing, the more I realize that all of Quantum Physics is basically a collection of miraculously working hacks, like narrow trails in a forest full of unknown deadly wildlife. This is markedly different from classical physics, including relativity, where most of the territory is mapped, but there are still occasional dangers, most of which are clearly marked with orange cones.

Replies from: army1987, OrphanWilde, army1987
comment by A1987dM (army1987) · 2013-04-06T10:40:42.166Z · LW(p) · GW(p)

Somebody: Virtual photons don't actually exist: they're just a bookkeeping device to help you do the maths.

Someone else, in a different context: Real photons don't actually exist: each photon is emitted somewhere and absorbed somewhere else a possibly long but still finite amount of time later, making that a virtual photon. Real photons are just a mathematical construct approximating virtual photons that live long enough.

Me (in yet a different context, jokingly): [quotes the two people above] So, virtual photons don't exist, and real photons don't exist. Therefore, no photons exist at all.

Replies from: EHeller
comment by EHeller · 2013-04-08T15:17:54.860Z · LW(p) · GW(p)

Me (in yet a different context, jokingly): [quotes the two people above] So, virtual photons don't exist, and real photons don't exist. Therefore, no photons exist at all.

This is less of a joke than you think - it's more or less correct. If you change the final conclusion to "there isn't a good definition of photon" you'd be there. It's worse for QCD, where the theory has an SU(3) symmetry you pretty much have to sever in order to treat the theory perturbatively.

comment by OrphanWilde · 2013-04-05T19:23:51.705Z · LW(p) · GW(p)

all of Quantum Physics is basically a collection of miraculously working hacks

It really is. When you look at the experiments they're performing, it's kind of a miracle they get any kind of usable data at all. And explaining it to intelligent people is this near-infinite recursion of "But how do they know that experiment says what they say it does" going back more than a century with more than one strange loop.

Seriously, I've tried explaining just the proof that electrons exist, and in the end the best argument is that all the math we've built assuming their existence has really good predictive value. Which sounds like great evidence until you start confronting all the strange loops (the best experiments assume electromagnetic fields...) in that evidence, and I don't even know how to -begin- untangling those. I'm convinced you could construct parallel physics with completely different mechanics (maybe the narrow trails aren't as narrow as you'd think?) and get exactly the same results. And quantum field theory's history of parallel physics doesn't exactly help my paranoia there, even if they did eventually clean -most- of it up.

Replies from: shminux, EHeller, Eugine_Nier
comment by Shmi (shminux) · 2013-04-05T20:21:25.113Z · LW(p) · GW(p)

in the end the best argument is that all the math we've built assuming their existence has really good predictive value.

I fail to see the difference between this and "electrons exist". But then my definition of existence only talks about models, anyway.

I am also not sure what strange loops you are referring to, feel free to give a couple of examples.

I'm convinced you could construct parallel physics with completely different mechanics [...] and get exactly the same results.

Most likely. It happens quite often (like Heisenberg's matrix mechanics vs Schrodinger's wave mechanics). Again, I have no problem with multiple models giving the same predictions, so I fail to see the source of your paranoia...

My beef with quantum physics is that there are many straightforward questions within its own framework it does not have answers to.

Replies from: army1987, OrphanWilde, MugaSofer
comment by A1987dM (army1987) · 2013-04-06T19:34:42.576Z · LW(p) · GW(p)

I fail to see the difference between this and "electrons exist". But then my definition of existence only talks about models, anyway.

Imagine there's a different, as-yet-unknown [ETA: simpler] model that doesn't have electrons but makes the same experimental predictions as ours.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-06T21:36:54.078Z · LW(p) · GW(p)

Then it's equivalent to "electrons exist". This is quite a common occurrence in physics, especially these days, holography and all. It also happens in condensed matter a lot, where quasi-particles like holes and phonons are a standard approximation. Do holes "exist" in a doped semiconductor? Certainly as much as electrons exist, unless you are a hard reductionist insisting that it makes sense to talk about simulating a Boeing 747 from quarks.

Replies from: army1987
comment by A1987dM (army1987) · 2013-04-07T11:32:42.795Z · LW(p) · GW(p)

I meant for the as-yet-unknown model to be simpler than ours. (Do epicycles exist? After all, they do predict the motion of planets.)

comment by OrphanWilde · 2013-04-05T20:49:11.587Z · LW(p) · GW(p)

One example is mentioned; the proof of electrons assumes the existence of (electrically charged) electromagnetic fields (Thomson's experiment), the proof of electromagnetic fields -as- electrically charged comes from electron scattering and similar experiments.

(I'm fine with "electrons exist as a phenomenon, even if they're not the phenomenon we expect them to be", but that tends to put people in an even more skeptical frame of mind than before I started "explaining". I've generally given up such explanations; it appears I'm hopelessly bad at it.)

Another strange loop is in the quantization of energy (which requires electrical fields to be quantized, the evidence for which comes from the quantization of energy to begin with). Strange loops are -fine-, taken as a whole - taken as a whole the evidence can be pretty good - but when you're stepping a skeptical person through it step by step, it's hard to justify the next step when the previous step depends on it. The Big Bang Theory is another - the theory requires something to plug the gap in expected versus received background radiation, and the evidence for the plug (dark energy, for example) pretty much requires BBT to be true to be meaningful.

(Although it may be that a large part of the problem with the strange loops is that only the earliest experiments tend to be easily found in textbooks and on the Internet, and later less loop-prone experiments don't get much attention.)

Replies from: EHeller
comment by EHeller · 2013-04-06T01:36:17.846Z · LW(p) · GW(p)

One example is mentioned; the proof of electrons assumes the existence of (electrically charged) electromagnetic fields (Thomson's experiment), the proof of electromagnetic fields -as- electrically charged comes from electron scattering and similar experiments.

The existence of electromagnetic fields is just the existence of light. You can build up the whole theory of electricity and magnetism without mentioning electrons. Charge is just a definition that tells us that some types of matter attract some other types of matter.

Once you have electromagnetic fields understood well, you can ask questions like "well, what is this piece of metal made up of, what is this piece of plastic made up of", etc., and you can measure charges and masses of the various constituents. It's not actually self-referential in the way you propose.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-08T13:09:18.261Z · LW(p) · GW(p)

Light isn't electrically charged.

You're correct that you can build up the theory without electrons - exactly this happened. The fact that history produced linearly stepwise theories isn't the same as the evidence being linearly stepwise, however.

Replies from: EHeller
comment by EHeller · 2013-04-08T15:05:18.788Z · LW(p) · GW(p)

Light isn't electrically charged.

Light IS electromagnetic fields. The phrase "electrically charged electromagnetic fields" is a contradiction - the fields aren't charged. Charges react to the field.

If the fields WERE charged in some way, the theory would be non-linear.

In this case there is no loop- you can develop the electromagnetic theory around light, and from there proceed to electrons if you like.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-08T17:55:10.761Z · LW(p) · GW(p)

Light, in the theory you're indirectly referencing, is a disturbance in the electromagnetic field, not the field itself.

The fields are charged, hence all the formulas involving them reflecting charge in one form or another (charge density is pretty common); the amplitude of the field is defined as the force exerted on positively charged matter in the field. (The reason for this definition is that most electromagnetic fields we interact with are negatively charged, or have negative charge density, on account of electrons being more easily manipulated than cations, protons, plasma, or antimatter.)

With some creative use of relativity you can render the charge irrelevant for the purposes of (a carefully chosen) calculation. This is not the same as the charge not existing, however.

Replies from: EHeller
comment by EHeller · 2013-04-08T19:13:10.547Z · LW(p) · GW(p)

The fields are charged

You are using charge in some non-standard way. Charges are sources or sinks of the field.

An electromagnetic field does not sink or source more field - if it did, Maxwell's equations would be non-linear. There is no such thing as a "negatively charged electromagnetic field" - there are just electromagnetic fields. Now, the electromagnetic field can have a negative (or positive) amplitude, but this is not the same as saying it's negatively charged.
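
In standard form the source structure makes this explicit (Maxwell's equations, SI units):

$$\nabla \cdot \mathbf{E} = \rho/\epsilon_0, \quad \nabla \times \mathbf{B} - \mu_0\epsilon_0 \, \partial_t \mathbf{E} = \mu_0 \mathbf{J}, \quad \nabla \cdot \mathbf{B} = 0, \quad \nabla \times \mathbf{E} + \partial_t \mathbf{B} = 0$$

The only sources on the right-hand sides are the charge density $\rho$ and the current density $\mathbf{J}$; $\mathbf{E}$ and $\mathbf{B}$ never appear as their own sources, which is exactly the linearity at issue.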

comment by MugaSofer · 2013-04-06T08:35:35.975Z · LW(p) · GW(p)

in the end the best argument is that all the math we've built assuming their existence has really good predictive value.

I fail to see the difference between this and "electrons exist". But then my definition of existence only talks about models, anyway.

Really? How does that work if, say, there's a human in Schrodinger's Box?

Replies from: shminux
comment by Shmi (shminux) · 2013-04-06T23:52:41.668Z · LW(p) · GW(p)

How does that work if, say, there's a human in Schrodinger's Box?

How does what work?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-07T18:18:46.661Z · LW(p) · GW(p)

How does a model-based definition of existence interact with morality? Or paperclipping, for that matter?

Replies from: shminux
comment by Shmi (shminux) · 2013-04-07T18:54:48.982Z · LW(p) · GW(p)

Still not clear what you are having trouble with. I interpret "electrons exist" as "I have this model I call the electron which is better at predicting certain future inputs than any competing model". Not sure what it has to do with morality or paperclipping.

Replies from: PrawnOfFate, TimS, MugaSofer
comment by PrawnOfFate · 2013-04-08T13:10:37.758Z · LW(p) · GW(p)

How do you interpret "such-and-such an entity is required by such-and-such a theory, which seems to work, but turns out not to exist"? Do things wink in and out of existence as one theory replaces another?

Replies from: TimS
comment by TimS · 2013-04-08T14:28:45.882Z · LW(p) · GW(p)

I think shminux's response is something like:

"Given a model that predicts accurately, what would you do differently if the objects described in the model do or don't exist at some ontological level? If there is no difference, what are we worrying about?"

Replies from: PrawnOfFate, wedrifid, MugaSofer
comment by PrawnOfFate · 2013-04-11T16:53:41.999Z · LW(p) · GW(p)

Why worry about prediction if it doesn't relate to a real world?

Replies from: TimS
comment by TimS · 2013-04-11T17:45:16.452Z · LW(p) · GW(p)

I think you overread shminux. My attempted steelman of his position would be:

Of course there is something external to our minds, which we all experience. Call that "reality" if you like. Whatever reality is, it creates regularity such that we humans can make and share predictions.
Are there atoms, or quarks, or forces out there in the territory? Experts in the field have said yes, but sociological analysis like The Structure of Scientific Revolutions gives us reasons to be skeptical. More importantly, resolving that metaphysical discussion does nothing to help us make better predictions in the future.

I happen to disagree with him because I think resolving that dispute has the potential to help us make better predictions in the future. But your comment appears to strawman shminux by asserting that he doesn't believe in external reality at all, when he clearly believes there is some cause of the regularity that allows his models to make accurate predictions.

Saying "there is regularity" is different from saying "regularity occurs because quarks are real."

Replies from: DaFranker, PrawnOfFate, shminux
comment by DaFranker · 2013-04-11T18:01:09.074Z · LW(p) · GW(p)

If this steelman is correct, my support for shminux's position has risen considerably, but so has my posterior belief that shminux and Eliezer actually have the same substantial beliefs once you get past the naming and modeling and wording differences.

Given shminux and Eliezer's long-standing disagreement and both affirming that they have different beliefs, this makes it seem more likely that there's either a fundamental miscommunication, that I misunderstand the implications of the steel-manning or of Eliezer's descriptions of his beliefs, or that this steel-manning is incorrect. Which in turn, given that they are both far more experienced in explicit rationality and reduction than I am, makes the first of those three less likely, and thus leaves it still slightly more likely that they actually agree, though less so than it would first seem, but also more likely that this steelman strawmans shminux in some relevant way.

Argh. I think I might need to maintain a bayes belief network for this if I want to think about it any more than that.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-11T18:37:36.629Z · LW(p) · GW(p)

Given shminux and Eliezer's long-standing disagreement and both affirming that they have different beliefs

The disagreement starts here:

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

I refuse to postulate an extra "thingy that determines my experimental results". Occam's razor and such.

Replies from: DaFranker
comment by DaFranker · 2013-04-11T18:53:08.253Z · LW(p) · GW(p)

So uhm. How do the experimental results, y'know, happen?

I think I understand everything else. Your position makes perfect sense. Except for that last non-postulate. Perhaps I'm just being obstinate, but there needs to be something to the pattern / regularity.

If I look at a set of models, a set of predictions, a set of experiments, and the corresponding set of experimental results, all as one big blob:

The models led to predictions - predictions about the experimental results, which are part of the model. The experiments were made according to the model that describes how to test those predictions (I might be wording this a bit confusingly?). But the experimental results... just "are". They magically are like they are, for no reason, and they are ontologically basic in the sense that nothing at all ever determines them.

To me, it defies any reasonable logical description, and to my knowledge there does not exist a possible program that would generate this (i.e. if the program "randomly" generates the experimental results, then the randomness generator is the cause of the results, and thus is that thinghy; and for any observable regularity, the algorithm that causes that regularity in the resulting program output is the thinghy). Since, as far as I can tell, there is no possible logical construct that could ever result in a causeless ontologically basic "experimental result set" that displays regularity and can be predicted and tested, I don't see how it's even possible to consistently form a system where there are even models and experiences.

In short, if there is nothing at all whatsoever from which the experimental results arise, not even just a mathematical formula that can be pointed at and called 'reality', then this doesn't even seem like a well-formed mathematically-expressible program, let alone one that is occam/solomonoff "simpler" than a well-formed program that implicitly contains a formula for experimental results.

No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say "Look here! This is what 'determines' what experimental results I see and restricts the possible futures! Let's call this thinghy/subset/formula 'reality'!"
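
A toy program makes the claim concrete (the names and the update rule here are made up purely for illustration, not a model of anything physical):

```python
def laws(state):
    """The identifiable sub-part of the program that determines which
    'experimental results' can occur - the piece the argument says you
    could point at and label 'reality'."""
    return (3 * state + 1) % 17  # arbitrary toy update rule

def observe(state):
    """What the experimenter inside this toy world actually records."""
    return state % 2

state = 5
results = []
for _ in range(10):
    state = laws(state)
    results.append(observe(state))

print(results)  # regular, predictable-in-principle outputs, all traceable to laws()
```

However the program were refactored, some subset of it would still play the role of laws(); that subset is the thing being labelled "reality" in the paragraph above.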

I don't see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.

Replies from: DaFranker, shminux
comment by DaFranker · 2013-04-11T19:10:52.678Z · LW(p) · GW(p)

No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say "Look here! This is what 'determines' what experimental results I see and restricts the possible futures! Let's call this thinghy/subset/formula 'reality'!"

I don't see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.

As far as I can tell, those two paragraphs are pretty much Eliezer's position on this, and he's just putting that subset as an arbitrary variable, saying something like "Sure, we might not know said subset of the program or where exactly it is or what computational form it takes, but let's just have a name for it anyway so we can talk about things more easily".

comment by Shmi (shminux) · 2013-04-11T19:26:05.106Z · LW(p) · GW(p)

So uhm. How do the experimental results, y'know, happen?

Are you trying to solve the question of origin? How did the external reality, that thing that determines the experimental results, in the realist model, y'know, happen?

I discount your musings about "ontological basis", perhaps uncharitably. Instrumentally, all I care about is making accurate predictions, and the concept of external reality is sometimes useful in that sense, and sometimes it gets in the way.

No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say "Look here! This is what 'determines' what experimental results I see and restricts the possible futures! Let's call this thinghy/subset/formula 'reality'!"

Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.

I don't see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.

I fail to see a requirement you think I would have to get around. Just some less-than-useful logical construct.

Replies from: DaFranker
comment by DaFranker · 2013-04-11T19:45:20.641Z · LW(p) · GW(p)

I think it all just finally clicked. Strawman test (hopefully this is a good enough approximation):

You do imagine patterns and formulas, and your model does (or can) contain a (meta^x)-model that we could use and call "reality" and do whatever other realist-like shenanigans, and does describe the experimental results in some way that we could say "this formula, if it 'really existed' and the concept of existence is coherent at all, is the cause of my experimental results and the thinghy that determines them".

You just naturally exclude going from there to assuming that the meta-model is "real", "exists", or is itself what is external to the models and causes everything; something which for other people requires extra mental effort and does relate to the problem of origin.

Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.

Sure. What I was attempting to say is that if I look at your model of the world, and within this model find a sub-part that happens to be a meta-model of the world like that program, I could also point at a smaller sub-part of that meta-model and say "Within this meta-model that you have in your model of the world, this is the modeled 'cause' of your experimental results, they all happen according to this algorithm".

So now, given that the above is at least a reasonable approximation of your beliefs, the hypotheses for one of us misinterpreting Eliezer have risen quite considerably.

Personally, I tend to mentally "simplify" my model by saying that the program in question "is" (reality), for purposes of not having to redefine and debate things with people. Sometimes, though, when I encounter people who think "quarks are really real out there and have a real position in a really existing space", I just get utterly confused. Quarks are just useful models of the interactions in the world. What's "actually" doing the quark-ing is irrelevant.

Natural language is so bad at metaphysics, IME =\

Replies from: shminux
comment by Shmi (shminux) · 2013-04-11T20:33:14.556Z · LW(p) · GW(p)

So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality? I have at least two issues with this formulation. One is that every model supposedly contains this algorithm. Lots of high-level models are polymorphic: you can replace quarks with bits or wooden blocks and they still hold. The other is that, once you put this algorithm outside the model space, you are tempted to consider other similar algorithms which have no connection with the rest of the models whatsoever, like the mathematical universe. The term "exist" gains a meaning not present in its original instrumental Latin definition: to appear or to stand out. And then we are off the firm ground of what can be tested and into pure unconnected ideas, like the "post-utopian" Eliezer so despises, yet apparently implicitly adopts. Or maybe I'm being uncharitable here. He never engaged me on this point.

Replies from: Bugmaster, DaFranker
comment by Bugmaster · 2013-04-11T23:27:31.243Z · LW(p) · GW(p)

I think both you and DaFranker might be going a bit too deep down the meta-model rabbit-hole. As far as I understand, when a scientist says "electrons exists", he does not mean,

These mathematical formulae that I wrote down describe an objective reality with 100% accuracy.

Rather, he's saying something like,

There must be some reason why all my experiments keep coming out the way they do, and not in some other way. Sure, this could be happening purely by chance, but the probability of this is so tiny as to be negligible. These formulae describe a model of whatever it is that's supplying my experimental results, and this model predicts future results correctly 99.999999% of the time, so it can't be entirely wrong.

As far as I understand, you would disagree with the second statement. But, if so, how do you explain the fact that our experimental results are so reliable and consistent ? Is this just an ineffable mystery ?

Replies from: shminux
comment by Shmi (shminux) · 2013-04-11T23:35:22.789Z · LW(p) · GW(p)

I don't disagree with the second statement, I find parts of it meaningless or tautological. For example:

These formulae describe a model of whatever it is that's supplying my experimental results

The part in bold is redundant. You would normally say "of Higgs decay" or something to that effect.

, and this model predicts future results correctly 99.999999% of the time, so it can't be entirely wrong.

The part in bold is tautological. Accurate prediction is the definition of not being wrong (within the domain of applicability). In that sense Newtonian physics is not wrong, it's just not as accurate.

Replies from: PrawnOfFate, Bugmaster
comment by PrawnOfFate · 2013-04-13T13:38:36.305Z · LW(p) · GW(p)

The part in bold is tautological. Accurate prediction is the definition of not being wrong

That's the instrumentalist definition. For realists, an accurate theory can still be wrong because it fails to correspond to reality, or posits non-existent entities. For instance, an epicyclic theory of the solar system can be made as accurate as you like.

comment by Bugmaster · 2013-04-11T23:46:53.798Z · LW(p) · GW(p)

Accurate prediction is the definition of not being wrong (within the domain of applicability)

I meant to make a more far-reaching statement than that. If we believe that our model approximates that (postulated) thing that is causing our experiments to come out a certain way, then we can use this model to devise novel experiments, which are seemingly unrelated to the experiments we are doing now; and we could expect these novel experiments to come out the way we expected, at least on occasion.

For example, we could say, "I have observed this dot of light moving across the sky in a certain way. According to my model, this means that if I were to point my telescope at some other part of sky, we would find a much dimmer dot there, moving in a specific yet different way".

This is a statement that can only be made if you believe that different patches of the sky are connected, somehow, and if you have a model that describes the entire sky, even the pieces that you haven't looked at yet.

If different patches of the sky are completely unrelated to each other, the likelihood of you observing what you'd expect is virtually zero, because there are too many possible observations (an infinite number of them, in fact), all equally likely. I would argue that the history of science so far contradicts this assumption of total independence.

In that sense Newtonian physics is not wrong, it's just not as accurate.

This may be off-topic, but I would agree with this statement. Similarly, the statement "the Earth is flat" is not, strictly speaking, wrong. It works perfectly well if you're trying to lob rocks over a castle wall. Its inaccuracy is too great, however, to launch satellites into orbit.
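
A rough number for scale (back-of-the-envelope, taking the Earth's radius as roughly $6.4 \times 10^6$ m): over a horizontal distance $d$ the flat-Earth model ignores a drop of about $d^2 / 2R$, which is roughly 3 mm for a 200 m castle-wall shot, but is the whole problem for orbital mechanics, where the curvature is the point.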

comment by DaFranker · 2013-04-11T20:57:54.342Z · LW(p) · GW(p)

So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality?

Sort-of.

I'm saying that there's a sufficiently fuzzy and inaccurate polymorphic model (or set of models, or meta-description of the requirements and properties for relevant models) of "the universe" that could be created and pointed at as "the laws"; if it were known fully and accurately it could be "computed" or simulated or something, and computing this algorithm perfectly would in principle let us predict all of the experimental results.

If this theoretical, not-perfectly-known sub-algorithm is a perfect description of all the experimental results ever, then I'm perfectly willing to slap the labels "fundamental" and "reality" on it and call it a day, even though I don't see why this algorithm would be more "fundamentally existing" than the exact same algorithm with all parameters multiplied by two, or some other algorithm that produces the same experimental results in all possible cases.

The only reason I refer to it in the singular - "the sub-algorithm" - is because I suspect we'll eventually have a way to write and express as "an algorithm" the whole space/set/field of possible algorithms that could perfectly predict inputs, if we knew the exact set that those are in. I'm led to believe it's probably impossible to find this exact set.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-11T21:24:16.816Z · LW(p) · GW(p)

I find this approach very limiting. There is no indication that you can construct anything like that algorithm. Yet by postulating its existence (ahem), you are forced into a mode of thinking where "there is this thing called reality with some fundamental laws which we can hopefully learn some day". As opposed to "we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more and so on". Without ever worrying if some day there is nothing more to discover, because we finally found the holy grail, the ultimate laws of reality. I don't mind if it's turtles all the way down.

In fact, in the spirit of QM and as often described in SF/F stories, the mere act of discovery may actually change the "laws", if you are not careful. Or maybe we can some day do it intentionally, construct our own stack of turtles. Oh, the possibilities! And all it takes is to let go of one outdated idea, which is, like Aristotle's impetus, ripe for discarding.

Replies from: PrawnOfFate, None, TheOtherDave, DaFranker
comment by PrawnOfFate · 2013-04-13T14:02:22.894Z · LW(p) · GW(p)

I don't mind if it's turtles all the way down.

The claim that reality may be ultimately unknowable or non-algorithmic is different to the claim you have made elsewhere, that there is no reality.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-13T15:47:59.447Z · LW(p) · GW(p)

I'm not sure it's as different as all that from shminux's perspective.

By way of analogy, I know a lot of people who reject the linguistic habit of treating "atheism" as referring to a positive belief in the absence of a deity, and "agnosticism" as referring to the absence of a positive belief in the presence of a deity. They argue that no, both positions are atheist; in the absence of a positive belief in the presence of a deity, one does not believe in a deity, which is the defining characteristic of the set of atheist positions. (Agnosticism, on this view, is the position that the existence of a deity cannot be known, not merely the observation that one does not currently know it. And, as above, on this view that means agnosticism implies atheism.)

If I substitute (reality, non-realism, the claim that reality is unknowable) for (deity, atheism, agnosticism) I get the assertion that the claim that reality is unknowable is a non-realist position. (Which is not to say that it's specifically an instrumentalist position, but we're not currently concerned with choosing among different non-realist positions.)

All of that said, none of it addresses the question which has previously been raised, which is how instrumentalism accounts for the at-least-apparently-non-accidental relationship between past inputs, actions, models, and future inputs. That relationship still strikes me as strong evidence for a realist position.

Replies from: PrawnOfFate, MugaSofer
comment by PrawnOfFate · 2013-04-13T16:31:36.455Z · LW(p) · GW(p)

I can't see much evidence that the people who construe atheism and agnosticism in the way you describe are actually correct. I agree that the no-reality position and the unknowable-reality position could both be considered anti-realist, but they are still substantively different. Deriving no-reality from unknowable reality always seems like an error to me, but maybe someone has an impressive defense of it.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-13T18:32:44.998Z · LW(p) · GW(p)

Well, I certainly don't want to get into a dispute about what terms like "atheism", "agnosticism", "anti-realism", etc. ought to mean. All I'll say about that is if the words aren't being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it's best not to use those words.

Leaving language aside, I accept that the difference between "there is no reality" and "whether there is a reality is systematically unknowable" is an important difference to you, and I agree that deriving the former from the latter is tricky.

I'm pretty sure it's not an important difference to shminux. It certainly isn't an important difference to me... I can't imagine why I would ever care about which of those two statements is true if at least one of them is.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-14T18:35:14.358Z · LW(p) · GW(p)

Well, I certainly don't want to get into a dispute about what terms like "atheism", "agnosticism", "anti-realism", etc. ought to mean.

I don't see why not.

All I'll say about that is if the words aren't being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it's best not to use those words.

Or settle their correct meanings using a dictionary, or something.

Leaving language aside, I accept that the difference between "there is no reality" and "whether there is a reality is systematically unknowable" is an important difference to you, and I agree that deriving the former from the latter is tricky.

I'm pretty sure it's not an important difference to shminux.

If shminux is using arguments for Unknowable Reality as arguments for No Reality, then shminux's arguments are invalid whatever shminux cares about.

It certainly isn't an important difference to me... I can't imagine why I would ever care about which of those two statements is true if at least one of them is.

One seems a lot more far-fetched than the other to me.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-14T20:12:35.260Z · LW(p) · GW(p)

Well, I certainly don't want to get into a dispute about what terms like "atheism", "agnosticism", "anti-realism", etc. ought to mean.

I don't see why not.

If all goes well in a definitional dispute, at the end of it we have agreed on what meaning to assign to a word. I don't really care; I'm usually perfectly happy to assign to it whatever meaning my interlocutor does. In most cases, there was some other more interesting question about the world I was trying to get at, which got derailed by a different discussion about the meanings of words. In most of the remaining cases, the discussion about the meanings of words was less valuable to me than silence would have been.

That's not to say other people need to share my values, though; if you want to join definitional disputes (by referencing a dictionary or something) go right ahead. I'm just opting out.

If shminux is using arguments for Unknowable Reality as arguments for No Reality,

I don't think he is, though I could be wrong about that.

comment by MugaSofer · 2013-04-13T15:51:46.250Z · LW(p) · GW(p)

Pretty sure you mixed up "we can't know the details of reality" with "we can't know if reality exists".

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-13T15:58:06.587Z · LW(p) · GW(p)

That would be interesting, if true.
I have no coherent idea how you conclude that from what I said, though.
Can you unpack your reasoning a little?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T16:30:19.283Z · LW(p) · GW(p)

Sure.


Agnosticism = believing we can't know if God exists

Atheism = believing God does not exist

Theism = believing God exists


turtles-all-the-way-down-ism = believing we can't know what reality is (can't reach the bottom turtle)

instrumentalism/anti-realism = believing reality does not exist

realism = believing reality exists


Thus anti-realism and realism map to atheism and theism, but agnosticism doesn't map to infinte-turtle-ism because it says we can't know if God exists, not what God is.

Replies from: shminux, TheOtherDave
comment by Shmi (shminux) · 2013-04-13T16:50:17.273Z · LW(p) · GW(p)

Agnosticism = believing we can't know if God exists

Or believing that it's not a meaningful or interesting question to ask

instrumentalism/anti-realism = believing reality does not exist

That's quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.

Replies from: MugaSofer, private_messaging
comment by MugaSofer · 2013-04-13T18:29:44.356Z · LW(p) · GW(p)

Or believing that it's not a meaningful or interesting question to ask

Those would be ignosticism and apatheism respectively.

That's quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.

Yes, yes, we all know your idiosyncratic definition of "exist"; I was using the standard meaning because I was talking to a realist.

comment by private_messaging · 2013-04-13T17:18:38.158Z · LW(p) · GW(p)

Yeah. The issue here, I gather, has a lot to do with domain-specific knowledge - you're a physicist, so you have a general idea of how physics does not distinguish between, for example, 0 and two worlds of opposite phases which cancel out from our perspective. Which is very different from the naive idea of some sort of computer simulation, where of course two simulations with opposite signs being summed are a very different thing 'from the inside' from plain 0. If we start attributing reality to components of the sum in Feynman's path integral... that's going to get weird.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T19:16:29.199Z · LW(p) · GW(p)

You realize that, assuming Feynman's path integral makes accurate predictions, shminux will attribute it as much reality as, say, the moon, or your inner experience.

Replies from: private_messaging
comment by private_messaging · 2013-04-13T21:04:49.258Z · LW(p) · GW(p)

The issue is with all the parts of it, which include your great-grandfather's ghost, twice, with opposite phases, looking over your shoulder.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T22:27:26.764Z · LW(p) · GW(p)

Since I am not a quantum physicist, I can't really respond to your objections, and in any case I don't subscribe to shminux's peculiar philosophy.

comment by TheOtherDave · 2013-04-13T16:36:07.057Z · LW(p) · GW(p)

Thanks for the clarification, it helps.
An agnostic with respect to God (which is what "agnostic" has come to mean by default) would say both that we can't know if God exists, and also that we can't know the nature of God. So I think the analogy still holds.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T16:42:48.807Z · LW(p) · GW(p)

Right. But! An agnostic with respect to the details of reality - an infinite-turtle-ist - need not be an agnostic with respect to reality, even if an agnostic with respect to reality is also an agnostic with respect to its details (although I'm not sure if that follows in any case).

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-13T17:48:10.480Z · LW(p) · GW(p)

(shrug) Sure. So my analogy only holds between agnostics-about-God (who question the knowability of both the existence and nature of God) and agnostics-about-reality (who question the knowability of both the existence and nature of reality).

As you say, there may well be other people out there, for example those who question the knowability of the details, but not of the existence, of reality. (For a sufficiently broad understanding of "the details" I suspect I'm one of those people, as is almost everyone I know.) I wasn't talking about them, but I don't dispute their existence.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T19:31:32.746Z · LW(p) · GW(p)

Absolutely, but that's not what shminux and PrawnOfFate were talking about, is it?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-13T20:00:01.445Z · LW(p) · GW(p)

I have to admit, this has gotten rarefied enough that I've lost track both of your point and my own.

So, yeah, maybe I'm confusing knowing-X-exists with knowing-details-of-X for various Xes, or maybe I've tried to respond to a question about (one, the other, just one, both) with an answer about (the other, one, both, just one). I no longer have any clear notion, either of which is the case or why it should matter, and I recommend we let this particular strand of discourse die unless you're willing to summarize it in its entirety for my benefit.

Replies from: None
comment by [deleted] · 2013-04-13T20:05:10.415Z · LW(p) · GW(p)

I predict that these discussions, even among smart, rational people will go nowhere conclusive until we have a proper theory of self-aware decision making, because that's what this all hinges on. All the various positions people are taking in this are just packaging up the same underlying confusion, which is how not to go off the rails once your model includes yourself.

Not that I'm paying close attention to this particular thread.

comment by [deleted] · 2013-04-15T02:25:25.234Z · LW(p) · GW(p)

And all it takes is to let go of one outdated idea, which is, like Aristotle's impetus, ripe for discarding.

This is not at all important to your point, but the impetus theory of motion was developed by John Philoponus in the 6th century as an attack on Aristotle's own theory of motion. It was part of a broadly Aristotelian programme, but it's not something Aristotle developed. Aristotle himself has only traces of a dynamical theory (the theory being attacked by Philoponus is sort of an off-hand remark), and he concerned himself mostly with what we would probably call kinematics. The Aristotelian principle carried through in Philoponus' theory is the principle that motion requires the simultaneous action of a mover, which is false with respect to motion but true with respect to acceleration. In fact, if you replace 'velocity' with 'acceleration' in a certain passage of the Physics, you get F=ma. So we didn't exactly discard Aristotle's (or Philoponus') theory, important precursors as they were to the idea of inertia.
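
Schematically (a loose modern paraphrase, not Aristotle's or Philoponus' own notation): the passage in question asserts something like $v \propto F / R$, velocity proportional to motive force over resistance; substituting acceleration for velocity, and reading the resistance as mass, gives $a \propto F / m$, i.e. $F = ma$.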

Replies from: TimS
comment by TimS · 2013-04-15T02:40:39.286Z · LW(p) · GW(p)

In fact, if you replace 'velocity' with 'acceleration' in a certain passage of the Physics, you get F=ma.

That kind of replacement seems like a serious type error - velocity is not really anything like acceleration. Like saying that if you replace P with zero, you can prove P = NP.

Replies from: None
comment by [deleted] · 2013-04-15T03:27:46.459Z · LW(p) · GW(p)

That it's a type error is clear enough (I don't know if it's a serious one under an atmosphere). But what follows from that?

comment by TheOtherDave · 2013-04-13T19:55:51.561Z · LW(p) · GW(p)

"we can keep refining our models and explain more and more inputs"

Hm.

On your account, "explaining an input" involves having a most-accurate-model (aka "real world") which alters in response to that input in some fashion that makes the model even more accurate than it was (that is, better able to predict future inputs). Yes?

If so... does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done? If it does... how is that any less limiting than the realist's view allowing for entering a state where there is no further understanding of reality to be done?

I mean, I recognize that it's possible to have an instrumentalist account in which no such limitative result applies, just as it's possible to have a realist account in which no such limitative result applies. But you seem to be saying that there's something systematically different between instrumentalist and realist accounts here, and I don't quite see why that should be.

You make a reference a little later on to "mental blocks" that realism makes more likely, and I guess that's another reference to the same thing, but I don't quite see what it is that that mental block is blocking, or why an instrumentalist is not subject to equivalent mental blocks.

Does the question make sense? Is it something you can further clarify?

Replies from: shminux
comment by Shmi (shminux) · 2013-04-14T00:47:44.410Z · LW(p) · GW(p)

If so... does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done?

Maybe you are reading too much into what I said. If your view is that what we try to understand is this external reality, it's quite a small step to assuming that some day it will be understood in its entirety. This sentiment has been expressed over and over by very smart people, like the proverbial Lord Kelvin's warning that "physics is almost done", or Laplacian determinism. If you don't assume that the road you travel leads to a certain destination, you can still decide that there are no more places to go as your last trail disappears, but it is by no means an obvious conclusion.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-14T02:40:13.709Z · LW(p) · GW(p)

If your view is that what we try to understand is this external reality, it's quite a small step to assuming that some day it will be understood in its entirety.

Well, OK.
I certainly agree that this assumption has been made by realists historically.
And while I'm not exactly sure it's a bad thing, I'm willing to treat it as one for the sake of discussion.

That said... I still don't quite get what the systematic value-difference is.
I mean, if my view is instead that what we try to achieve is maximal model accuracy, with no reference to this external reality... then what? Is it somehow a longer step from there to assuming that some day we'll achieve a perfectly accurate model?
If so, why is that?
If not, then what have I gained by switching from the goal of "understand external reality in its entirety" to the goal of "achieve a perfectly accurate model"?

If I'm following you at all, it seems you're arguing in favor of a non-idealist position much more than a non-realist position. That is, if it's a mistake to "assume that the road you travel leads to a certain destination", it follows that I should detach from "ultimate"-type goals more generally, whether it's a realist's goal of ultimately understanding external reality, or an instrumentalist's goal of ultimately achieving maximal model accuracy, or some other ontology's goal of ultimately doing something else.

Have I missed a turn somewhere?
Or is instrumentalism somehow better suited to discouraging me from idealism than realism is?
Or something else?

Replies from: shminux
comment by Shmi (shminux) · 2013-04-14T07:41:21.640Z · LW(p) · GW(p)

Look, I don't know if I can add much more. What started my deconversion from realism is watching smart people argue about interpretations of QM, Boltzmann brains and other untestable ontologies. After a while these debates started to seem silly to me, so I had to figure out why. Additionally, I wanted to distill the minimum ontology, something which needn't be a subject of pointless argument, but only of experimental checking. Eventually I decided that external reality is just an assumption, like any other. This seems to work for me, and saves me a lot of worrying about untestables.

Most physicists follow this pragmatic approach, except for a few tenured dudes who can afford to speculate on any topic they like. Max Tegmark and Don Page are more or less famous examples. But few physicists worry about formalizing their ontology of pragmatism. They follow the standard meaning of the terms exist, real, true, etc., and when these terms lead to untestable speculations, their pragmatism takes over and they lose interest, except maybe for some idle chat over a beer. A fine example of compartmentalization.

I've been trying to decompartmentalize and see where the pragmatic approach leads, and my interpretation of instrumentalism is the current outcome. It lets me spot early on many statements whose implications a pragmatist would eventually ignore, which is quite satisfying. I am not saying that I have finally worked out the One True Ontology, or that I have resolved every issue to my satisfaction, but it's the best I've been able to cobble together. But I am not willing to trade it for a highly compartmentalized version of realism, or the Eliezerish version of many untestable worlds and timeless this or that. YMMV.

Replies from: TheOtherDave, PrawnOfFate
comment by TheOtherDave · 2013-04-14T19:09:58.551Z · LW(p) · GW(p)

(shrug) OK, I'm content to leave this here, then. Thanks for your time.

comment by PrawnOfFate · 2013-04-14T18:40:17.704Z · LW(p) · GW(p)

So...what is the point of caring about prediction?

comment by DaFranker · 2013-04-11T22:20:00.374Z · LW(p) · GW(p)

But the "turtles all the way down" or the method in which the act of discovery changes the law...

Why can't that also be modeled? Even if the model is self-modifying meta-recursive turtle-stack infinite "nonsense", there probably exists some way to describe it, model it, understand it, or at least point towards it.

This very "pointing towards it" is what I'm doing right now. I postulate that no matter the form it takes, even if it seems logically nonsensical, there's a model which can explain the results proportionally to how much we understand about it (we may end up never being able to understand it perfectly).

Currently, the best fuzzy picture of that model, by my pinpointing of what-I'm-referring-to, is precisely what you've just described:

"we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them to, and predict more and so on".

That's what I'm pointing at. I don't care either how many turtle stacks or infinities or regresses or recursions or polymorphic interfaces or variables or volatilities there are. The hypothetical description that a perfect agent with perfect information looking at our models and inputs from the outside would give of the program that we are part of is the "algorithm".

Maybe the Turing tape never halts, and just keeps computing on and on more new "laws of physics" as we research on and on and do more exotic things, such that there are no "true final ultimate laws". Of course that could happen. I have no solid evidence either way, so why would I restrict my thinking to the hypothesis that there is one? I like flexibility in options like that.

So yeah, my definition of that formula is pretty much self-referential and perhaps not always coherently explained. It's a bit like CEV in that regard, "whatever we would if ..." and so on.

Once all reduced away, all I'm really postulating is the continuing ability of possible agents who make models and analyze their own models to point at and frame and describe mathematically and meta-modelize the patterns of experimental results, given sufficient intelligence and ability to model things. It's not nearly as powerfully predictive or groundbreaking as I might have made it sound in earlier comments.

For more comparisons, it's a bit like when I say "my utility function". Clearly, there might not be a final utility function in my brain, it might be circular, or it might regress infinitely, or be infinitely self-modifying and self-referential, but by golly when I say that my best approximation of my utility function values having food much more highly than starving, I'm definitely pointing at and approximating something in there in that mess of patterns, even if I might not know exactly what I'm pointing at.

That "something" is my "true utility function", even if it would have to be defined with fuzzy self-recursive meta-games and timeless self-determinance or some other crazy shenanigans.

So I guess that's about also what I refer to when I say "reality".

Replies from: shminux
comment by Shmi (shminux) · 2013-04-11T22:50:14.074Z · LW(p) · GW(p)

I'm not really disagreeing. I'm just pointing out that, as you list progressively more and more speculative models, more and more loosely connected to experiment, the idea of some objective reality becomes progressively less useful, and questions like "but what if the Boltzmann Brains/mathematical universe/many worlds/super-mega crossover/post-utopian colonial alienation is real?" become progressively more nonsensical.

Yet people forget that and seriously discuss questions like that, effectively counting angels on the head of a pin. And, on the other hand, they get this mental block due to the idea of some static objective reality out there, limiting their model space.

These two fallacies are what started me on my way from realism to pragmatism/instrumentalism in the first place.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-13T13:48:07.851Z · LW(p) · GW(p)

the idea of some objective reality becomes progressively less useful

Useful for what? Prediction? But realists aren't using these models to answer the "what input should I expect" question; they are answering other questions, like "what is real" and "what should we value".

And "nothing" is an answer to "what is real". What does instrumentalism predict?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T13:56:20.184Z · LW(p) · GW(p)

What does instrumentalism predict?

If it's really better or more "true" on some level, I suppose you might predict a superintelligence would self-modify into an anti-realist? Seems unlikely from my realist perspective, at least, so I'd have to update in favour of something.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-13T14:15:44.055Z · LW(p) · GW(p)

If it's really better or more "true" on some level

But if that's not a predictive level, then instrumentalism is inconsistent. It is saying that all other non-predictive theories should be rejected for being non-predictive, but that it is itself somehow an exception. This is of course parallel to the flaw in Logical Positivism.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T14:34:08.274Z · LW(p) · GW(p)

Well, I suppose all it would need to persuade is people who don't already believe it ...

More seriously, you'll have to ask shminux, because I, as a realist, anticipate this test failing, so naturally I can't explain why it would succeed.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-13T14:44:08.625Z · LW(p) · GW(p)

Huh? I don't see why the ability to convince people who don't care about consistency is something that should sway me.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T15:15:56.011Z · LW(p) · GW(p)

If I had such a persuasive argument, naturally it would already have persuaded me, but my point is that it doesn't need to persuade people who already agree with it - just the rest of us.

And once you've self-modified into an instrumentalist, I guess there are other arguments that will now persuade you - for example, that this hypothetical underlying layer of "reality" has no extra predictive power (at least, I think that's what shminux finds persuasive).

comment by PrawnOfFate · 2013-04-12T20:52:40.740Z · LW(p) · GW(p)

But your comment appears to strawman shminux by asserting that he doesn't believe in external reality at all, when he clearly believes there is some cause of the regularity that allows his models to make accurate predictions.

I'm not sure. I have seen comments that contradict that interpretation. If shminux were the kind of irrealist who believes in an external world of an unknown nature, shminux would have no reason not to call it reality. But shminux insists reality is our current best model.

ETA:

another example

"I refuse to postulate an extra "thingy that determines my experimental results".

comment by Shmi (shminux) · 2013-04-11T18:09:09.593Z · LW(p) · GW(p)

Thank you for your steelmanning (well, your second or third one, people keep reading what I write extremely uncharitably). I really appreciate it!

Of course there is something external to our minds, which we all experience.

Most certainly. I call these experiences inputs.

Call that "reality" if you like.

Don't, just call it inputs.

Whatever reality is, it creates regularity such that we humans can make and share predictions.

No, reality is a (meta-)model which basically states that these inputs are somewhat predictable, and little else.

Are there atoms, or quarks, or forces out there in the territory?

The question is meaningless if you don't postulate territory.

Experts in the field have said yes

Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.

but sociological analysis like The Structure of Scientific Revolutions gives us reasons to be skeptical.

I see the cited link as research in cognitive science (what is thinkable and in what situations), not any statement about some mythical territory.

More importantly, resolving that metaphysical discussion does nothing to help us make better predictions in the future.

But understanding how and why people think what they think is likely very helpful in constructing models which make better predictions.

I happen to disagree with him because I think resolving that dispute has the potential to help us make better predictions in the future.

I'd love to be convinced of that... But first I'd have to be convinced that the dispute is meaningful to begin with.

Saying "there is regularity" is different from saying "regularity occurs because quarks are real."

Indeed. Mainly because I don't use the term "real", at least not in the same way realists do.

Again, thank you for being charitable. That's a first from someone who disagrees.

Replies from: Bugmaster, TimS, MugaSofer
comment by Bugmaster · 2013-04-11T18:56:58.523Z · LW(p) · GW(p)

Of course there is something external to our minds, which we all experience. ... Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.

I'm not sure I understand your point of view, given these two statements. If experts in the field are able to predict future inputs with a reasonably high degree of certainty; and if we agree that these inputs are external to our minds; is it not reasonable to conclude that such experts have built an approximate mental model of at least a small portion of whatever it is that causes the inputs? Or are you asserting that they just got lucky?

Sorry for the newbie question, I'm late to this discussion and am probably missing a lot of context...

Replies from: DaFranker
comment by DaFranker · 2013-04-11T19:00:22.460Z · LW(p) · GW(p)

I'm making similar queries here, since this intrigues me and I was similarly confused by the non-postulate. Maybe between all the cross-interrogations we'll finally understand what shminux is saying ;)

Replies from: shminux
comment by Shmi (shminux) · 2013-04-11T19:47:13.083Z · LW(p) · GW(p)

whatever it is that causes the inputs

why assume that something does, unless it's an accurate assumption (i.e. testable, tested and confirmed)?

Replies from: PrawnOfFate, Bugmaster, DaFranker
comment by PrawnOfFate · 2013-04-13T14:04:56.243Z · LW(p) · GW(p)

why assume that something does, unless it's an accurate assumption (i.e. testable, tested and confirmed)?

Because there are stable relationships between outputs (actions) and inputs. We all test that hypothesis multiple times a day.

comment by Bugmaster · 2013-04-11T23:10:24.588Z · LW(p) · GW(p)

The inputs appear to be highly repeatable and consistent with each other. This could be purely due to chance, of course, but IMO this is less likely than the inputs being interdependent in some way.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-11T23:26:11.738Z · LW(p) · GW(p)

The inputs appear to be highly repeatable and consistent with each other.

Some are and some aren't. When a certain subset of them is, I am happy to use a model that accurately predicts what happens next. If there is a choice, then the most accurate and simplest model. However, I am against extrapolating this approach into "there is this one universal thing that determines all inputs ever".

Replies from: Bugmaster
comment by Bugmaster · 2013-04-11T23:33:34.197Z · LW(p) · GW(p)

What is the alternative, though? Over time, the trend in science has been to unify different groups of inputs; for example, electricity and magnetism were considered to be entirely separate phenomena at one point. So were chemistry and biology, or electricity and heat, etc. This happens all the time on smaller scales, as well; and every time it does, is it not logical to update your posterior probability of that "one universal thing" being out there to be a little bit higher?

And besides, what is more likely: that 10 different groups of inputs are consistent and repeatable due to N reasons, or due to a single reason?

comment by DaFranker · 2013-04-11T19:56:14.330Z · LW(p) · GW(p)

Intuitively, to me at least, it seems simpler to assume that everything has a cause, including the regularity of experimental results; and a mathematical algorithm being computed, with its outputs resulting in what we perceive as inputs / experimental results, seems a simpler cause than randomness, magic, or nothingness.

See also my other reply to your other reply (heh). I think I'm piecing together your description of things now. I find your consistency with it rather admirable (and very epistemologically hygienic, I might add).

comment by TimS · 2013-04-11T18:29:30.003Z · LW(p) · GW(p)

Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.

Experts in the field have said things that were very philosophically naive. The steel-manning of those types of statements is isomorphic to physical realism.

And you are using territory in a weird way. If I understood the purpose of your usage, I might be able to understand it better. In my usage, "territory" seems roughly like the thing you call "inputs + implication of some regularity in inputs." That's how I've interpreted Yudkowsky's use of the word as well. Honestly, my perception was that the proper understanding of territory was not exactly central to your dispute with him.

In short, Yudkowsky says the map "corresponds" to the territory in sufficiently fine grain that sentences like "atoms exist" are meaningful. You seem to think that the metaphor of the map is hopelessly misleading. I'm somewhere in between, in that I think the map metaphor is helpful, but the map is not fine-grained enough to think "atoms exist" is a meaningful sentence.

I think this philosophy-of-science entry in the SEP is helpful, if only by defining the terms of the debate. I mostly like Feyerabend's thinking, Yudkowsky and most of this community do not, and your position seems to be trying to avoid the debate. Which you could do more easily if you would recognize what we mean by our words.


For outside observers:
No, I haven't defined map or corresponds. Also, meaningful != true. Newtonian physics is meaningful and false.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-11T19:11:13.671Z · LW(p) · GW(p)

And you are using territory in a weird way. If I understood the purpose of your usage, I might be able to understand it better. In my usage, "territory" seems roughly like the thing you call "inputs + implication of some regularity in inputs."

Well, almost the same thing. To me regularity is the first (well-tested) meta-model, not a separate assumption.

That's how I've interpreted Yudkowsky's use of the word as well.

I'm not so sure, see my reply to DaFranker.

Honestly, my perception was that the proper understanding of territory was not exactly central to your dispute with him.

I think it is absolutely central. Once you postulate external reality, a whole lot of previously meaningless questions become meaningful, including whether something "exists", like ideas, numbers, Tegmark's level 4, many untestable worlds and so on.

I think this philosophy-of-science entry in the SEP is helpful, if only by defining the terms of the debate.

Only marginally. My feeling is that this apparent incommensurability is due to people not realizing that their disagreements are due to some deeply buried implicit assumptions and the lack of desire to find these assumptions and discuss them.

Replies from: None, TimS
comment by [deleted] · 2013-04-11T19:27:05.871Z · LW(p) · GW(p)

I think it is absolutely central. Once you postulate external reality, a whole lot of previously meaningless questions become meaningful, including whether something "exists", like ideas, numbers, Tegmark's level 4, many untestable worlds and so on.

Not to mention questions like "If we send these colonists over the horizon, does that kill them or not?"

Which brings me to a question: I can never quite figure out how your instrumentalism interacts with preferences. Without assuming the existence of something you care about, on what basis do you make decisions?

In other words, instrumentalism is a fine epistemic position, but how to actually build an instrumental agent with good consequences is unclear. Doesn't wireheading become an issue?

If I'm accidentally assuming something that is confusing me, please point it out.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-11T19:44:59.871Z · LW(p) · GW(p)

Not to mention question like "If we send these colonists over the horizon, does that kill them or not?"

This question is equally meaningful in both cases, and equally answerable. And the answer happens to be the same, too.

Which brings me to a question: I can never quite figure out how your instrumentalism interacts with preferences. Without assuming the existence of something you care about, on what basis do you make decisions?

Your argument reminds me of "Obviously morality comes from God, if you don't believe in God, what's to stop you from killing people if you can get away with it?" It is probably an uncharitable reading of it, though.

The "What I care about" thingie is currently one of those inputs. Like, what compels me to reply to your comment? It can partly be explained by the existing models in psychology, sociology and other natural sciences, and in part is still a mystery. Some day we will hopefully be able to analyze and simulate mind and brain better, and explain how this desire arises, and why one shminux decides to reply to and not ignore your comment. Maybe I feel good when smart people publicly agree with me. Maybe I'm satisfying some other preference I'm not aware of.

Replies from: None
comment by [deleted] · 2013-04-12T01:34:23.682Z · LW(p) · GW(p)

Your argument

It's not an argument; it's an honest question. I'm sympathetic to instrumentalism, I just want to know how you frame the whole preferences issue, because I can't figure out how to do it. It probably is like the God is Morality thing, but I can't just accidentally find my way out of such a pickle without some help.

I frame it as "here's all these possible worlds, some being better than others, and only one being 'real', and then here's this evidence I see, which discriminates which possible worlds are probable, and here's the things I can do that further affect which is the real world, and I want to steer towards the good ones." As you know, this makes a lot of assumptions and is based pretty directly on the fact that that's how human imagination works.

If there is a better way to do it, which you seem to think that there is, I'm interested. I don't understand your answer above, either.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T03:51:42.142Z · LW(p) · GW(p)

Well, I'll give it another go, despite someone diligently downvoting all my related comments.

"here's all these possible worlds, some being better than others, and only one being 'real', and then here's this evidence I see, which discriminates which possible worlds are probable, and here's the things I can do that that further affect which is the real world, and I want to steer towards the good ones."

Same here, with a marginally different dictionary. Although you are getting close to a point I've been waiting for people to bring up for some time now.

So, what are those possible worlds but models? And isn't the "real world" just the most accurate model? Properly modeling your actions lets you affect the preferred "world" model's accuracy, and such. The remaining issue is whether the definition of "good" or "preferred" depends on realist vs instrumentalist outlook, and I don't see how. Maybe you can clarify.

Replies from: TheOtherDave, itaibn0, PrawnOfFate, CCC, Bugmaster, None
comment by TheOtherDave · 2013-04-12T16:32:31.992Z · LW(p) · GW(p)

Hrm.

First, let me apologize pre-emptively if I'm retreading old ground, I haven't carefully read this whole discussion. Feel free to tell me to go reread the damned thread if I'm doing so. That said... my understanding of your account of existence is something like the following:

A model is a mental construct used (among other things) to map experiences to anticipated experiences. It may do other things along the way, such as represent propositions as beliefs, but it needn't. Similarly, a model may include various hypothesized entities that represent certain consistent patterns of experience, such as this keyboard I'm typing on, my experiences of which consistently correlate with my experiences of text appearing on my monitor, responses to my text later appearing on my monitor, etc.

On your account, all it means to say "my keyboard exists" is that my experience consistently demonstrates patterns of that sort, and consequently I'm confident of the relevant predictions made by the set of models (M1) that have in the past predicted patterns of that sort, not-so-confident of relevant predictions made by the set of models (M2) that predict contradictory patterns, etc. etc. etc.

We can also say that M1 all share a common property K that allows such predictions. In common language, we are accustomed to referring to K as an "object" which "exists" (specifically, we refer to K as "my keyboard"), which is as good a way of talking as any, though sloppy in the way of all natural language.

We can consequently say that M1 all agree on the existence of K, though of course that may well elide over many important differences in the ways that various models in M1 instantiate K.

We can also say that M1 models are more "accurate" than M2 models with respect to those patterns of experience that led us to talk about K in the first place. That is, M1 models predict relevant experience more reliably/precisely/whatever.

And in this way we can gradually converge on a single model (MR1), which includes various objects, and which is more accurate than all the other models we're aware of. We can call MR1 "the real world," by which we mean the most accurate model.

Of course, this doesn't preclude uncovering a new model MR2 tomorrow which is even more accurate, at which point we would call MR2 "the real world". And MR2 might represent K in a completely different way, such that the real world would now, while still containing the existence of my keyboard, contain it in a completely different way. For example, MR1 might represent K as a collection of atoms, and MR2 might represent K as a set of parameters in a configuration space, and when I transition from MR1 to MR2 the real world goes from my keyboard being a collection of atoms to my keyboard being a set of parameters in a configuration space.

Similarly, it doesn't preclude our experiences starting to systematically change such that the predictions made by MR1 are no longer reliable, in which case MR1 stops being the most accurate model, and some other model (MR3) is the most accurate model, at which point we would call MR3 "the real world". For example, MR3 might not contain K at all, and I would suddenly "realize" that there never was a keyboard.

All of which is fine, but the difficulty arises when after identifying MR1 as the real world we make the error of reifying MRn, projecting its patterns onto some kind of presumed "reality" R to which we attribute a kind of pseudo-existence independent of all models. Then we misinterpret the accuracy of a model as referring, not to how well it predicts future experience, but to how well it corresponds to R.

Of course, none of this precludes being mistaken about the real world... that is, I might think that MR1 is the real world, when in fact I just haven't fully evaluated the predictive value of the various models I'm aware of, and if I were to perform such an evaluation I'd realize that no, actually, MR4 is the real world. And, knowing this, I might have various degrees of confidence in various models, which I can describe as "possible worlds."

And I might have preferences as to which of those worlds is real. For example, MP1 and MP2 might both be possible worlds, and I am happier in MP1 than MP2, so I prefer MP1 be the real world. Similarly, I might prefer MP1 to MP2 for various other reasons other than happiness.

Which, again, is fine, but again we can make the reification error by assigning to R various attributes which correspond, not only to the real world (that is, the most accurate model), but to the various possible worlds MRx..y. But this isn't a novel error, it's just the extension of the original error of reification of the real world onto possible worlds.

That said, talking about it gets extra-confusing now, because there's now several different mistaken ideas about reality floating around... the original "naive realist" mistake of positing R that corresponds to MR, the "multiverse" mistake of positing R that corresponds to MRx..y, etc. When I say to a naive realist that treating R as something that exists outside of a model is just an error, for example, the naive realist might misunderstand me as trying to say something about the multiverse and the relationships between things that "exist in the world" (outside of a model) and "exist in possible worlds" (outside of a model), which in fact has nothing at all to do with my point, which is that the whole idea of existence outside of a model is confused in the first place.

Have I understood your position?

Replies from: shminux, MugaSofer
comment by Shmi (shminux) · 2013-04-12T17:21:38.568Z · LW(p) · GW(p)

As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it.

The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into the multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies, like "everything imaginable exists". Once you fall for it, promoting any M->R, or a certain set {MP}->R, seems forever meaningful.

The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?

Replies from: TheOtherDave, TheOtherDave, Bugmaster, PrawnOfFate
comment by TheOtherDave · 2013-04-12T18:57:32.436Z · LW(p) · GW(p)

Maybe you should teach your steelmanning skills, or make a post out of it.

I've thought about this, but on consideration the only part of it I understand explicitly enough to "teach" is Miller's Law (the first one), and there's really not much more to say about it than quoting it and then waiting for people to object. Which most people do, because approaching conversations that way seems to defeat the whole purpose of conversation for most people (convincing other people they're wrong). My goal in discussions is instead usually to confirm that I understand what they believe in the first place. (Often, once I achieve that, I become convinced that they're wrong... but rarely do I feel it useful to tell them so.)

The rest of it is just skill at articulating positions with care and precision, and exerting the effort to do so. A lot of people around here are already very good at that, some of them better than me.

The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?

Yes. I'm not sure what to say about that on your account, and that was in fact where I was going to go next.

Actually, more generally, I'm not sure what distinguishes experiences we have from those we don't have in the first place, on your account, even leaving aside how one can alter future experiences.

After all, we've said that models map experiences to anticipated experiences, and that models can be compared based on how reliably they do that, so that suggests that the experiences themselves aren't properties of the individual models (though they can of course be represented by properties of models). But if they aren't properties of models, well, what are they? On your account, it seems to follow that experiences don't exist at all, and there simply is no distinction between experiences we have and those we don't have.

I assume you reject that conclusion, but I'm not sure how. On a naive realist's view, rejecting this is easy: reality constrains experiences, and if I want to affect future experiences I affect reality. Accurate models are useful for affecting future experiences in specific intentional ways, but not necessary for affecting reality more generally... indeed, systems incapable of constructing models at all are still capable of affecting reality. (For example, a supernova can destroy a planet.)

(On a multiverse realist's view, this is significantly more complicated, but it seems to ultimately boil down to something similar, where reality constrains experiences and if I want to affect the measure of future experiences, I affect reality.)

Another unaddressed issue derives from your wording: "how do you affect your future experiences?" I may well ask whether there's anything else I might prefer to affect other than my future experiences (for example, the contents of models, or the future experiences of other agents). But I suspect that's roughly the same problem for an instrumentalist as it is for a realist... that is, the arguments for and against solipsism, hedonism, etc. are roughly the same, just couched in slightly different forms.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T19:58:59.701Z · LW(p) · GW(p)

But if they aren't properties of models, well, what are they? On your account, it seems to follow that experiences don't exist at all, and there simply is no distinction between experiences we have and those we don't have.

Somewhere way upstream I said that I postulate experiences (I called them inputs), so they "exist" in this sense. We certainly don't experience "everything", so that's how you tell "between experiences we have and those we don't have". I did not postulate, however, that they have an invisible source called reality, pitfalls of assuming which we just discussed. Having written this, I suspect that this is an uncharitable interpretation of your point, i.e. that you mean something else and I'm failing to Millerize it.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-12T21:02:42.911Z · LW(p) · GW(p)

OK.

So "existence" properly refers to a property of subsets of models (e.g., "my keyboard exists" asserts that M1 all contain K), as discussed earlier, and "existence" also properly refers to a property of inputs (e.g., "my experience of my keyboard sitting on my desk exists" and "my experience of my keyboard dancing the Macarena doesn't exist" are both coherent, if perhaps puzzling, things to say), as discussed here.
Yes?

Which is not necessarily to say that "existence" refers to the same property of subsets of models and of inputs. It might, it might not, we haven't yet encountered grounds to say one way or the other.
Yes?

OK. So far, so good.

And, responding to your comment about solipsism elsewhere just to keep the discussion in one place:

Well, to a solipsist hers is the only mind that exists; to an instrumentalist, as we have agreed, the term exist does not have a useful meaning beyond measurability.

Well, I agree that when a realist solipsist says "Mine is the only mind that exists" they are using "exists" in a way that is meaningless to an instrumentalist.

That said, I don't see what stops an instrumentalist solipsist from saying "Mine is the only mind that exists" while using "exists" in the ways that instrumentalists understand that term to have meaning.

That said, I still don't quite understand how "exists" applies to minds on your account. You said here that "mind is also a model", which I understand to mean that minds exist as subsets of models, just like keyboards do.

But you also agreed that a model is a "mental construct"... which I understand to refer to a construct created/maintained by a mind.

The only way I can reconcile these two statements is to conclude either that some minds exist outside of a model (and therefore have a kind of "existence" that is potentially distinct from the existence of models and of inputs, which might be distinct from one another) or that some models aren't mental constructs.

My reasoning here is similar to how if you said "Red boxes are contained by blue boxes" and "Blue boxes are contained by red boxes" I would conclude that at least one of those statements had an implicit "some but not all" clause prepended to it... I don't see how "For all X, X is contained by a Y" and "For all Y, Y is contained by an X" can both be true.

Does that make sense?
If so, can you clarify which is the case?
If not, can you say more about why not?

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T21:30:20.548Z · LW(p) · GW(p)

I don't see how "For all X, X is contained by a Y" and "For all Y, Y is contained by an X" can both be true [implicitly assuming that X is not the same as Y, I am guessing].

And what do you mean here by "true", in an instrumental sense? Do you mean the mathematical truth (i.e. a well-formed finite string, given some set of rules), or the measurable truth (i.e. a model giving accurate predictions)? If it's the latter, how would you test for it?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-12T22:50:16.809Z · LW(p) · GW(p)

Beats me.

Just to be clear, are you suggesting that on your account I have no grounds for treating "All red boxes are contained by blue boxes AND all blue boxes are contained by red boxes" differently from "All red boxes are contained by blue boxes AND some blue boxes are contained by red boxes" in the way I discussed?

If you are suggesting that, then I don't quite know how to proceed. Suggestions welcomed.

If you are not suggesting that, then perhaps it would help to clarify what grounds I have for treating those statements differently, which might more generally clarify how to address logical contradiction in an instrumentalist framework.

comment by TheOtherDave · 2013-04-12T19:18:00.401Z · LW(p) · GW(p)

Actually, thinking about this a little bit more, a "simpler" question might be whether it's meaningful on this account to talk about minds existing. I think the answer is again that it isn't, as I said about experiences above... models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error.

If that's the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own.

So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can't base anything on such (non)existence.

So I'm not sure what an instrumentalist's argument rejecting solipsism looks like.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T19:48:21.983Z · LW(p) · GW(p)

models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error

Sort of, yes. Except mind is also a model.

So I'm not sure what an instrumentalist's argument rejecting solipsism looks like.

Well, to a solipsist hers is the only mind that exists; to an instrumentalist, as we have agreed, the term exist does not have a useful meaning beyond measurability. For example, the near-solipsist idea of a Boltzmann brain is not an issue for an instrumentalist, since it changes nothing in their ontology. Same deal with dreams, hallucinations and simulation.

comment by Bugmaster · 2013-04-12T18:59:18.307Z · LW(p) · GW(p)

In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even those inputs which we haven't even thought of observing thus far still correlate with our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn't expect to see this (except by chance, which is infinitesimally low) if the inputs were mutually independent.
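A toy sketch of that shared-node argument (my own illustration with made-up probabilities, not part of the original comment): two streams of observations hanging off a common hidden cause agree far more often than chance, while independent streams do not.

```python
# Hypothetical illustration: two binary "input" streams, generated either from a
# shared hidden node or independently. Only the common-cause version is correlated.
import random

def sample_common_cause(n):
    pairs = []
    for _ in range(n):
        cause = random.random() < 0.5                       # hidden shared node
        a = cause if random.random() < 0.9 else not cause   # noisy observation 1
        b = cause if random.random() < 0.9 else not cause   # noisy observation 2
        pairs.append((a, b))
    return pairs

def sample_independent(n):
    return [(random.random() < 0.5, random.random() < 0.5) for _ in range(n)]

def agreement(pairs):
    return sum(a == b for a, b in pairs) / len(pairs)

random.seed(0)
print(agreement(sample_common_cause(10000)))   # ~0.82: far more agreement than chance
print(agreement(sample_independent(10000)))    # ~0.50: no correlation without a shared node
```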

Replies from: TheOtherDave, PrawnOfFate, shminux
comment by TheOtherDave · 2013-04-12T19:24:16.676Z · LW(p) · GW(p)

FWIW, my understanding of shminux's account does not assert that "all we have are disconnected inputs," as inputs might well be connected.

That said, it doesn't seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I'm still trying to wrap my brain around that part.

ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-12T19:55:13.215Z · LW(p) · GW(p)

I don't see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them, without implicitly admitting to a real external world.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-12T19:59:50.965Z · LW(p) · GW(p)

Nor do I.

But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.

I don't have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it's useful to ask "How, then, does X come to be?" rather than to insist that Y must be present.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-12T20:13:55.506Z · LW(p) · GW(p)

One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so on the basis of the Common Source meta-model; anti-realists cannot.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-12T20:22:14.830Z · LW(p) · GW(p)

At the risk of repeating myself: I agree that I don't currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.

comment by PrawnOfFate · 2013-04-12T19:11:24.819Z · LW(p) · GW(p)

I.e., realism explains how you can predict at all.

comment by Shmi (shminux) · 2013-04-12T19:18:23.241Z · LW(p) · GW(p)

This seems to me to be the question of origin "where do the inputs come from?" in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. "external reality") responsible for it. I think this is close to subjective Bayesianism, though I'm not 100% sure.

Replies from: Bugmaster, PrawnOfFate
comment by Bugmaster · 2013-04-12T20:30:12.967Z · LW(p) · GW(p)

The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. "external reality") responsible for it.

I think it's possible to do so without specifying the mechanism, but that's not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.

Let me set up an analogy. Let's say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you've tried so far.

Does it make sense to ask the question, "what will happen when I set the switch to positions 4..10"? If so, can you make a reasonably confident prediction as to what will happen? What would your prediction be?
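For what it's worth, the analogy can be made concrete with a small sketch (mine, using two hypothetical models of the box): both fit the three observed settings equally well, but only the simpler general rule commits to predictions for the untried positions.

```python
# Hypothetical models of the dial-and-lights box described above.
observed = {1: 1, 2: 2, 3: 3}   # dial position -> which light turned on

def rule_model(position):
    """One general rule: the light number equals the dial position."""
    return position

def lookup_model(position):
    """Memorizes past observations only; says nothing about untried positions."""
    return observed.get(position)   # None for positions 4..10

for pos in range(1, 11):
    print(pos, rule_model(pos), lookup_model(pos))
# Both models agree on positions 1-3; only rule_model predicts positions 4-10.
```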

comment by PrawnOfFate · 2013-04-13T13:12:27.107Z · LW(p) · GW(p)

The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. "external reality") responsible for it.

In the sense that it is always impossible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam's razor. The posit of an external reality of some sort (it doesn't need to be specific) explains, at the meta-level, the process of model-formulation, prediction, accuracy, etc.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T13:58:32.333Z · LW(p) · GW(p)

In the sense that it is always possible to leave something just unexplained.

Fixed that for you.

But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam's razor.

I suppose shminux would claim that explanatory or not, it complicates the model and thus makes it more costly, computationally speaking.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-13T14:13:05.493Z · LW(p) · GW(p)

I suppose shminux would claim that explanatory or not, it complicates the model and thus makes it more costly, computationally speaking.

But that's a terrible argument. If you can't justify a posit by the explanatory work it does, then the optimum number of posits to make is zero.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T14:30:18.959Z · LW(p) · GW(p)

Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don't expect that to mean anything more than the model working.

Except I think he's claimed to value things like "the most accurate model not containing slaves" (say), which implies there's something special about the correct model beyond mere accuracy.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-13T14:46:18.260Z · LW(p) · GW(p)

If it's really better or more "true" on some level

Shminux seems to be positing inputs and models at the least.

Replies from: MugaSofer, MugaSofer
comment by MugaSofer · 2013-04-13T15:04:50.092Z · LW(p) · GW(p)

If it's really better or more "true" on some level

I think you quoted the wrong thing there, BTW.

comment by MugaSofer · 2013-04-13T15:01:58.535Z · LW(p) · GW(p)

I suppose they are positing inputs, but they're arguably not positing models as such - merely using them. Or at any rate, that's how I'd ironman their position.

comment by PrawnOfFate · 2013-04-12T20:17:44.221Z · LW(p) · GW(p)

The reification error you describe is indeed one of the fallacies a realist is prone to

And inverted stupidity is..?

comment by MugaSofer · 2013-04-12T20:50:10.269Z · LW(p) · GW(p)

If I understand both your and shminux's comments, this might express the same thing in different terms:

  • We have experiences ("inputs".)
  • We wish to optimize these inputs according to whatever goal structure.
  • In order to do this, we need to construct models to predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
  • Some of these models are more accurate than others. We might call accurate models "real".
  • However, the term "real" holds no special ontological value, and they might later prove inaccurate or be replaced by better models.

Thus, we have a perfectly functioning agent with no conception of (or need for) a territory - there is only the map and the inputs. Technically, you could say the inputs are the territory, but the metaphor isn't very useful for such an agent.
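To make that summary concrete, here is a minimal sketch of such an agent (all the models, actions, and payoffs below are hypothetical, and this is my illustration rather than anyone's stated position): it keeps a pool of candidate models, scores them against past inputs, and acts on whichever model currently predicts best, with no notion of a territory anywhere in the loop.

```python
# A toy "instrumentalist" agent: inputs and models only, no territory.
def model_a(action):        # candidate model 1: pressing the button yields a reward
    return 1 if action == "press" else 0

def model_b(action):        # candidate model 2: waiting yields a reward
    return 1 if action == "wait" else 0

history = [("press", 1), ("wait", 0), ("press", 1)]   # (action, observed input) pairs

def accuracy(model):
    """Fraction of past inputs the model predicted correctly."""
    return sum(model(action) == obs for action, obs in history) / len(history)

models = [model_a, model_b]
best = max(models, key=accuracy)                 # the most accurate model is what gets called "real"
next_action = max(["press", "wait"], key=best)   # act on what the "real" model says pays off
print(best.__name__, next_action)                # -> model_a press
```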

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T20:55:32.354Z · LW(p) · GW(p)

Huh, looks like we are, while not in agreement, at least speaking the same language. Not sure how Dave managed to accomplish this particular near-magical feat.

Replies from: TheOtherDave, MugaSofer
comment by TheOtherDave · 2013-04-12T21:22:49.347Z · LW(p) · GW(p)

As before, I mostly attribute it to the usefulness of trying to understand what other people are saying.

I find it's much more difficult to express my own positions in ways that are easily understood, though. It's harder to figure out what is salient and where the vastest inferential gulfs are.

You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.

Replies from: shminux, MugaSofer
comment by Shmi (shminux) · 2013-04-12T21:42:08.516Z · LW(p) · GW(p)

You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.

I actually tried this a few times, even started a post draft titled "explain realism to a baby AI". In fact, I keep fighting my own realist intuition every time I don the instrumentalist hat. But maybe I am not doing it well enough.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-12T22:00:32.973Z · LW(p) · GW(p)

Ah. Yeah, if your intuitions are realist, I expect it suffers from the same problem as expressing my own positions. It may be a useful exercise in making your realist intuitions explicit, though.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T22:04:40.500Z · LW(p) · GW(p)

It may be a useful exercise in making your realist intuitions explicit, though.

You are right. I will give it a go. Just because it's obvious doesn't mean it should not be explicit.

comment by MugaSofer · 2013-04-12T21:27:38.555Z · LW(p) · GW(p)

Maybe we should organize a discussion where everyone has to take positions other than their own? If this really helps clarity (and I think it does) it could end up producing insights much more difficult (if not actually impossible) to reach with normal discussion.

(Plus it would be good practice at the Ideological Turing Test, generalized empathy skills, avoiding the antipattern of demonizing the other side, and avoiding steelmanning arguments into forms that don't threaten your own arguments (since they would be threatening the other side's arguments, as it were.))

Replies from: shminux, TheOtherDave
comment by Shmi (shminux) · 2013-04-12T21:39:42.312Z · LW(p) · GW(p)

Maybe we should organize a discussion where everyone has to take positions other than their own?

It seems to me to be one of the basic exercises in rationality, also known as "Devil's advocate". However, Eliezer dislikes it for some reason, probably because he thinks that it's too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one's own back. Not sure how much of this is taught or practiced at CFAR camps.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-12T21:52:30.230Z · LW(p) · GW(p)

Yup. In my experience, though, Devil's Advocates are usually pitted against people genuinely arguing their cause, not other devil's advocates.

However, Eliezer dislikes it for some reason, probably because he thinks that it's too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one's own back.

Yeah, I remember being surprised by that reading the Sequences. He seemed to be describing acting as your own devil's advocate, though, IIRC.

comment by TheOtherDave · 2013-04-12T22:52:23.207Z · LW(p) · GW(p)

Well, if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so, and I can try to continue defending it... though I'm not sure how good a job of it I'll do.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-12T23:17:03.390Z · LW(p) · GW(p)

I was actually thinking of random topics, perhaps ones that are better understood by LW regulars, at least at first. Still ...

if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so

Wait, there are nonrealists other than shminux here?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-12T23:26:36.505Z · LW(p) · GW(p)

Beats me.

comment by MugaSofer · 2013-04-12T21:19:10.767Z · LW(p) · GW(p)

Actually, that's just the model I was already using. I noticed it was shorter than Dave's, so I figured it might be useful.

comment by itaibn0 · 2013-04-13T15:00:16.062Z · LW(p) · GW(p)

I suggest we move the discussion to a top-level discussion thread. The comment tree here is huge and hard to navigate.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T15:11:36.821Z · LW(p) · GW(p)

If shminux could write an actual post on his beliefs, that might help a great deal, actually.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-13T15:21:32.174Z · LW(p) · GW(p)

I think I got a cumulative total of some 100 downvotes on this thread, so somehow I don't believe that a top-level post would be welcome. However, if TheOtherDave were to write one as a description of an interesting ontology he does not subscribe to, this would probably go over much better. I doubt he would be interested, though.

Replies from: itaibn0, MugaSofer
comment by itaibn0 · 2013-04-13T15:51:46.852Z · LW(p) · GW(p)

As it happens, I agree with your position. I was actually thinking of making a post that points to all the important comments here without taking a position, while asking the discussion to continue there. However, making an argumentative post is also possible, although I might not be willing to expend the effort.

Replies from: TheOtherDave, MugaSofer
comment by TheOtherDave · 2013-04-13T16:30:02.958Z · LW(p) · GW(p)

Cool.
If you are motivated at some point to articulate an anti-realist account of how non-accidental correlations between inputs come to arise (in whatever format you see fit), I'd appreciate that.

Replies from: itaibn0
comment by itaibn0 · 2013-04-14T12:16:40.150Z · LW(p) · GW(p)

As I understand it, the word "how" is used to demand a model for an event. Since I already have models for the correlations of my inputs, I don't feel the need for further explanation. More concretely, should you ask "How does closing your eyes lead to a blackout of your vision?" I would answer "After I close my eyes, my eyelids block all of the light from getting into my eye.", and I consider this answer satisfying. Just because I don't believe in an ontologically fundamental reality doesn't mean I don't believe in eyes and eyelids and light.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-14T19:06:10.921Z · LW(p) · GW(p)

OK. So, say I have two models, M1 and M2.

In M1, vision depends on light, which is blocked by eyelids. Therefore in M1, we predict that closing my eyes leads to a blackout of vision. In M2, vision depends on something else, which is not blocked by eyelids. Therefore in M2, we predict that closing my eyes does not lead to a blackout of vision.

At some later time, an event occurs in M1: specifically, I close my eyelids. At the same time, I have a blackout of vision. This increases my confidence in the predictive power of M1.

So far, so good.

At the same time, an identical event-pair occurs in M2: I close my eyes and my vision blacks out. This decreases my confidence in the predictive power of M2.
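(An aside added for concreteness, not part of the original exchange: the confidence shift described above is just Bayesian model comparison; the priors and likelihoods below are invented.)

```python
# M1: eyelids block light, so a blackout is expected; M2: vision doesn't depend on eyelids.
p_m1, p_m2 = 0.5, 0.5            # prior confidence in each model
p_blackout_m1 = 0.99             # P(blackout | eyes closed, M1)
p_blackout_m2 = 0.05             # P(blackout | eyes closed, M2)

# Observe: eyes close and vision blacks out. Update by Bayes' rule.
evidence = p_m1 * p_blackout_m1 + p_m2 * p_blackout_m2
p_m1_post = p_m1 * p_blackout_m1 / evidence
p_m2_post = p_m2 * p_blackout_m2 / evidence
print(round(p_m1_post, 3), round(p_m2_post, 3))   # ~0.952 vs ~0.048
```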

If I've understood you correctly, both the realist and the instrumentalist accounts of all of the above amount to "there are two models, M1 and M2, the same events occur in both, and as a consequence of those events we decide M1 is more accurate than M2."

The realist account goes on to say "the reason the same events occur in both models is because they are both fed by the same set of externally realized events, which exist outside of either model." The instrumentalist account, IIUC, says "the reason the same events occur in both models is not worth discussing; they just do."

Is that right?
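
To make the confidence shift concrete, here is a minimal sketch in Python - purely illustrative, with made-up likelihood numbers - of how a single observation moves credence between two models via Bayes' rule:

```python
# Purely illustrative: Bayesian comparison of two hypothetical models, M1
# and M2, given the "close eyes -> vision blacks out" observation.
# The likelihood numbers below are made up for the example.

def update(prior_m1, prior_m2, likelihood_m1, likelihood_m2):
    """Return posterior probabilities of M1 and M2 after one observation."""
    unnorm_m1 = prior_m1 * likelihood_m1
    unnorm_m2 = prior_m2 * likelihood_m2
    total = unnorm_m1 + unnorm_m2
    return unnorm_m1 / total, unnorm_m2 / total

# Start agnostic between the two models.
p_m1, p_m2 = 0.5, 0.5

# M1 (vision depends on light, eyelids block light) predicts the blackout
# strongly; M2 (vision doesn't depend on light) predicts it weakly.
p_m1, p_m2 = update(p_m1, p_m2, likelihood_m1=0.95, likelihood_m2=0.05)

print(p_m1, p_m2)  # ~0.95 vs ~0.05: confidence in M1 rises, M2 falls
```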

comment by MugaSofer · 2013-04-13T16:21:30.806Z · LW(p) · GW(p)

That's still possible, for convenience purposes, even if shminux is unwilling to describe their beliefs - your beliefs too, apparently; I think a lot of people will have some questions to ask you now - in a top-level post.

comment by MugaSofer · 2013-04-13T15:27:07.243Z · LW(p) · GW(p)

Ooh, excellent point. I'd do it myself, but unfortunately my reason for suggesting it is that I want to understand your position better - my puny argument would be torn to shreds, I have too many holes in my understanding :(

comment by PrawnOfFate · 2013-04-12T19:05:46.213Z · LW(p) · GW(p)

So, what are those possible worlds but models?

The actual world is also a possible world. Non-actual possible worlds are only accessible as models. Realists believe they can bring the actual world into line with desired models to some extent.

And isn't the "real world" just the most accurate model?

Not for realists.

Properly modeling your actions lets you affect the preferred "world" model's accuracy, and such. The remaining issue is whether the definition of "good" or "preferred" depends on realist vs instrumentalist outlook, and I don't see how. Maybe you can clarify.

For realists, wireheading isn't a good aim. For anti-realists, it is the only aim.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-12T19:39:36.312Z · LW(p) · GW(p)

For realists, wireheading isn't a good aim. For anti-realists, it is the only aim.

Realism doesn't preclude ethical frameworks that endorse wireheading.

I'm less clear about the second part, though.

Rejecting (sufficiently well implemented) wireheading requires valuing things other than one's own experience. I'm not yet clear on how one goes about valuing things other than one's own experience in an instrumentalist framework, but then again I'm not sure I could explain to someone who didn't already understand it how I go about valuing things other than my own experience in a realist framework, either.

Replies from: JGWeissman, PrawnOfFate
comment by JGWeissman · 2013-04-12T21:09:20.110Z · LW(p) · GW(p)

but then again I'm not sure I could explain to someone who didn't already understand it how I go about valuing things other than my own experience in a realist framework, either.

See The Domain of Your Utility Function.

comment by PrawnOfFate · 2013-04-12T20:02:39.475Z · LW(p) · GW(p)

Realism doesn't preclude ethical frameworks that endorse wireheading

No, but they are a minority interest.

I'm not yet clear on how one goes about valuing things other than one's own experience in an instrumentalist framework, but then again I'm not sure I could explain to someone who didn't already understand it how I go about valuing things other than my own experience in a realist framework, either.

If someone accepts that reality exists, you have a head start. Why do anti-realists care about accurate prediction? They don't think predictive models represent an external reality, and they don't think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it's for improving inputs.

Replies from: TheOtherDave, shminux
comment by TheOtherDave · 2013-04-12T20:24:46.445Z · LW(p) · GW(p)

they don't think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it's for improving inputs.

My understanding of shminux's position is that accurate models can be used, somehow, to improve inputs.

I don't yet understand how that is even in principle possible on his model, though I hope to improve my understanding.

comment by Shmi (shminux) · 2013-04-12T20:22:43.904Z · LW(p) · GW(p)

Your last statement shows that you have much to learn from TheOtherDave about the principle of charity. Specifically, don't assume the other person is stupider than you are without a valid reason. So, if you come up with a trivial objection to their point, consider that they might have come across it before and addressed it in some way. They might still be wrong, but likely not in the obvious ways.

Replies from: PrawnOfFate, MugaSofer
comment by PrawnOfFate · 2013-04-12T20:37:03.380Z · LW(p) · GW(p)

So where did you address it?

comment by MugaSofer · 2013-04-12T20:56:31.309Z · LW(p) · GW(p)

The trouble, of course, is that sometimes people really are wrong in "obvious" ways. Probably not high-status LWers, I guess.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T21:00:29.261Z · LW(p) · GW(p)

It happens, but this should not be the initial assumption. And I'm not sure who you mean by "high-status LWers".

Replies from: MugaSofer, MugaSofer
comment by MugaSofer · 2013-04-12T21:59:20.396Z · LW(p) · GW(p)

Sorry, just realized I skipped over the first part of your comment.

It happens, but this should not be the initial assumption.

Doesn't that depend on the prior? I think most holders of certain religious or political beliefs, for instance, hold them for trivially wrong reasons*. Perhaps you mean it should not be the default assumption here?

*Most conspiracy theories, for example.

comment by MugaSofer · 2013-04-12T21:17:53.315Z · LW(p) · GW(p)

I was referring to you. PrawnOfFate should not have expected you to make such a mistake, given the evidence.

comment by CCC · 2013-04-12T18:37:56.900Z · LW(p) · GW(p)

So, what are those possible worlds but models?

If I answer 'yes' to this, then I am confusing the map with the territory, surely? Yes, there may very well be a possible world that's a perfect match for a given model, but how would I tell it apart from all the near-misses?

The "real world" is a good deal more accurate than the most accurate model of it that we have of it.

comment by Bugmaster · 2013-04-12T04:37:18.098Z · LW(p) · GW(p)

Well, I'll give it another go, despite someone diligently downvoting all my related comments.

It's not me, FWIW; I find the discussion interesting.

That said, I'm not sure what methodology you use to determine which actions to take, given your statement that the "real world" is just the most accurate model. If all you cared about was the accuracy of your model, would it not be easier to avoid taking any physical actions, and simply change your model on the fly as it suits you? This way, you could always make your model fit what you observe. Yes, you'd be grossly overfitting the data, but is that even a problem?

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T04:55:08.554Z · LW(p) · GW(p)

I didn't say it's all I care about. Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice, depending on my preferences, the effort required and the odds of success, just like your garden variety realist would. As Eliezer used to emphasize, "it all adds up to normality".

Replies from: Bugmaster
comment by Bugmaster · 2013-04-12T05:11:15.058Z · LW(p) · GW(p)

Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice...

Would you do so if picking another model required less effort? I'm not sure how you can justify doing that.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T05:28:33.668Z · LW(p) · GW(p)

I am guessing that you, TimS, and nyan_sandwich all think that my version of instrumentalism is incompatible with having preferences over possible worlds. I have trouble understanding where this twist is coming from.

Replies from: Bugmaster, PrawnOfFate, private_messaging
comment by Bugmaster · 2013-04-12T07:02:23.302Z · LW(p) · GW(p)

It's not that I think that your version of instrumentalism is incompatible with preferences, it's more like I'm not sure I understand what the word "preferences" even means in your context. You say "possible worlds", but, as far as I can tell, you mean something like, "possible models that predict future inputs".

Firstly, I'm not even sure how you account for our actions affecting these inputs, especially given that you do not believe that various sets of inputs are connected to each other in any way; and without actions, preferences are not terribly relevant. Secondly, you said that a "preference" for you means something like, "a desire to make one model more accurate than the rest", but would it not be easier to simply instantiate a model that fits the inputs? Such a model would be 100% accurate, wouldn't it?

comment by PrawnOfFate · 2013-04-12T20:10:38.088Z · LW(p) · GW(p)

Your having a preference for worlds without, e.g., slavery can't possibly translate into something like "I want to change the world external to me so that it no longer contains slaves". I have trouble understanding what it would translate to. You could adopt models where things you don't like don't exist, but they wouldn't be accurate.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T20:28:53.158Z · LW(p) · GW(p)

Your having a preference for worlds without, e.g., slavery can't possibly translate into something like "I want to change the world external to me so that it no longer contains slaves".

No, but it translates to its equivalent:

I prefer models which describe a society without slavery to be accurate (i.e., confirmed by later testing).

Replies from: PrawnOfFate, MugaSofer
comment by PrawnOfFate · 2013-04-12T20:32:47.012Z · LW(p) · GW(p)

I prefer models which describe a society without slavery to be accurate (i.e., confirmed by later testing).

And how do you arrange that?

comment by MugaSofer · 2013-04-12T22:04:40.724Z · LW(p) · GW(p)

I prefer models which describe a society without slavery to be accurate (i.e., confirmed by later testing).

So you're saying you have a preference over the map, as opposed to the territory (your experiences, in this case).

That sounds subject to some standard pitfalls, offhand, where you try to fool yourself into choosing the "no-slaves" map instead of trying to optimize, well, reality - the slaves themselves - perhaps with an experience machine, through simple self-deception, or maybe some sort of exploit involving Occam's Razor.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-12T23:03:59.815Z · LW(p) · GW(p)

I agree that self-deception is a "real" possibility. Then again, it is also a possibility for a realist. Or a dualist. In fact, confusing map and territory is one of the most common pitfalls, as you well know. Would it be more likely for an instrumentalist to become instrumenta-lost? I don't see why it would be the case. For example, from my point of view, you arbitrarily chose a comforting Christian map (is it an inverse of "some sort of exploit involving Occam's Razor"?) instead of a cold hard uncaring one, even though you seem to be preferring realism over instrumentalism.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-12T23:30:44.853Z · LW(p) · GW(p)

Ah, no, sorry, I meant that those options would satisfy your stated preferences, not that they were pitfalls on the road to satisfying them. I'm suggesting that since you don't want to fall into those pitfalls, those aren't actually your preferences, whether because you've made a mistake or I have (please tell me if I have.)

comment by private_messaging · 2013-04-12T06:04:23.687Z · LW(p) · GW(p)

I propose a WW2 mechanical aiming computer as an example of a model. It is built from gears that can be easily and conveniently manufactured, and there's very little doubt that the universe does not use anything even remotely similar to produce the movement of a projectile through the air, even if we assume that such a question is meaningful.

A case can be made that physics is not that much different from a WW2 aiming computer (built out of mathematics that is available and can be conveniently used). And with regard to MWI, a case can be made that it is similar to removing the only ratchet in the mechanical computer and proclaiming the rest of the gears the reality, because somehow "from the inside" it would allegedly still feel the same, even though the mechanical computer, without this ratchet, no longer works for predicting anything.

Of course, it is not clear how close physics is to a mechanical aiming computer in terms of how the internals can correspond to the real world.

comment by [deleted] · 2013-04-12T04:10:53.427Z · LW(p) · GW(p)

So, what are those possible worlds but models? And isn't the "real world" just the most accurate model? Properly modeling your actions lets you affect the preferred "world" model's accuracy, and such. The remaining issue is whether the definition of "good" or "preferred" depends on realist vs instrumentalist outlook, and I don't see how. Maybe you can clarify.

Interesting. So we prefer that some models or others be accurate, and take actions that we expect to make that happen, in our current bag of models.

OK, I think I get it. I was confused about what the referent of your preferences would be if you did not have your models referring to something. I see that you have made the accuracy of various models the referent of preferences. This seems reasonable enough.

I can see now that I'm confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.

Replies from: shminux, MugaSofer
comment by Shmi (shminux) · 2013-04-12T04:27:32.210Z · LW(p) · GW(p)

I see that you have made the accuracy of various models the referent of preferences.

I like how you put it into some fancy language, and now it sounds almost profound.

I can see now that I'm confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.

It is entirely possible that I'm talking out of my ass here, and you will find a killer argument against this approach.

Replies from: None
comment by [deleted] · 2013-04-12T05:08:21.608Z · LW(p) · GW(p)

It is entirely possible that I'm talking out of my ass here, and you will find a killer argument against this approach.

Likewise the converse. I reckon both will get killed by a proper approach.

comment by MugaSofer · 2013-04-12T12:03:23.647Z · LW(p) · GW(p)

It works fine - as long as you only care about optimizing inputs, in which case I invite you to go play in the holodeck while the rest of us optimize the real world.

If you can't find a holodeck, I sure hope you don't accidentally sacrifice your life to save somebody or further some noble cause. After all, you won't be there to experience the resulting inputs, so what's the point?

Replies from: None
comment by [deleted] · 2013-04-13T18:03:08.663Z · LW(p) · GW(p)

You are arguing with a strawman.

It's not a utility function over inputs, it's over the accuracy of models.

If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can't affect the stuff outside the holodeck.

Just because someone frames things differently doesn't mean they have to make the obvious mistakes and start killing babies.

For example, I could do what you just did to "maximize expected utility over possible worlds" by choosing to modify my brain to have erroneously high expected utility. It's maximized now, right? See the problem with this argument?

It all adds up to normality, which probably means we are confused and there is an even simpler underlying model of the situation.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T19:10:52.472Z · LW(p) · GW(p)

You are arguing with a strawman.

You know, I'm actually not.

It's not a utility function over inputs, it's over the accuracy of models.

Affecting the accuracy of a specified model - a term defined as "how well it predicts future inputs" - is a subset of optimizing future inputs.

If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can't affect the stuff outside the holodeck.

You're still thinking like a realist. A holodeck doesn't prevent you from observing the real world - there is no "real world". It prevents you from testing how well certain models predict experiences when you take the action "leave the holodeck", unless of course you leave the holodeck - it's an opportunity cost and nothing more, and a minor one at that, since information holds only instrumental value.

Just because someone frames things differently doesn't mean they have to make the obvious mistakes and start killing babies.

Pardon?

For example, I could do what you just did to "maximize expected utility over possible worlds" by choosing to modify my brain to have erroneously high expected utility. It's maximized now right? See the problem with this argument?

Except that I (think that I) get my utility over the world, not over my experiences. Same reason I don't win the lottery with quantum suicide.

It all adds up to normality

You know, not every belief adds up to normality - just the true ones. Imagine someone arguing you had misinterpreted happiness-maximization because "it all adds up to normality".

comment by TimS · 2013-04-12T01:58:45.158Z · LW(p) · GW(p)

Only marginally. My feeling is that this apparent incommensurability comes from people not realizing that their disagreements are due to some deeply buried implicit assumptions, and from a lack of desire to find these assumptions and discuss them.

That's the standard physical realist response to Kuhn and Feyerabend. I find it confusing to hear it from you, because you certainly are not a standard physical realist.

In short, I think you are being a little too a la carte with your selection from various parts of philosophy of science.

Replies from: Bugmaster
comment by Bugmaster · 2013-04-12T02:15:37.430Z · LW(p) · GW(p)

In short, I think you are being a little too a la carte with your selection from various parts of philosophy of science.

Is there something wrong with doing that? As long as the end result is internally consistent, I don't see the problem.

Replies from: TimS
comment by TimS · 2013-04-12T02:38:50.894Z · LW(p) · GW(p)

Sure, my criticism has an implied "And I'm concerned you've managed to endorse A and ~A by accident."

Replies from: Bugmaster
comment by Bugmaster · 2013-04-12T03:24:57.969Z · LW(p) · GW(p)

Right, that's fair, but it's not really apparent from your reply which is A and which is ~A. I understand that physical realists say the same things as shminux, who professes not to be a physical realist -- but then, I bet physical realists say that water is wet, too...

Replies from: TimS
comment by TimS · 2013-04-12T03:45:37.855Z · LW(p) · GW(p)

I don't know that shminux has inadvertently endorsed A and ~A. I'm suspicious that this has occurred because he resists the standard physical realist definition of territory / reality, but responds to a quasi-anti-realist position with a physical realist answer that I suspect depends on the rejected definition of reality.

If I knew precisely where the contradiction was, I'd point it out explicitly. But I don't, so I can't.

Replies from: Bugmaster
comment by Bugmaster · 2013-04-12T03:50:12.966Z · LW(p) · GW(p)

Yeah, fair enough, I don't think I understand his position myself at this point...

comment by MugaSofer · 2013-04-11T22:16:21.889Z · LW(p) · GW(p)

Of course there is something external to our minds, which we all experience.

Most certainly. I call these experiences inputs.

Sorry if this is a stupid question, but what do you call the thingy that makes these inputs behave regularly?

comment by wedrifid · 2013-04-08T16:25:04.156Z · LW(p) · GW(p)

I think shminux's response is something like:

"Given a model that predicts accurately, what would you do differently if the objects described in the model do or don't exist at some ontological level? If there is no difference, what are we worrying about?"

If I recall correctly, he abandons that particular rejection when he gets an actual answer to the first question. Specifically, he argues against belief in the implied invisible when said belief leads to making actual decisions that will result in outcomes that he will not personally be able to verify (e.g. when considering Relativity and the accelerated expansion of the universe).

Replies from: TimS
comment by TimS · 2013-04-08T16:59:59.473Z · LW(p) · GW(p)

I think you are conflating two related, but distinct questions. Physical realism faces challenges from:

(1) the sociological analysis represented by works like The Structure of Scientific Revolutions

(2) the ontological status of objects that, in principle, could never be observed (directly or indirectly)

I took shminux as trying to duck the first debate (by adopting physical pragmatism), but I think most answers to the first question do not necessarily imply particular answers to the second question.

Replies from: wedrifid
comment by wedrifid · 2013-04-08T17:16:30.920Z · LW(p) · GW(p)

I think you are conflating two related, but distinct questions.

I am almost certain I am saying a different thing to what you think.

comment by MugaSofer · 2013-04-11T22:32:55.113Z · LW(p) · GW(p)

I can imagine using a model that contains elements that are merely convenient pretenses, and don't actually exist - like using simpler Newtonian models of gravity despite knowing GR is true (or at least more likely to be true than Newton.)

If some of these models featured things that I care about, it wouldn't matter, as long as I didn't think actual reality featured these things. For example, if an easy hack for predicting the movement of a simple robot was to imagine it being sentient (because I can easily calculate what humanlike minds would do using my own neural circuitry), I still wouldn't care if it was crushed, because the sentient being described by the model doesn't actually exist - the robot merely uses similar pathfinding.

Does that answer your question, TimS'-model-of-shminux?

comment by TimS · 2013-04-07T19:09:31.454Z · LW(p) · GW(p)

I don't understand the paperclipping reference, but MugaSofer is a hard-core moral realist (I think). Physical pragmatism (your position) is a reasonable stance in the physical realism / anti-realism debate, but I'm not sure what the parallel position is in the moral realism / anti-realism debate.

(Edit: And for some moral realists, the justification for that position is the "obvious" truth of physical realism and the non-intuitiveness of physical facts and moral facts having a different ontological status.)

In short, "physical prediction" is a coherent concept in a way that "moral prediction" does not seem to be. A sentence of the form "I predict retaliation if I wrong someone" is a psychological prediction, not a moral prediction. Defining what "wrong" means in that sentence is the core of the moral realism / anti-realism debate.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-08T02:57:10.791Z · LW(p) · GW(p)

In short, "physical prediction" is a coherent concept in a way that "moral prediction" does not seem to be.

I don't see it.

A sentence of the form "I predict retaliation if I wrong someone" is a psychological prediction, not a moral prediction. Defining what "wrong" means in that sentence is the core of the moral realism / anti-realism debate.

Do we really have to define "wrong" here? It seems more useful to say "certain actions of mine may cause this person to experience a violation of their innate sense of fairness", or something to that effect. Now we are doing cognitive science, not some vague philosophizing.

Replies from: TimS, nshepperd
comment by TimS · 2013-04-08T14:11:46.728Z · LW(p) · GW(p)

Do we really have to define "wrong" here? It seems more useful to say "certain actions of mine may cause this person to experience a violation of their innate sense of fairness", or something to that effect.

At a minimum, we need an enforceable procedure for resolving disagreements between different people whose "innate senses of fairness" disagree. Negotiated settlement might be the gold standard, but history shows it has seldom actually resolved major disputes.

Defining "wrong" helps because it provides a universal principled basis for others to intervene in the conflict. Alliance building also provides a basis, but is hardly universally principled (or fair, for most usages of "fair").

Replies from: shminux
comment by Shmi (shminux) · 2013-04-08T15:03:18.104Z · LW(p) · GW(p)

Defining "wrong" helps because it provides a universal principled basis for others to intervene in the conflict.

Yes, it definitely helps to define "wrong" as a rough boundary of acceptable behavior in a certain group. But promoting it from a convenient shortcut in your models into something bigger is hardly useful. Well, it is useful to you if you can convince others that your definition of "wrong" is the one true one and everyone else ought to abide by it or burn in hell. Again, we are out of philosophy and into psychology.

Replies from: TimS
comment by TimS · 2013-04-08T15:40:26.371Z · LW(p) · GW(p)

I'm glad we agree that defining "wrong" is useful, but I'm still confused how you think we go about defining "wrong." One could assert:

Wrong is what society punishes.

But that doesn't tell us how society figures out what to punish, or whether there are constraints on society's classifications. Psychology doesn't seem to answer these questions - there once were societies that practiced human sacrifice or human slavery.

In common usage, we'd like to be able say those societies were doing wrong, and your usage seems inconsistent with using "wrong" in that way.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-08T16:34:23.202Z · LW(p) · GW(p)

In common usage, we'd like to be able say those societies were doing wrong, and your usage seems inconsistent with using "wrong" in that way.

No, they weren't. Your model of objective wrongness is not a good one; it fails a number of tests.

"Human sacrifice and human slavery" is wrong now in the Westernized society, because it fits under the agreed definition of wrong today. It was not wrong then. It might not be wrong again in the future, after some x-risk-type calamity.

The evolution of the agreed-upon concept of wrong is a fascinating subject in human psychology, sociology and whatever other natural science is relevant. I am guessing that more formerly acceptable behaviors get labeled as "wrong" as the overall standard of living rises and average suffering decreases. As someone mentioned before, torturing cats is no longer the good clean fun it used to be. But that's just a guess; I would defer to the experts in the area - hopefully there are some around.

Some time in the future a perfectly normal activity of the day will be labeled as "wrong". It might be eating animals, or eating plants, or having more than 1.0 children per person, or refusing sex when asked politely, or using anonymous nicks on a public forum, or any other activity we find perfectly innocuous.

Conversely, there were plenty of "wrong" behaviors which aren't wrong anymore, at least not in the modern West, like proclaiming that Jesus is not the Son of God, or doing witchcraft, or marrying a person of the same sex, or...

The definition of wrong as an agreed-upon boundary of acceptable behavior matches observations. The way people come to such an agreement is a topic eminently worth studying, but it should not be confused with studying the concept of wrong as if it were some universal truth.

Replies from: TimS, nshepperd, MugaSofer
comment by TimS · 2013-04-08T16:54:16.332Z · LW(p) · GW(p)

Your position on moral realism has a respectable pedigree in moral philosophy, but I don't think it is parallel to your position on physical realism.


As I understand it, your response to the question "Are there electrons?" is something like:

This is a wrong question. Trying to find the answer doesn't resolve any actual decision you face.

By contrast, your response to "Is human sacrifice wrong?" is something like:

Not in the sense you mean, because "wrong" in that sense does not exist.


I don't think there are philosophical reasons why your positions on those two issues should be in parallel, but you seem to think that your positions are in parallel, and it does not look that way to me.

Replies from: Eugine_Nier, shminux
comment by Eugine_Nier · 2013-04-10T03:10:21.673Z · LW(p) · GW(p)

I don't think there are philosophical reasons why your positions on those two issues should be in parallel, but you seem to think that your positions are in parallel, and it does not look that way to me.

Without a notion of objective underlying reality, shminux had nothing to cash out any moral theory in.

comment by Shmi (shminux) · 2013-04-08T17:15:41.287Z · LW(p) · GW(p)

As I understand it, your response to the question "Are there electrons?" is something like:
This is a wrong question. Trying to find the answer doesn't resolve any actual decision you face.
By contrast, your response to "Is human sacrifice wrong?" is something like:
Not in the sense you mean, because "wrong" in that sense does not exist.

Not quite.

"Are there electrons?" "Yes, electron is an accurate model, though it it has its issues."

"Does light propagate in ether?" "Aether is not a good model, it fails a number of tests."

"is human sacrifice an unacceptable behavior in the US today?" "Yes, this model is quite accurate."

"Is 'wrong' independent of the group that defines it?" "No, this model fails a number of tests."

Seems pretty consistent to me, with all the parallels you want.

Replies from: TimS, MugaSofer
comment by TimS · 2013-04-09T01:20:09.959Z · LW(p) · GW(p)

this model fails a number of tests

You are not using the word "tests" consistently in your examples. For luminiferous aether, test means something like "makes accurate predictions." Substituting that into your answer to wrong yields:

No, this model fails to make accurate predictions.

Which I'm having trouble parsing as an answer to the question. If you don't mean for that substitution to be sensible, then your parallelism does not seem to hold together.

But in deference to your statement here, I am happy to drop this topic if you'd like me to. It is not my intent to badger you, and you don't have any obligation to continue a conversation you don't find enjoyable or productive.

comment by MugaSofer · 2013-04-08T20:16:22.269Z · LW(p) · GW(p)

"Is 'wrong' independent of the group that defines it?" "No, this model fails a number of tests."

It's worth noting that most people who make that claim are using a different definition of "wrong" from yours.

Replies from: wedrifid
comment by wedrifid · 2013-04-08T20:25:25.126Z · LW(p) · GW(p)

I suggest editing in additional line-breaks so that the quote is distinguished from your own contribution. (You need at least two 'enters' between the end of the quote and the start of your own words.)

Replies from: MugaSofer
comment by MugaSofer · 2013-04-09T12:57:15.012Z · LW(p) · GW(p)

Whoops, thanks.

comment by nshepperd · 2013-04-08T22:56:22.786Z · LW(p) · GW(p)

I expected that this discussion would not achieve anything.

Simply put, the mistake both of you are making was already addressed by the meta-ethics sequence. But for a non-LW reference, see Speakers Use Their Actual Language. "Wrong" does not refer to "whatever 'wrong' means in our language at the time". That would be circular. "Wrong" refers to some objective set of characteristics, that set being the same as those that we in reality disapprove of. Modulo logical uncertainty etc etc.

I expected this would not make sense to you since you can't cash out objective characteristics in terms of predictive black boxes.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-09T13:18:35.142Z · LW(p) · GW(p)

I expected that this discussion would not achieve anything.

Congratulations on a successful prediction. Of course, if you had made it before this conversation commenced, you could have saved us all the effort; next time you know something will fail, speaking up would be helpful.

Simply put, the mistake both of you are making was already addressed by the meta-ethics sequence. But for a non-LW reference, see Speakers Use Their Actual Language. "Wrong" does not refer to "whatever 'wrong' means in our language at the time". That would be circular. "Wrong" refers to some objective set of characteristics, that set being the same as those that we in reality disapprove of. Modulo logical uncertainty etc etc.

I think shminux is claiming that this set of characteristics changes dynamically, and thus it is more useful to define "wrong" dynamically as well. I disagree, but then we already have a term for this ("unacceptable") so why repurpose "wrong"?

I expected this would not make sense to you since you can't cash out objective characteristics in terms of predictive black boxes.

Who does "you" refer to here? All participants in this discussion? Sminux only?

Replies from: TheOtherDave, nshepperd
comment by TheOtherDave · 2013-04-09T15:13:52.594Z · LW(p) · GW(p)

we already have a term for this ("unacceptable") so why repurpose "wrong"?

Presumably shminux doesn't consider it a repurposing, but rather an articulation of the word's initial purpose.

next time you know something would fail, speaking up would be helpful.

Well, OK.

Using relative terms in absolute ways invites communication failure.

If I use "wrong" to denote a relationship between a particular act and a particular judge (as shminux does) but I only specify the act and leave the judge implicit (e.g., "murder is wrong"), I'm relying on my listener to have a shared model of the world in order for my meaning to get across. If I'm not comfortable relying on that, I do better to specify the judge I have in mind.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-09T18:34:15.067Z · LW(p) · GW(p)

Presumably shminux doesn't consider it a repurposing, but rather an articulation of the word's initial purpose.

Is shminux a native English speaker? Because that's certainly not how the term is usually used. Ah well, he's tapped out anyway.

Well, OK.

Using relative terms in absolute ways invites communication failure.

If I use "wrong" to denote a relationship between a particular act and a particular judge (as shminux does) but I only specify the act and leave the judge implicit (e.g., "murder is wrong"), I'm relying on my listener to have a shared model of the world in order for my meaning to get across. If I'm not comfortable relying on that, I do better to specify the judge I have in mind.

Oh, I can see why it failed - they were using the same term in different ways, each insisting their meaning was "correct" - I just meant you could use this knowledge to help avoid this ahead of time.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-09T19:55:04.792Z · LW(p) · GW(p)

I just meant you could use this knowledge to help avoid this ahead of time.

I understand. I'm suggesting it in that context.

That is, I'm asserting now that "if I find myself in a conversation where such terms are being used and I have reason to believe the participants might not share implicit arguments, make the arguments explicit" is a good rule to follow in my next conversation.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-09T21:53:37.278Z · LW(p) · GW(p)

Makes sense. Upvoted.

comment by nshepperd · 2013-04-10T02:43:43.313Z · LW(p) · GW(p)

Congratulations on a successful prediction. Of course, if you had made it before this conversation commenced, you could have saved us all the effort; next time you know something will fail, speaking up would be helpful.

Sorry. I guess I was feeling too cynical and discouraged at the time to think that such a thing would be helpful.

Who does "you" refer to here? All participants in this discussion? Sminux only?

In this case I meant to refer to only shminux, who calls himself an instrumentalist and does not like to talk about the territory (as opposed to AIXI-style predictive models).

Replies from: MugaSofer
comment by MugaSofer · 2013-04-10T14:40:43.888Z · LW(p) · GW(p)

Sorry. I guess I was feeling too cynical and discouraged at the time to think that such a thing would be helpful.

You might have been right, at that. My prior for success here was clearly far too high.

comment by MugaSofer · 2013-04-08T19:24:19.143Z · LW(p) · GW(p)

No, they weren't. Your model of objective wrongness is not a good one; it fails a number of tests.

"Human sacrifice and human slavery" is wrong now in the Westernized society, because it fits under the agreed definition of wrong today. It was not wrong then. It might not be wrong again in the future, after some x-risk-type calamity.

[...]

The definition of wrong as an agreed-upon boundary of acceptable behavior matches observations. The way people come to such an agreement is a topic eminently worth studying, but it should not be confused with studying the concept of wrong as if it were some universal truth.

This concept of "wrong" is useful, but a) there is an existing term which people understand to mean what you describe - "acceptable" - and b) it does not serve the useful function people currently expect "wrong" to serve; that of describing our extrapolated desires - it is not prescriptive.

I would advise switching to the more common term, but if you must use it this way I would suggest warning people first, to prevent confusion.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-08T20:46:36.126Z · LW(p) · GW(p)

You or TimS are the ones who introduced the term "wrong" into the conversation; I'm simply interpreting it in a way that makes sense to me. Tapping out due to lack of progress.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-08T22:30:54.572Z · LW(p) · GW(p)

You or TimS are the ones who introduced the term "wrong" into the conversation

That would be TimS, because he's the one discussing your views on moral realism with you.

I'm simply interpreting it in a way that makes sense to me.

And I'm simply warning you that using the term in a nonstandard way is predictably going to result in confusion, as it has in this case.

Tapping out due to lack of progress.

Well, that's your prerogative, obviously, but please don't tap out of your discussion with Tim on my account. And, um, if it's not on my account, you might want to say it to him, not me.

comment by nshepperd · 2013-04-08T04:25:05.772Z · LW(p) · GW(p)

Fairness is not about feelings of fairness.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-08T05:26:28.325Z · LW(p) · GW(p)

Feeling or not, it's a sense that exists in other primates, not just humans. You can certainly quantify the emotional reaction to real or perceived unfairness, which was my whole point: use cognitive science, not philosophy. And cognitive science is about building models and testing them, like any natural science.

comment by MugaSofer · 2013-04-07T20:33:57.606Z · LW(p) · GW(p)

Well, the trouble occurs when you start talking about the existence of things that, unlike electrons, you actually care about.

Say I value sentient life. If that life doesn't factor into my predictions, does it somehow not exist? Should I stop caring about it? (The same goes for paperclips, if you happen to value those.)

EDIT: I assume you consider the least computationally complex model "better at predicting certain future inputs"?

Replies from: shminux
comment by Shmi (shminux) · 2013-04-08T02:51:47.419Z · LW(p) · GW(p)

Say I value sentient life. If that life doesn't factor into my predictions, does it somehow not exist? Should I stop caring about it?

You have it backwards. You also use the term "exist" in the way I don't. You don't have to worry about refining models predicting inputs you don't care about.

I assume you consider the least computationally complex model "better at predicting certain future inputs"?

If there is the luxury of choosing among multiple models which give the same predictions, sure. Usually we are lucky if there is one good model.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-08T21:24:46.101Z · LW(p) · GW(p)

You also use the term "exist" in the way I don't.

Well, I am trying to get you to clarify what you mean.

You don't have to worry about refining models predicting inputs you don't care about.

But as I said, I don't care about inputs, except instrumentally. I care about sentient minds (or paperclips.)

Usually we are lucky if there is one good model.

Ah ... no. Invisible pink unicorns and Russell's Teapots abound. For example, what if any object passing over the cosmological horizon disappeared? Or the universe was created last Thursday, but perfectly designed to appear billions of years old? These hypotheses don't do any worse at predicting; they just violate Occam's Razor.
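
To make that concrete, here is a minimal sketch - purely illustrative, with made-up description lengths - of how a complexity-penalized prior separates two hypotheses that assign identical likelihoods to every observation:

```python
# Purely illustrative: two models assign identical likelihoods to all
# observations, so only a complexity-penalized prior (here 2**-length,
# a stand-in for a Solomonoff-style prior) separates them.

def posterior_odds(length_a_bits, length_b_bits, likelihood_ratio=1.0):
    """Posterior odds of model A over model B."""
    prior_odds = 2.0 ** (length_b_bits - length_a_bits)
    return prior_odds * likelihood_ratio

# Model A: "objects persist past the cosmological horizon."
# Model B: the same model plus an extra clause "...except they vanish
# when unobservable" (say, 20 bits longer to specify).
print(posterior_odds(length_a_bits=100, length_b_bits=120))  # ~1e6 : 1 in favor of A
```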

Replies from: shminux
comment by Shmi (shminux) · 2013-04-08T22:27:58.662Z · LW(p) · GW(p)

Well, I am trying to get you to clarify what you mean.

Believe me, I have tried many times in our discussions over the last several months. Unfortunately we seem to be speaking different languages which happen to use the same English syntax.

Invisible pink unicorns and Russell's Teapots abound.

Fine, I'll clarify. You can always complicate an existing model in a trivial way, which is what all your examples are doing. I was talking about models where one is not a trivial extension of the other with no new predictive power. That's just silly.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-09T13:07:30.579Z · LW(p) · GW(p)

Fine, I'll clarify. You can always complicate an existing model in a trivial way, which is what all your examples are doing. I was talking about models where one is not a trivial extension of the other with no new predictive power. That's just silly.

Well, considering how many people seem to think that interpretations of QM other than their own are just "trivial extensions with no new predictive power", it's an important point.

Believe me, I have tried many times in our discussions over the last several months. Unfortunately we seem to be speaking different languages which happen to use the same English syntax.

Well, it's pretty obvious we use different definitions of "existence". Not sure if that qualifies as a different language, as such.

That said, you seem to be having serious trouble parsing my question, so maybe there are other differences too.

Look, you understand the concept of a paperclip maximizer, yes? How would a paperclip maximizer that used your criteria for existence act differently?

EDIT: incidentally, we haven't been discussing this "over the last several months". We've been discussing it since the fifth.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-09T15:53:22.902Z · LW(p) · GW(p)

Well, considering how many people seem to think that interpretations of QM other than their own are just "trivial extensions with no new predictive power", it's an important point.

The interpretations are usually far from trivial, and most aspire to provide an inspiration for building a testable model some day. Some even have, and have been falsified. That's quite different from Last Thursdayism.

How would a paperclip maximizer that used your criteria for existence act differently?

Why would it? A paperclip maximizer is already instrumental; it has one goal in mind, maximizing the number of paperclips in the universe (which it presumably can measure with some sensors). It may have to develop advanced scientific concepts, like General Relativity, to be assured that the paperclips disappearing behind the cosmological horizon can still be counted toward the total, given some mild assumptions, like the Copernican principle.

Anyway, I'm quite skeptical that we are getting anywhere in this discussion.

Replies from: private_messaging, MugaSofer
comment by private_messaging · 2013-04-09T17:03:52.316Z · LW(p) · GW(p)

it has one goal in mind, maximizing the number of paperclips in the universe

In which universe? It doesn't know. And it may have uncertainty with regard to the true number. There are going to be hypothetical universes that produce the same observations but have ridiculously huge numbers of invisible paperclips at stake, which are influenced by the paperclipper's actions (it may even be that the simplest extra addition that makes the agent's actions influence invisible paperclips would utterly dominate all theories starting from some length, as it leaves the most length for a busy-beaver-like construction that makes the number of invisible paperclips ridiculously huge. One extra bit for a busy beaver is seriously a lot more paperclips). So given some sort of length prior that ignores the size of the hypothetical universe (the kind that won't discriminate against MWI just because it's big), those aren't assigned a low enough prior, and they dominate its expected utility calculations.

comment by MugaSofer · 2013-04-09T18:45:15.617Z · LW(p) · GW(p)

The interpretations are usually far from trivial, and most aspire to provide an inspiration for building a testable model some day. Some even have, and have been falsified. That's quite different from Last Thursdayism.

Well, I probably don't know enough about QM to judge if they're correct; but it's certainly a claim made fairly regularly.

Why would it? A paperclip maximizer is already instrumental; it has one goal in mind, maximizing the number of paperclips in the universe (which it presumably can measure with some sensors). It may have to develop advanced scientific concepts, like General Relativity, to be assured that the paperclips disappearing behind the cosmological horizon can still be counted toward the total, given some mild assumptions, like the Copernican principle.

Let's say it simplifies the equations not to model the paperclips as paperclips - it might be sufficient to treat them as a homogeneous mass of metal, for example. Does this mean that they do not, in fact, exist? Should a paperclipper avoid this at all costs, because it's equivalent to them disappearing?

Removing the territory/map distinction means something that wants to change the territory could end up changing the map ... doesn't it?

I'm wondering because I care about people, but it's often simpler to model people without treating them as, well, sentient.

Anyway, I'm quite skeptical that we are getting anywhere in this discussion.

Well, I've been optimistic that I'd clarified myself pretty much every comment now, so I have to admit I'm updating downwards on that.

comment by EHeller · 2013-04-06T01:27:13.280Z · LW(p) · GW(p)

I'm convinced you could construct parallel physics with completely different mechanics (maybe the narrow trails aren't as narrow as you'd think?) and get exactly the same results.

Depends on what you mean by 'different mechanics.' Weinberg's field theory textbook develops the argument that only quantum field theory, as a structure, allows for certain phenomenologically important characteristics (mostly cluster decomposition).

However, there IS an enormous amount of leeway within the field theory - you can make a theory where electric monopoles exist as explicit degrees of freedom and magnetic monopoles are topological gauge-field configurations, and it is dual to a theory where magnetic monopoles are the degrees of freedom and electric monopoles exist as field configurations. While these theories SEEM very different, they make identical predictions.

Similarly, if you can only make a finite number of measurements, adding extra dimensions is equivalent to adding lots of additional forces (the dimensional deconstruction idea), etc. Some 5D theories with gravity make the same predictions as some 4D theories without.

comment by Eugine_Nier · 2013-04-06T02:38:00.340Z · LW(p) · GW(p)

Seriously, I've tried explaining just the proof that electrons exist, and in the end the best argument is that all the math we've built assuming their existence has really good predictive value. Which sounds like great evidence until you start confronting all the strange loops (the best experiments assume electromagnetic fields...) in that evidence, and I don't even know how to -begin- untangling those.

The same is more-or-less true if you replace 'electrons' with 'temperature'.

comment by A1987dM (army1987) · 2013-04-06T19:26:51.590Z · LW(p) · GW(p)

The more I learn about the whole thing, the more I realize that all of Quantum Physics is basically a collection of miraculously working hacks, like narrow trails in a forest full of unknown deadly wildlife. This is markedly different from classical physics, including relativity, where most of the territory is mapped, but there are still occasional dangers, most of which are clearly marked with orange cones.

Yes. While I'm not terribly up-to-date with the ‘state-of-the-art’ in theoretical physics, I feel like the situation today with renormalization and stuff is like it was until 1905 for the Lorentz-FitzGerald contraction or the black-body radiation, when people were mystified by the fact that the equations worked because they didn't know (or, at least, didn't want to admit) what the hell they meant. A new Einstein clearing this stuff up is perhaps overdue now. (The most obvious candidate is “something to do with quantum gravity”, but I'm prepared to be surprised.)

comment by A1987dM (army1987) · 2013-04-06T19:17:16.814Z · LW(p) · GW(p)

You guys are making possible sources of confusion between the map and the territory sound like they're specific to QFT while they actually aren't. “Oh, I know what a ball is. It's an object where all the points on the surface are at the same distance from the centre.” “How can there be such a thing? The positions of atoms on the surface would fluctuate due to thermal motion. Then what is it, exactly, that you play billiards with?” (Can you find another example of this in a different recent LW thread?)

Replies from: EHeller
comment by EHeller · 2013-04-07T01:45:03.226Z · LW(p) · GW(p)

Your ball point is very different. My driving point is that there isn't even a nice, platonic-ideal type definition of particle IN THE MAP, let alone something that connects to the territory. I understand how my above post may lead you to misunderstand what I was trying to get at.

To rephrase my above comment, I might say: some of the features a MAP of a particle needs are that it's detectable in some way, and that it can be described in a non-relativistic limit by a Schroedinger equation. The standard QFT definition of a particle lacks both these features. It's also not fully consistent in the case of charged particles.

In QFT there is lots of confusion about how the map works, unlike in classical mechanics.

Replies from: shminux, army1987
comment by Shmi (shminux) · 2013-04-07T02:35:10.137Z · LW(p) · GW(p)

This reminds me of the recent conjecture that the black hole horizon is a firewall, which seems like one of those confusions about the map.

comment by A1987dM (army1987) · 2013-04-07T11:28:39.487Z · LW(p) · GW(p)

there isn't even a nice, platonic-ideal type definition of particle IN THE MAP

Why, is there a nice, platonic-ideal type definition of a rigid ball in the map (compatible with special relativity)? What happens to its radius when you spin it?

Replies from: EHeller
comment by EHeller · 2013-04-07T21:14:05.341Z · LW(p) · GW(p)

There is no 'rigid' in special relativity; the best you can do is Born-rigid. Even so, it's trivial to define a ball in special relativity: just define it in the frame of a corotating observer and use four-vectors to move to the same collection of events in other frames. You learn that a 'ball' in special relativity has some observer-dependent properties, but that's because length and time are observer-dependent in special relativity. So 'radius' isn't a good concept, but 'the radius so-and-so measures' IS a good concept.

comment by A1987dM (army1987) · 2013-04-06T10:33:23.318Z · LW(p) · GW(p)

and what it means that the definition is NOT observer independent.

[puts logical positivism hat on]

Why, it means this, of course.

[while taking the hat off:] Oh, that wasn't what you meant, was it?

Replies from: EHeller
comment by EHeller · 2013-04-07T01:53:30.902Z · LW(p) · GW(p)

Why, it means this, of course.

The Unruh effect is a specific instance of my general point (particle definition is observer dependent). All you've done is give a name to a sub-class of my point (not all observers see the same particles).

So should we expect ontology to be observer independent? If we should, what happens to particles?

comment by private_messaging · 2013-04-05T03:53:58.568Z · LW(p) · GW(p)

And yet it proclaims the issue settled in favour of MWI and argues about how wrong science is for not settling on MWI, and so on. The connection - that this deficiency is why MWI can't be settled on - sure does not come up here. Speaking of which, under any formal metric that he loves to allude to (e.g. Kolmogorov complexity), MWI as it is, is not even a valid code, for among other things this reason.

It doesn't matter how much simpler MWI is if we don't even know that it isn't too simple, merely guess that it might not be too simple.

edit: ohh, and the lack of a derivation of the Born rule is not the kind of thing I meant by an argument in favour of non-realism. You can be a non-realist with or without having derived the Born rule. How QFT deals with relativistic issues, as outlined by e.g. Mitchell Porter, is quite a good reason to doubt the reality of what goes on mathematically in between input and output. There's a view that the (current QM) internals are an artefact of the set of mathematical tricks which we like / can use effectively. The view that the internal mathematics is to the world as the rods and cogs and gears inside a WW2 aiming computer are to a projectile flying through the air.

comment by Ritalin · 2013-04-04T15:20:11.695Z · LW(p) · GW(p)

Are they, though? Irrational or stupid?

comment by MugaSofer · 2013-04-06T19:28:37.917Z · LW(p) · GW(p)

What one can learn is that the allegedly 'settled' and 'solved' is far from settled and solved and is a matter of opinion as of now. This also goes for qualia and the like; we haven't reduced them to anything, merely asserted.

coughcreationistscough

comment by Vaniver · 2013-04-03T20:06:51.822Z · LW(p) · GW(p)

I defected from physics during my Master's, but this is basically the impression I had of the QM sequence as well.

comment by Vaniver · 2013-04-01T19:09:30.094Z · LW(p) · GW(p)

Carl often hears about, anonymizes, and warns me when technical folks outside the community are offended by something I do. I can't recall hearing any warnings from Carl about the QM sequence offending technical people.

That sounds like reasonable evidence against the selection effect.

Bluntly, if shminux can't grasp the technical argument for MWI then I wouldn't expect him to understand what really high-class technical people might think of it.

I strongly recommend against both the "advises newcomers to skip the QM sequence -> can't grasp technical argument for MWI" and "disagrees with MWI argument -> poor technical skill" inferences.

Replies from: wedrifid
comment by wedrifid · 2013-04-02T03:21:11.252Z · LW(p) · GW(p)

I strongly recommend against both the "advises newcomers to skip the QM sequence -> can't grasp technical argument for MWI"

That inference isn't made. Eliezer has other information from which to reach that conclusion. In particular, he has several years' worth of ranting and sniping from Shminux about his particular pet peeve. Even if you disagree with Eliezer's conclusion, it is not correct to claim that Eliezer is making this particular inference.

and "disagrees with MWI argument -> poor technical skill" inferences.

Again, Eliezer has a large body of comments from which to reach the conclusion that Shminux has poor technical skill in the areas necessary for reasoning on that subject. The specific nature of the disagreement would be relevant, for example.

Replies from: Vaniver
comment by Vaniver · 2013-04-02T05:12:32.636Z · LW(p) · GW(p)

That inference isn't made. Eliezer has other information from which to reach that conclusion. In particular, he has several years worth of ranting and sniping from Shminux about his particular pet peeve.

That very well could be, in which case my recommendation about that inference does not apply to Eliezer.

I will note that this comment suggests that Eliezer's model of shminux may be underdeveloped, and that caution in ascribing motives or beliefs to others is often wise.

Replies from: wedrifid
comment by wedrifid · 2013-04-02T06:38:43.161Z · LW(p) · GW(p)

I will note that this comment suggests that Eliezer's model of shminux may be underdeveloped

It really doesn't. At best it suggests Eliezer could have been more careful in word selection regarding Shminux's particular agenda. 'About' rather than 'with' would be sufficient.

comment by [deleted] · 2013-04-03T03:09:42.041Z · LW(p) · GW(p)

I'm just kind of surprised the QM part worked, and it's possible that might be due to Mihaly having already taken standard QM so that he could clearly see the contrast between the explanation he got in college and the explanation on LW.

I'm no IMO gold medalist (which really just means I'm giving you explicit permission to ignore the rest of my comment) but it seems to me that a standard understanding of QM is necessary to get anything out of the QM sequence.

It's a pity I'll probably never have time to write up TDT.

Revealed preferences are rarely attractive.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-03T04:26:13.801Z · LW(p) · GW(p)

Revealed preferences are rarely attractive.

Adds to "Things I won't actually get put on a T-shirt but sort of feel I ought to" list.

comment by Shmi (shminux) · 2013-04-01T17:19:48.066Z · LW(p) · GW(p)

As others noted, you seem to be falling prey to the selection bias. Do you have an estimate of how many "IMO gold medalists" gave up on MIRI because its founder, in defiance of everything he wrote before, confidently picks one untestable from a bunch and proclaims it to be the truth (with 100% certainty, no less, Bayes be damned), despite (or maybe due to) not even being an expert in the subject matter?

EDIT: My initial inclination was to simply comply with your request, probably because I grew up being taught deference to and respect for authority. Then it struck me as one of the most cultish things one could do.

Replies from: hairyfigment, philh, Eliezer_Yudkowsky, Eliezer_Yudkowsky, TheOtherDave
comment by hairyfigment · 2013-04-01T21:34:31.221Z · LW(p) · GW(p)

with 100% certainty, no less, Bayes be damned

Is this an April Fool's joke? He says nothing of the kind. The post which comes closest to this explicitly says that it could be wrong, but "the rational probability is pretty damned small." And counting the discovery of time-turners, he's named at least two conceivable pieces of evidence that could change that number.

What do you mean when you say you "just don't put nearly as much confidence in it as you do"?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-12T19:05:57.445Z · LW(p) · GW(p)

Maybe it's a reference to the a priori nature of his arguments for MW? Or something? It's a strange claim to make, TBH.

comment by philh · 2013-04-02T00:11:01.710Z · LW(p) · GW(p)

Do you have an estimate of how many "IMO gold medalists" gave up on MIRI because [X]

The number of IMO gold medalists is sufficiently low, and the probability of any one of them having read the QM sequence is sufficiently small, that my own estimate would be less than one regardless of X.

(I don't have a good model of how much more likely an IMO gold medalist would be to have read the QM sequence than any other reference class, so I'm not massively confident.)
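(A rough back-of-the-envelope version of that estimate, with illustrative numbers that are assumptions rather than anything stated above: the IMO awards roughly 35 golds a year, so a few decades of the event gives on the order of 1,500-2,000 living gold medalists.)

```latex
% Expected number of IMO gold medalists who have read the QM sequence,
% under the illustrative (assumed) figures above:
%   E = N_gold * p_read
% e.g. N_gold ~ 1750 living medalists and p_read ~ 5e-4 (one in two thousand) gives
E \;=\; N_{\text{gold}} \cdot p_{\text{read}} \;\approx\; 1750 \times 5\times 10^{-4} \;\approx\; 0.9 \;<\; 1
```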

Replies from: private_messaging
comment by private_messaging · 2013-04-02T05:00:28.331Z · LW(p) · GW(p)

There are plenty of things roughly comparable to the IMO in terms of selectivity (the IMO gives what, ~35 golds a year?)... E.g. I'm #10 of all time on a popular programming contest site (I'm dmytry).

This discussion is really hilarious, especially the attempts to re-frame a commoner-oriented, qualitative and incomplete picture of QM as something which technical people appreciate and non-technical people don't. (Don't you want to be one among the techies?)

Replies from: philh
comment by philh · 2013-04-02T19:17:11.054Z · LW(p) · GW(p)

Selectivity, in the relevant sense, is more than just a question of how many people are granted something.

How many people are not on that site, but could rank highly if they chose to try? I'm guessing it's far more than the number of people who have never taken part in the IMO, but who could get a gold medal if they did.

(The IMO is more prestigious among mathematicians than topcoder is among programmers. And countries actively recruit their best mathematicians for the IMO. Nobody in the Finnish government thought it would be a good idea to convince and train Linus Torvalds to take part in an internet programming competition, so I doubt Linus Torvalds is on topcoder.)

There certainly are things as selective or more than the IMO (for example, the Fields medal), but I don't think topcoder is one of them, and I'm not convinced about "plenty". (Plenty for what purpose?)

Replies from: private_messaging, private_messaging
comment by private_messaging · 2013-04-05T06:09:34.864Z · LW(p) · GW(p)

I've tried to compare it more accurately.

It's very hard to evaluate selectivity. It's not just the raw number of people participating. It seems that the large majority of serious ACM ICPC participants (both contestants and their coaches) are practising on Topcoder, and for the ICPC the best college CS students are recruited much the same as the best high-school math students are for the IMO.

I don't know if Linus Torvalds would necessarily do great on this sort of thing - his talents are primarily in software design, and in his persistence as the unifying force behind Linux. (And are you sure you'd recruit a 22-year-old Linus Torvalds who had just started writing a Unix clone?) It's also the case that 'programming contest' is a bit of a misnomer - the winning is primarily about applied mathematics - just as 'computer science' is a misnomer.

In any case, it's highly dubious that understanding the QM sequence is as selective as any contest. I fully get that Copenhagen is clunky whereas MWI doesn't have the collapse, and that the collapse fits in very badly. That's not at all the issue. However badly something fits, you can only throw it away when you've figured out how to do without it. Also, commonly, the wavefunction, the collapse, and other internals are seen as mechanisms of prediction which may, or may not, have anything to do with "how the universe does it" (even if the question of "how the universe does it" is meaningful, it may still be the case that the internals of the theory have nothing to do with that, as the internals are massively based upon our convenience). And worse still, MWI is in many very important ways lacking.

comment by private_messaging · 2013-04-03T05:37:41.233Z · LW(p) · GW(p)

Selectivity, in the relevant sense, is more than just a question of how many people are granted something.

Of course. There's the number of potential participants, self selection, and so on.

How many people are not on that site, but could rank highly if they chose to try? I'm guessing it's far more than the number of people who have never taken part in the IMO, but who could get a gold medal if they did.

IMO is a highschool event, and 'taking part' in terms of actually winning entails a lot of very specific training instead of education.

(The IMO is more prestigious among mathematicians than topcoder is among programmers. And countries actively recruit their best mathematicians for the IMO. Nobody in the Finnish government thought it would be a good idea to convince and train Linus Torvalds to take part in an internet programming competition, so I doubt Linus Torvalds is on topcoder.)

Nobody can recruit Grigori Perelman for IMO, either.

There's the ACM ICPC, which is roughly the programming equivalent of the IMO. Finalists have huge overlap with TC. edit: more current. Of course, TC lacks the prestige of the ACM ICPC, but on the other hand it is not a school event.

There certainly are things as selective or more than the IMO (for example, the Fields medal), but I don't think topcoder is one of them, and I'm not convinced about "plenty". (Plenty for what purpose?)

Plenty for the purpose of coming across that volume of technical brilliance and noting and elevating it to its rightful place by now. Less facetiously: a lot of people know everything that was presented in the QM paper, and of those pretty much everyone either considers MWI to be an open question, an irrelevant question, or the like.

edit: made clearer with quotations.

Replies from: philh
comment by philh · 2013-04-03T07:39:05.716Z · LW(p) · GW(p)

Nobody can recruit Grigori Perelman for IMO, either.

Perelman is an IMO gold medalist.

Replies from: private_messaging
comment by private_messaging · 2013-04-03T07:54:22.983Z · LW(p) · GW(p)

Hmm. Good point. My point was though that you can't recruit adult mathematicians for it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-01T18:40:08.930Z · LW(p) · GW(p)

Well, I'm sorry to say this, but part of what makes authority Authority is that your respect is not always required. Frankly, in this case Authority is going to start deleting your comments if you keep on telling newcomers who post in the Welcome thread not to read the QM sequence, which you've done quite a few times at this point unless my memory is failing me. You disagree with MWI. Okay. I get it. We all get it. I still want the next Mihaly to read the QM Sequence and I don't want to have this conversation every time, nor is it an appropriate greeting for every newcomer.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-01T18:48:44.049Z · LW(p) · GW(p)

Sure, your site, your rules.

Just to correct a few inaccuracies in your comment:

You disagree with MWI.

I don't, I just don't put nearly as much confidence in it as you do. It is also unfortunately abused on this site quite a bit.

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer", only those who appear to be stuck on it. Surely Mihaly had no difficulties with it, so none of my warnings would interfere with "still want the next Mihaly to read the QM Sequence".

Replies from: wedrifid, MugaSofer, shminux
comment by wedrifid · 2013-04-02T10:54:44.332Z · LW(p) · GW(p)

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer"

The claim you made that prompted the reply was:

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading.

It is rather disingenuous to then express exaggerated 'let alone' rejections of the reply "nor is it an appropriate greeting for every newcomer".

comment by MugaSofer · 2013-04-06T09:25:15.192Z · LW(p) · GW(p)

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer", only those who appear to be stuck on it.

Uhuh.

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading.

That said, kudos to you for remaining calm and reasonable.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-06T21:40:55.148Z · LW(p) · GW(p)

You have a point, it's easy to read my first comment rather uncharitably. I should have been more precise:

"My standard advice to all newcomers [who mention difficulties with the QM sequence]..." which is much closer to what actually happens. I don't bring it up out of the blue every time I greet someone.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-07T23:43:57.800Z · LW(p) · GW(p)

"My standard advice to all newcomers [who mention difficulties with the QM sequence]..."

Sorry, could you point out where difficulties with the QM sequence were mentioned? All I could find was

I'm currently working my way through the sequences, just getting into the quantum physics sequence now.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-08T02:46:07.467Z · LW(p) · GW(p)

You are right. In my mind I read it as "I read through everything up until this, and this quantum thing looks scary and formidable, but it's next, so I better get on with it", which could have been a total misinterpretation of what was meant. So yeah, I probably jumped in a bit early. Not that I think it was bad advice. Anyway, it's all a moot point now; I have promised EY not to give newcomers unsolicited advice telling them to skip the QM sequence.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-08T18:04:59.229Z · LW(p) · GW(p)

Fair enough, I thought I might have somehow missed it.

comment by Shmi (shminux) · 2013-04-02T23:31:24.814Z · LW(p) · GW(p)

Hmm, the above got a lot of upvotes... I have no idea why.

Replies from: wedrifid
comment by wedrifid · 2013-04-03T02:42:11.576Z · LW(p) · GW(p)

Hmm, the above got a lot of upvotes... I have no idea why.

Egalitarian instinct. Eliezer is using power against you, which drastically raises the standards of behavior expected from him while doing so---including less tolerance of him getting things wrong.

Your reply used the form 'graceful' in a context where you would have been given a lot of leeway even to be (overtly) rude. The corrections were portrayed as gentle and patient. Whether the corrections happen to be accurate or reasonable is usually almost irrelevant for the purpose of determining people's voting behavior this far down into a charged thread.

Note that even though I approve of Eliezer's decision to delete comments of yours disparaging the QM sequence to newcomers I still endorse your decision to force Eliezer to use his power instead of deferring to his judgement simply because he has the power. It was the right decision for you to make from your perspective and is also a much more desirable precedent.

Replies from: OrphanWilde, VCavallo, satt
comment by OrphanWilde · 2013-04-04T20:32:29.075Z · LW(p) · GW(p)

I deliberately invoke this tactic on occasion in arguments on other people's turf, particularly where the rules are unevenly applied. I was once accused by an acquaintance who witnessed it of being unreasonably reasonable.

It's particularly useful when moderators routinely take sides in debates. It makes it dangerous for them to use their power to shut down dissent.

comment by VCavallo · 2013-04-04T19:16:14.885Z · LW(p) · GW(p)

Egalitarian instinct. Eliezer is using power against you, which drastically raises the standards of behavior expected from him while doing so---including less tolerance of him getting things wrong.

Nailed it on the head. As my cursor began to instinctively hover over the "upvote" button on shminux's comment, I caught myself and thought, why am I doing this? And while I didn't come to your exact conclusion, I realized my instinct had something to do with EY's "use of power" and shminux's gentle reply. Some sort of underdog quality that I didn't yet take the time to assess, but that my mouse-using hand wanted badly to blindly reward.

I'm glad you pieced out the exact reasoning behind the scenes here. Stopping and taking a moment to understand behavior and then correct based on that understanding is why I am here.

That said, I really should think for a long time about your explanation before voting you up, too!

Replies from: shminux
comment by Shmi (shminux) · 2013-04-04T20:10:10.700Z · LW(p) · GW(p)

I'm glad you pieced out the exact reasoning behind the scenes here.

If it is as right as it is insightful (which it undeniably is), I would expect those who come across wedrifid's explanation to go back and change their vote, resulting in %positive going sharply down. It doesn't appear to be happening.

Replies from: wedrifid, Kaj_Sotala, VCavallo
comment by wedrifid · 2013-04-06T10:11:18.097Z · LW(p) · GW(p)

If it is as right as it is insightful (which it undeniably is), I would expect those who come across wedrifid's explanation to go back and change their vote, resulting in %positive going sharply down.

A quirk (and often a bias) humans have is that we tend to assume that just because a social behavior or human instinct can be explained it must thereby be invalidated. Yet everything can (in principle) be explained and there are still things that are, in fact, noble. My parents' love for me and my siblings is no less real because I am capable of reasoning about the inclusive fitness of those peers of my ancestors that happened to love their children less.

In this case the explanation given was, roughly speaking "egalitarian instinct + politeness". And personally I have to say that the egalitarian instinct is one of my favorite parts of humanity and one of the traits that I most value in those I prefer to surround myself with (Rah foragers!).

All else being equal, the explanation in terms of egalitarian instinct and precedent-setting regarding the use of authority describes (what I consider to be) a positive picture and in itself is no reason to downvote. (The comment deserves to be downvoted for inaccuracy, as described in other comments, but this should be considered separately from the explanation of the reasons for upvoting.)

In terms of evidence I would say that I would not consider mass downvoting of this comment to be (non-trivial) evidence in support of my explanation. Commensurately I don't consider the lack of such downvoting to be much evidence against. As for how much confidence I have in the explanation... well, I am reasonably confident that the egalitarian instinct and politeness are factors but far less confident that they represent a majority of the influence. Even my (mere) map of the social forces at work points to other influences that are at least as strong---and my ability to model and predict a crowd is far from flawless.

The question you ask is a surprisingly complicated one, if looked at closely.

comment by Kaj_Sotala · 2013-04-14T20:06:44.388Z · LW(p) · GW(p)

I believe that I already knew I was acting on egalitarian instinct when I upvoted your comment.

comment by VCavallo · 2013-04-04T20:36:10.282Z · LW(p) · GW(p)

They could just be a weird sort of lazy whereby they don't scroll back up and change anything. Or maybe they never see his post. Or something else. I don't think the %positive-not-going-down-yet is any indication that wedrifid's comment is not right.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-04T20:59:58.856Z · LW(p) · GW(p)

You may well be right, it's hard to tell. I don't see an easy way of finding out short of people replying like you have. I assumed that there were enough of those who would react to make the effect visible, and I don't see how someone agreeing with wedrifid's assessment would go back and upvote my original comment, so even a partial effect could be visible. But anyway, this is not important enough to continue discussing, I think. Tapping out.

Replies from: VCavallo
comment by VCavallo · 2013-04-04T21:16:52.581Z · LW(p) · GW(p)

I completely agree with what you are saying and also tap out, even though it may be redundant. Let us kill this line of comments together.

Replies from: Randy_M
comment by Randy_M · 2013-04-05T00:51:27.348Z · LW(p) · GW(p)

If you both tap out, then anyone who steps into the discussion wins by default!

Replies from: wedrifid, Eugine_Nier
comment by wedrifid · 2013-04-06T10:14:55.656Z · LW(p) · GW(p)

If you both tap out, then anyone who steps into the discussion wins by default!

In many such cases it may be better to say that if both tap out then everybody wins by default!

Replies from: Randy_M, TheOtherDave
comment by Randy_M · 2013-04-08T19:12:58.195Z · LW(p) · GW(p)

-3 karma, apparently.

comment by TheOtherDave · 2013-04-06T18:24:02.344Z · LW(p) · GW(p)

In discussions where everyone tapping out is superior to the available alternatives, I'm more inclined to refer to the result as "minimizing loss" than "winning".

Replies from: Kawoomba
comment by Kawoomba · 2013-04-06T18:44:37.269Z · LW(p) · GW(p)

Well to your credit you don't see LW as a zero sum game.

comment by Eugine_Nier · 2013-04-08T03:34:43.543Z · LW(p) · GW(p)

What does he win?

comment by satt · 2013-04-03T09:18:54.653Z · LW(p) · GW(p)

Note that even though I tired of your talking about QM years ago

This is the second time you mention shminux having talked about QM for years. But I can't find any comments or posts he's made before July 2011. Does he have a dupe account or something else I don't know about?

Replies from: shminux, Kawoomba, wedrifid
comment by Shmi (shminux) · 2013-04-03T18:26:49.104Z · LW(p) · GW(p)

Since you are asking... July 2011 is right for the join date, and some time later is when I first voiced any opinion related to the QM sequence and MWI (I did read through it once and have browsed it now and again since). No, I did not have another account before that. As a long-term freenode ##physics IRC channel moderator, I dislike being confused about users' previous identities, so I don't do it myself (hence the silly nick chosen a decade or so ago, which has lost all relevance by now). On the other hand, I don't mind people wanting a clean slate with a new nick, just not using socks to express a controversial or karma-draining opinion they are too chicken to have linked to their main account.

I also encourage you to take whatever wedrifid writes about me with a grain of salt. While I read what he writes and often upvote when I find it warranted, I quite publicly announced here about a year ago that I will not be replying to any of his comments, given how counterproductive it had been for me. (There are currently about 4 or 5 people on my LW "do-not-reply" list.) I have also warned other users once or twice, after I noticed them in a similarly futile discussion with wedrifid. I would be really surprised if this did not color his perception and attitude. It certainly would for me, were the roles reversed.

comment by Kawoomba · 2013-04-03T09:32:21.693Z · LW(p) · GW(p)

I'm also interested in this. Hopefully it's not an overt lie or something.

comment by wedrifid · 2013-04-03T10:40:38.651Z · LW(p) · GW(p)

This is the second time you mention shminux having talked about QM for years. But I can't find any comments or posts he's made before July 2011. Does he have a dupe account or something else I don't know about?

I don't keep an exact mental record of the join dates. My guess from intuitive feel was "2 years". It's April 2013. It was July 2011 when the account joined. If anything you have prompted me to slightly increase my confidence in the calibration of my account-joining estimator.

If the subject of how long user:shminux has been complaining about the QM sequence ever becomes relevant again I'll be sure to use Wei Dai's script, search the text and provide a link to the exact first mention. In this case, however, the difference hardly seems significant or important.

Does he have a dupe account or something else I don't know about?

I doubt it. If so I praise him for his flawless character separation.

Replies from: satt
comment by satt · 2013-04-03T13:07:14.064Z · LW(p) · GW(p)

Thanks for clarifying. I asked not because the exact timing is important but because the overstatement seemed uncharacteristic (albeit modest), and I wasn't sure whether it was just offhand pique or something else. (Also, if something funny had been going on, it might've explained the weird rancour/sloppiness/mindkilledness in the broader thread.)

Replies from: wedrifid
comment by wedrifid · 2013-04-03T13:35:28.774Z · LW(p) · GW(p)

Thanks for clarifying. I asked not because the exact timing is important but because the overstatement seemed uncharacteristic (albeit modest), and I wasn't sure whether it was just offhand pique or something else.

Just an error.

Note that in the context there was no particular pique. I intended acknowledgement of established disrespect, not conveyance of additional disrespect. The point was that I was instinctively (as well as rationally) motivated to support shminux despite also approving of Eliezer's declared intent, which illustrates the strength of the effect.

Fortunately nothing is lost if I simply remove the phrase you quote entirely. The point remains clear even if I remove the detail of why I approve of Eliezer's declaration.

Also, if something funny had been going on, it might've explained the weird rancour/sloppiness/mindkilledness in the broader thread.

The main explanation there is just that incarnations of this same argument have been cropping up with slight variations for (what seems like) a long time. As with several other subjects there are rather clear battle lines drawn and no particular chance of anyone learning anything. The quality of the discussion tends to be abysmal, riddled with status games and full of arguments that are sloppy in the extreme. As well as the problem of persuasion through raw persistence.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-01T18:12:46.503Z · LW(p) · GW(p)

Bluntly, IMO gold medalists who can conceive of working on something 'crazy' like FAI would be expected to better understand the QM sequence than that. Even more so they would be expected to understand the core arguments better than to get offended by my having come to a conclusion. I haven't heard from the opposite side at all, and while the probability of my hearing about it might conceivably be low, my priors on it existing are rather lower than yours, and the fact that I have heard nothing is also evidence. Carl, who often hears (and anonymizes) complaints from the outside x-risk community, has not reported to me anyone being offended by my QM sequence.

Smart people want to be told something smart that they haven't already heard from other smart people and that doesn't seem 'obvious'. The QM sequence is demonstrably not dispensable for this purpose - Mihaly said the rest of LW seemed interesting but insufficiently I-wouldn't-have-thought-of-that. Frankly I worry that QM isn't enough but given how long it's taking me to write up the Lob problem, I don't think I can realistically try to take on TDT.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-01T18:24:58.503Z · LW(p) · GW(p)

Again, you seem to be generalizing from a single example, unless you have more data points than just Mihaly.

comment by TheOtherDave · 2013-04-01T17:49:09.157Z · LW(p) · GW(p)

"IMO good medalists"

Note that the original text was "gold," not "good".

I assume IMO is the International Mathematical Olympiad(1). Not that this in any way addresses or mitigates your point; just figured I'd point it out.

(1) If I've understood the wiki article, ~35 IMO gold medals are awarded every year.

Replies from: shminux, MugaSofer
comment by Shmi (shminux) · 2013-04-01T18:15:04.612Z · LW(p) · GW(p)

Thanks, I fixed the typo.

comment by MugaSofer · 2013-04-10T16:10:54.419Z · LW(p) · GW(p)

Huh. I'd assumed it was short for "In My Opinion".

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-10T16:44:16.128Z · LW(p) · GW(p)

Yeah, that confused me on initial reading, though some googling clarified matters, and I inferred from the way shminux (mis)quoted that something similar might be going on there, which is why I mentioned it.

comment by TimS · 2013-04-01T15:34:11.876Z · LW(p) · GW(p)

QM Sequence is two parts:

(1) QM for beginners
(2) Philosophy-of-science on believing things when the evidence is in equipoise (or absent) - pick the simpler hypothesis.

I got part (1) from reading Dancing Wu-Li Masters, but I can clearly see the value to readers without that background. But teaching foundational science is separate from teaching Bayesian rationalism.

The philosophy of the second part is incredibly controversial. Much more than you acknowledge in the essays, or acknowledge now. Treating the other side of any unresolved philosophical controversy as if it is stupid, not merely wrong, is excessive and unjustified.

In short, the QM sequence would seriously benefit from the sort of philosophical background stuff that is included in your more recent essays. Including some more technical discussion of the opposing position.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-01T16:19:35.841Z · LW(p) · GW(p)

If you learned quantum mechanics from that book, you may have seriously mislearned it. It's actually pretty decent at describing everything up to, but excluding, quantum physics. When it comes to QM, however, the author sacrifices useful understanding in favor of mysticism.

Replies from: TimS
comment by TimS · 2013-04-01T19:13:11.618Z · LW(p) · GW(p)

Hrm? On a conceptual level, is there more to QM than the Uncertainty Principle and Wave-Particle Duality? DWLM mentions the competing interpretations, but choosing an interpretation is not strictly necessary to understand QM predictions.

For clarity, I consider the double-slit experimental results to be an expression of wave-particle duality.


I will admit that DWLM does a poor job of preventing billiard-ball QM theory ("Of course you can't tell position and momentum at the same time. The only way to check is to hit the particle with a proton, and that's going to change the results.").

That's a wrong understanding, but a less wrong understanding than "It's classical physics all the way down."

Replies from: orthonormal, Vaniver
comment by orthonormal · 2013-04-02T04:05:34.032Z · LW(p) · GW(p)

On a conceptual level, is there more to QM than the Uncertainty Principle and Wave-Particle Duality?

Yes. Very yes. There are several different ways to get at that next conceptual level (matrix mechanics, the behavior of the Schrödinger equation, configuration spaces, Hamiltonian and Lagrangian mechanics, to name ones that I know at least a little about), but qualitative descriptions of the Uncertainty Principle, Schrödinger's Cat, Wave-Particle Duality, and the Measurement Problem do not get you to that level.

Rejoice—the reality of quantum mechanics is way more awesome than you think it is, and you can find out about it!

Replies from: TimS
comment by TimS · 2013-04-02T15:01:33.488Z · LW(p) · GW(p)

Let me rephrase: I'm sure there is more to cutting edge QM than that which I understand (or even have heard of). Is any of that necessary to engage with the philosophy-of-science questions raised by the end of the Sequence, such as Science Doesn't Trust Your Rationality?

From a writing point of view, some scientific controversy needed to be introduced to motivate the later discussion - and Eliezer choose QM. As examples go, it has advantages:

(1) QM is cutting edge - you can't just go to Wikipedia to figure out who won. EY could have written a Lamarckian / Darwinian evolution sequence with similar concluding essays, but indisputably knowing who was right would slant how the philosophy-of-science point would be interpreted.
(2) A non-expert should recognize that their intuitions are hopelessly misleading when dealing with QM, opening them to serious consideration of the new-to-them philosophy-of-science position EY articulates.

But let's not confuse the benefits of the motivating example with arguing that there is philosophy-of-science benefit in writing an understandable description of QM.

In other words, if the essays in the sequence after and including The Failures of Eld Science were omitted from the Sequence, it wouldn't belong on LessWrong.

comment by Vaniver · 2013-04-01T19:21:40.743Z · LW(p) · GW(p)

On a conceptual level, is there more to QM than the Uncertainty Principle and Wave-Particle Duality?

A deeper, more natural way to express both is "wavefunction reality," which also incorporates some of the more exotic effects that come from using complex numbers. (The Uncertainty Principle should also be called the "uncertainty consequence," since it's a simple derivation from how the position and momentum operators act on wavefunctions.)

(I haven't read DWLM, so I can't comment on its quality.)
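(For readers who want the one-line version of that derivation claim, the sketch below is just the standard Robertson uncertainty relation from any QM textbook; nothing in it is specific to the sequence or to this comment.)

```latex
% Robertson uncertainty relation: for any observables A and B and state |psi>,
%   sigma_A * sigma_B >= (1/2) |<[A,B]>|.
% Taking A = x (position) and B = p (momentum), the canonical commutator
%   [x, p] = i*hbar
% immediately gives the familiar Heisenberg bound:
\sigma_x \,\sigma_p \;\ge\; \tfrac{1}{2}\,\bigl|\langle [\hat{x},\hat{p}] \rangle\bigr| \;=\; \frac{\hbar}{2}
```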

comment by Michelle_Z · 2013-04-01T23:46:41.678Z · LW(p) · GW(p)

If you want to learn things/explore what you want to do with your life, take a few varied courses at Coursera.

comment by beoShaffer · 2013-04-01T04:26:05.524Z · LW(p) · GW(p)

Hi, Laplante. Why do you want to enter psychology/neuroscience/cognitive science? I ask this as someone who is about to graduate with a double major in psychology/computer science and is almost certain to go into computer science as my career.

comment by gothgirl420666 · 2013-04-05T00:15:43.018Z · LW(p) · GW(p)

I'm a male senior in high school. I found this site in November or so, and started reading the sequences voraciously.

I feel like I might be a somewhat atypical LessWrong reader. For one, I'm on the young side. Also, if you saw me and talked to me, you would probably not guess that I was a "rationalist" from the way I act/dress but, I don't know, perhaps you might. When I first found this website, I was pretty sure I wanted to be an art major, now I'm pretty sure I want to be an art/comp sci double major and go into indie game development (correlation may or may not imply causation). I also love rap music (and not the "good" kind like Talib Kweli) and I read most of the sequences while listening to Lil Wayne, Lil B, Gucci Mane, Future, Young Jeezy, etc. I occasionally record my own terrible rap songs with my friends in my friend's basement. Before finding this site, the word "rational" had powerful negative affect around it. Science was far and away my least favorite subject in school. I have absolutely no interest at the moment in learning any science or anything about science, except for maybe neuroscience, and maybe metaphysics. I've always found the humanities more interesting, although I do enjoy some abstract math stuff. I'm somewhat of an emotional Luddite - whenever a new technology like Google Glass or something comes out I groan and I think about all the ways it's going to further detach people from reality. Transhumanism was disgusting to me before I found this site, while reading the sequences I started to buy into the philosophy, now a few months after reading the sequences for the first time I rationally know it is a very very good thing but still emotionally find it a little unappealing.

After finding this site, I have gone from having a vaguely confused worldview to completely "buying into" most of the philosophy espoused here and on other sites in the rationalist-sphere such as Overcoming Bias, blogs of top contributors, etc. (I'm not a racist yet though), and constantly thinking throughout my day about things like utility functions, sunk cost fallacies, mind projection fallacy, etc. I feel like finding this website has immeasurably improved my life, which I know might be a weird thing to say, but I do think this is true. First of all, my thinking is so much clearer, and moral/philosophical/political questions that seemed like a paradox before now seem to have obvious solutions. More importantly, after being inspired by stuff like The Science of Winning at Life, I now spend several hours a day on self-improvement projects, which I never would have thought to do without first becoming a rationalist. This community also led me to vipassana meditation, the practice of which I think has improved my life so far. I feel like this new focus on rational thinking and self-improvement will only continue to pay dividends in the future, as it's only been a few months since I developed this new attitude towards life. It may be overly optimistic, but I really do see finding this site and becoming a rationalist as a major turning point in my life and I'm very grateful to Eliezer and co. for revealing to me the secrets of the universe.

Replies from: None, someonewrongonthenet, Nisan, MugaSofer
comment by [deleted] · 2013-04-05T01:02:16.980Z · LW(p) · GW(p)

gothgirl420666

I'm a male senior in high school.

lulz. You have my attention.

You sound like quite an intelligent and awesome person. (bad rap, art, rationality. only an interesting person could have such a nonstandard combination of interests. Boring people come prepackaged...)

Glad to have you around.

(I'm not a racist yet though)

It's only a matter of time ;)

I feel like finding this website has immeasurably improved my life ... and moral/philosophical/political questions that seemed like a paradox before now seem to have obvious solutions.

I remember that feeling. I'm more skeptical now, but I can't help but notice more awesomeness in my life due to LW. It really is quite cool isn't it?

spend several hours a day on self-improvement projects

This is the part that's been elusive to me. What kind of things are you doing? How do you know you are actually getting benefits and not just producing that "this is awesome" feeling which unfortunately often gets detached from reality?

becoming a rationalist.

keep your identity small.

Where do you live? Do you attend meetups?

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-04-05T12:39:26.468Z · LW(p) · GW(p)

You sound like quite an intelligent and awesome person. (bad rap, art, rationality. only an interesting person could have such a nonstandard combination of interests. Boring people come prepackaged...)

Thank you :)

This is the part that's been elusive to me. What kind of things are you doing? How do you know you are actually getting benefits and not just producing that "this is awesome" feeling which unfortunately often gets detached from reality?

I guess essentially what I do is try to read self-help stuff. I try to spend half my "work time", so to speak, doing this, and half working on creative projects. I've read both books and assorted stuff on the internet. My goal for April is to read a predetermined list of six self-help books. I'm currently on track for this goal.

So far I've read

  • Part of the massive tome that is Psychological Self Help by Clayton Tucker-Ladd
  • Success - How We Can Reach Our Goals by Heidi Halverson
  • How to Talk to Anyone by Leil Lowndes
  • 59 Seconds by Richard Wiseman
  • Thinking Things Done by PJ Eby
  • the first 300 pages of Feeling Good by David Burns, the last 200 seem to be mostly about the chemical nature of depression and have little practical value, so I'm saving them for later

If meditation books count

  • Mindfulness in Plain English by Henepola Gunaratana
  • most of Mastering the Core Teachings of the Buddha by Daniel Ingram

I also have been keeping a diary, which is something I've wanted to get in the habit of all my life but have never been able to do. Every day, in addition to summarizing the day's events, I rate my happiness out of ten, my productivity out of ten, and speculate on how I can do better.

I've only been keeping the diary a month, which is too small of a sample size. However, during this time, I had three weeks off for spring break, and I told myself that I would work as much as I could on self-improvement and personal projects. I ended up not really getting that much done, unfortunately. However, I managed to put in a median of... probably about five hours every day, and more importantly, I was in a fantastic mood the whole break. It might even have been the best mood I've been in for an extended time in the last few years. In the past, every time I have had a break from school, I ended up in a depressed, lonely, lethargic state, where I surfed the internet for hours on end, in which I paradoxically want to go back to school knowing that as soon as I do, I'll want to go back on break. The fact that I avoided this state for the first time I can remember since middle school is a major improvement for me. Additionally, the fact that I have managed to keep up the habit of diary-writing and meditating for a month so far is an achievement, knowing my past.

Also, even though I found How to Talk to Anyone mostly useless (it's written in a very white-collar, "how to network with the big winners" mindset that doesn't apply to my life), the one major Obvious In Retrospect thing I got from it was that in general I should never complain or criticize anyone. I used to think I was charmingly cynical. Since finishing it about four days ago, I have applied this advice, and I think, although it's very hard to tell, that I have made a person who previously harbored dislike for me view me as someone pleasant to be around. Only one data point, but still.

I will admit that it is very possible that I am merely cultivating the "this is awesome" feeling. However, if reading scientifically minded self-help books isn't the solution, then what could possibly be? Meditation, but then what if that turns out to be a sham too? Therefore, I feel like it's rational to at least try the tactics that seem to have the highest chance of success before concluding that self-improvement is hopeless. Plus, I enjoy doing it.

Where do you live? Do you attend meetups?

I live in Columbus, OH, but I go to boarding school in a rural area. I will probably go to college in St. Louis next year. If there's ever a meetup nearby me, I would love to go.

Replies from: None
comment by [deleted] · 2013-04-05T15:00:53.975Z · LW(p) · GW(p)

Columbus, OH

I think you need to talk to daenerys, IIRC, she runs the Ohio stuff.

if reading scientifically minded self-help books isn't the solution, then what could possibly be?

Actually doing, for one, though it sounds like you're doing that too.

that doesn't apply to my life

yet. Some day you will want to take over the world, and then you will need to talk to big winners.

I ended up not really getting that much done, unfortunately. However, I managed to put in a median of... probably about five hours every day

I've had this problem, too (I've got so much free time, why is it all getting pissed away?). Have you tried beeminder? I cannot overstate how much that site is just conscientiousness in a can, so to speak.

So far I've read

Thanks for the list. A variety of evidence is making me want to check out the self-help community more closely.

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-04-05T17:09:03.351Z · LW(p) · GW(p)

Actually doing, for one, though it sounds like you're doing that too.

I have yet to read a self-help book that doesn't emphatically state "If you do not take care to apply these principles as much as you can in your daily life, you will not gain anything from reading this book." So, yeah, I agree, and by "reading self-help" I mean "reading self-help and applying the knowledge".

Have you tried beeminder? I cannot overstate how much that site is just conscientiousness in a can, so to speak.

I've seen it, and checked it out a little, but I can't think of any way to quantify the stuff that I have problems getting done. Also I wish there was an option to donate money to charity, but I guess they have to make money somehow.

comment by someonewrongonthenet · 2013-04-08T02:51:21.031Z · LW(p) · GW(p)

I have gone from having a vaguely confused worldview to completely "buying into" most of the philosophy espoused here and on on other sites in the rationalist-sphere such as Overcoming Bias, blogs of top contributors, etc. (I'm not a racist yet though)

I have yet to see this. Which major LW contributor is advocating racism, and where can I read about it?

Replies from: gothgirl420666, TheOtherDave, None
comment by gothgirl420666 · 2013-04-08T18:30:13.848Z · LW(p) · GW(p)

I'm sorry, I can't really remember any specific links to discussions, and I don't really know exactly who believes in what ideas, but I feel like there are a lot of people here, and especially people who show up in the comments, who believe that certain races are inherently more or less intelligent/violent/whatever on average than others. I specifically remember nyan_sandwich saying that he believes this, calling himself a "proto-racist" but that's the only example I can recall.

The "reactionary" philosophy is discussed a lot here too, and I feel like most people who subscribe to this philosophy are racist. Mencius Moldbug is the biggest name in this, I believe. Also I've seen a lot of links to this site http://isteve.blogspot.com/ which seems to basically be arguing in favor of racism. This blog post http://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/ contains a discussion of these issues.

Replies from: Nornagest, Kawoomba
comment by Nornagest · 2013-04-08T19:55:59.636Z · LW(p) · GW(p)

The one basically follows from the other, I think. This isn't a reactionary site by any means; the last poll showed single-digit support for the philosophy here, if it's fair to consider it a political philosophy exclusive with liberalism, libertarianism, and/or conservatism. However, neoreaction/Moldbuggery gets a less hostile reception here than it does on most non-reactionary sites, probably because it's an intensely contrarian philosophy and LW seems to have a cultural fondness for clever contrarians, and we do have several vocal reactionaries among our commentariat. Among them, perhaps unfortunately, are most of the people talking about race.

It's also pretty hard to dissociate neoreaction from... let's say "certain hypotheses concerning race", since "racism" is too slippery and value-laden a term and most of the alternatives are too euphemistic. The reasons for this seem somewhat complicated, but I think we can trace a good chunk of them to just how much of a taboo race is among what Moldbug calls the Cathedral; if your basic theory is that there's this vast formless cultural force shaping what everyone can and can't talk about without being branded monstrous, it looks a little silly if that force's greatest bugbear turns out to be right after all.

(There do seem to be a few people who gravitate to neoreaction as an intellectual framework that justifies preexisting racism, but I don't think Moldbug -- or most of the neoreactionary commentators here -- fall into that category. I usually start favoring this theory when someone seems to be dwelling on race to the exclusion of even other facets of neoreaction.)

comment by Kawoomba · 2013-04-08T18:35:52.172Z · LW(p) · GW(p)

If someone were to correctly point out genetic differences between groups (let's assume correctness as a hypothetical), would that be - in your opinion - 1) racist and reprehensible, 2) racist but not reprehensible, or (in the hypothetical) 3) not racist?

Would your opinion differ if those genetic differences were relating to a) IQ, or b) lactose intolerance?

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-04-08T19:42:44.780Z · LW(p) · GW(p)

Yes to the second question, in that I would give the answer of 2 for A and 3 for B.

Racism has at least three definitions colloquially that I can think of

  • 1: A belief that there is a meaningful way to categorize human beings into races, and that certain races have more or less desirable characteristics than others. This is the definition that Wikipedia uses. Not that many educated people are racist according to this definition, I think.

  • 2: The tendency to jump to conclusions about people based on their skin color, which can manifest as a consequence of racism-1, or unconsciously believing in racism-1. Pretty much everyone is racist to some extent according to this definition.

  • 3: Contempt or dislike of people based on their skin color, i.e. "I hate Asians". You could further divide this into consciously and unconsciously harboring these beliefs if you wanted.

In the sexism debate, these three definitions are sort of given separate names: "belief in differences between the sexes", "sexism", and "misogyny" respectively.

Racism-3 seems to be pretty clearly evil, and racism-2 causes lots of suffering, but racism-1 basically by definition cannot be evil if it is a true belief and you abide by the Litany of Tarski or whatever. But because they have the same name, it gets confusing.

Some people might object to calling racism-1 racism, and instead will decide to call it "human biodiversity" or "race realism". I think this is bullshit. Just fucking call it what it is. Own up to your beliefs.

(I am not racist-1, for the record.)

Replies from: wedrifid, None, CCC, MugaSofer, army1987, Osiris
comment by wedrifid · 2013-04-08T20:22:19.993Z · LW(p) · GW(p)

Some people might object to calling racism-1 racism, and instead will decide to call it "human biodiversity" or "race realism". I think this is bullshit. Just fucking call it what it is.

"What it fucking is" is a straw man. ie. "and that certain races have more or less desirable characteristics than others" is not what the people you are disparaging are likely to say, for all that it is vaguely related.

Own up to your beliefs.

Seeing this exhortation used to try to shame people into accepting your caricature as their own position fills me with the same sort of disgust and contempt that you have for racism. Failure to "own up" and profess their actual beliefs is approximately the opposite of the failure mode they are engaging in (that of not keeping their mouth shut when socially expedient). In much the same way suicide bombers are not cowards.

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-04-09T00:15:57.353Z · LW(p) · GW(p)

According to Wikipedia, "racism is usually defined as views, practices and actions reflecting the belief that humanity is divided into distinct biological groups called races and that members of a certain race share certain attributes which make that group as a whole less desirable, more desirable, inferior or superior."

This definition appears to exactly match the beliefs of the people I am talking about. I guess it's all in how you define superior, inferior, more desirable, etc. But most of the discourse revolves around intelligence, which is a pretty important trait, and I don't think these people believe that black people, for example, have traits that make up for their supposed lack of intelligence, or that Asians have flaws that make up for their supposed above-average intelligence (and no, dick size doesn't count). In particular, these people seem to believe that an innate lack of intelligence is to blame for the fact that so many African countries are in total chaos, and unless you believe in a soul or something, it's hard to imagine that a race physically incapable of sustaining civilization is not in some meaningful way "inferior".

If you hold a belief that is described with a name that has negative connotations, you have two options. You can either hide behind some sort of euphemism, or you can just come out and say "yes I do believe that, and I am proud of it". I think the second choice is much more noble, and if I were to adopt these beliefs, I would just go ahead and describe myself as a racist. It's not really a major issue though and I probably shouldn't have used the word "fucking" in my previous post.

But anyway, since the term is completely accurate, the only reason I can think of to not call the people I'm describing racists is because it might offend them, which is deeply ironic.

Replies from: Viliam_Bur, army1987, khafra, army1987
comment by Viliam_Bur · 2013-04-14T09:31:48.537Z · LW(p) · GW(p)

If you hold a belief that is described with a name that has negative connotations, you have two options. You can either hide behind some sort of euphemism, or you can just come out and say "yes I do believe that, and I am proud of it".

There is also a third option: Keep your identity small and pick your battles. Just because society happens to disagree with you on one specific topic, that is no reason to make that one topic central to your life, and to let all other people define you by that one topic regardless of what other traits or abilities you have -- which will probably happen if you are open about that disagreement.

Imagine that you live in a society where people believe that 2+2=5, and they also believe that anyone who says 2+2=4 is an evil person and must be killed. (There seems to be a good reason for that. A hundred years ago there was an evil robot who destroyed half of the planet, and it is known that the robot believed that 2+2=4. Because this is the best-known fact about the robot, people concluded that believing that 2+2=4 must be the source of all evil, and needs to be eradicated from society. We don't want any more planetary destruction, do we?) What are your choices? You could say that 2+2=4 and get killed. Or you could say that 2+2=4.999, avoid being killed, only get a few suspicious looks and be rejected at a few job interviews; and hope that if people keep doing that long enough, at one moment it will become acceptable to say that 2+2=4.9, or even 4.5, and perhaps one day no one will be killed for saying that it equals 4.

The third option is to enjoy food and wine, and refuse to comment publicly on how much 2+2 is. Perhaps have a few trusted friends you can discuss maths with.

Replies from: gothgirl420666, army1987
comment by gothgirl420666 · 2013-04-14T14:03:42.796Z · LW(p) · GW(p)

Okay, but all I'm saying is that if you do decide to talk about your beliefs, you should use a more honest term for your belief system. I definitely agree with you that racists should not go around talking publicly about their beliefs! You seem to have inferred something from my post that I didn't mean, sorry about that.

Replies from: None
comment by [deleted] · 2013-04-27T06:42:00.570Z · LW(p) · GW(p)

Okay, but all I'm saying is that if you do decide to talk about your beliefs, you should use a more honest term for your belief system.

Interesting. I'm fond of using a negative-connotation framing of myself and my beliefs, but I wouldn't call it "honest".

In general, socially admitted "beliefs" are actually actions. I see no reason to optimize them for anything other than effectiveness.

(LW is different. There is enough openness here and epistemic rationality norms that it's actually a good idea to share your beliefs and get criticism.)

comment by A1987dM (army1987) · 2013-04-27T11:21:57.695Z · LW(p) · GW(p)

Of course, what I usually do is say “2+2>3” when I want to sound politically correct and “2+2<6” when I want to sound meta-contrarian. (Translating back from the metaphor, those would be “for all we know, achievement gaps may be at least partly caused by nurture” and “for all we know, achievement gaps may be at least partly caused by nature” respectively.)

comment by A1987dM (army1987) · 2013-04-27T11:18:15.491Z · LW(p) · GW(p)

According to Wikipedia, "racism is usually defined as views, practices and actions reflecting the belief that humanity is divided into distinct biological groups called races and that members of a certain race share certain attributes which make that group as a whole less desirable, more desirable, inferior or superior."

I think that “group as a whole” is the key phrase. Men are taller than women on average, and being tall is usually considered desirable; is pointing that out sexist? I'd say that until you treat that fact as a reason to consider a gender “as a whole” more desirable than another, it isn't.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-27T11:40:37.374Z · LW(p) · GW(p)

Most people do consider a gender as a whole more desirable than another ... (and can also supply some "facts" on which that preference is based).

Replies from: Document, army1987
comment by Document · 2013-04-27T11:50:45.704Z · LW(p) · GW(p)

Possibly related: Overcoming Bias : Mate Racism.

comment by A1987dM (army1987) · 2013-04-27T12:15:19.837Z · LW(p) · GW(p)

Doesn't contradict what I said, because I never claimed that most people aren't sexist. (And BTW, I'm not sure whether what you mean by “desirable” is what was meant in WP's definition of racism. I'm not usually sexually attracted to males or Asians, but I consider this a fact about me, not about males or Asians, and I don't consider myself sexist or racist for that.)

(EDIT: to be more pedantic, one could say that the fact that I'm normally only attracted to people with characteristics X, Y, and Z is a fact about me and that the fact that males/Asians seldom have characteristics X, Y and Z is a fact about them, though.)

comment by khafra · 2013-04-09T18:26:18.538Z · LW(p) · GW(p)

the only reason I can think of to not call the people I'm describing racists is because it might offend them

If they believed you, consistency bias might make them lean more toward racist-2 and racist-3. Or it might shame them into lowering their belief in the entire reactionary memeplex, which would be epistemically sub-optimal. It might lower their status, or even their earning ability, if justified accusations of racism became associated with their offline identities. There are many ways leveraging emotionally loaded terms can have negative effects.

comment by A1987dM (army1987) · 2013-04-14T20:50:38.894Z · LW(p) · GW(p)

(and no, dick size doesn't count)

I LOL'ed at that.

comment by [deleted] · 2013-04-27T06:30:28.516Z · LW(p) · GW(p)

(I am not racist-1, for the record.)

Why not?

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-04-29T01:13:15.857Z · LW(p) · GW(p)

Are you allowed to ask "why not"?

Isn't this one of those situations where the burden of proof lies on the claim?

Replies from: None
comment by [deleted] · 2013-04-29T23:19:01.493Z · LW(p) · GW(p)

It is known that human populations evolved separately for at least 15,000 years, facing different selection pressures that have produced many differences in physiology, appearance, size, susceptibility to diseases, even which foods are edible. It would take some serious reasoning to postulate that these differences are magically limited to things that don't affect people's abilities and quality of life.

It is generally accepted that Ethiopians (or is it Kenyans?) are good at marathons, and that Ashkenazi Jews have higher average IQ scores and win more Nobel Prizes. There are two well-accepted racial differences in desirable traits right there, so we know it's possible. Unless there's some way to explain Ashkenazi genius that removes the correlation with race?

Further, there's quite a variety of IQ surveys, life-outcome data, and other such measures that correlate well with one another, hold up under various controls, and correlate quite mysteriously with race.

So there's a-priori reason to believe in racial differences, and such differences are in fact observed.

If I left it at this, what would your response be? Would it be to dispute that such differences are innate and caused by genetics, as opposed to cultural forces? Forgive me if that's not your response; it's usually a good bet. If that is your response, note that the conversation is now about the details of the correlation, not whether it exists.

That is, the whether question is resolved in favor of racism. The open question is now how:

  • It could be genetic.
  • It could be cultural.
  • It could be imposed by expectations.

But whether some kid is smart because his ancestors are smart, or because he caught a memetic smartness in childhood, or because society tells him he should be smart because of his skin color, is irrelevant to someone who is simply wondering if a sample of kids who have the same background will be smart or not on average.

So why reject the above racism-1; that different races have different prevalence of desirable traits, so that learning about race can tell you about such traits? Racial differences are an observation to be explained, not even a question that could go either way.

comment by CCC · 2013-04-14T09:50:51.337Z · LW(p) · GW(p)

As far as racism-1 goes, I am told that high levels of melanin in the skin lead to an immunity to sunburn. So black people can't get sunburnt - that's a desirable characteristic, to my mind. (There are still negative effects - such as a headache - from being in the sun too long. Just not sunburn.)

Replies from: Zaine, army1987, Morendil
comment by Zaine · 2013-04-27T03:07:57.743Z · LW(p) · GW(p)

Science:

Human skin is repeatedly exposed to ultraviolet radiation (UVR) that influences the function and survival of many cell types and is regarded as the main causative factor in the induction of skin cancer. It has been traditionally believed that skin pigmentation is the most important photoprotective factor, since melanin, besides functioning as a broadband UV absorbent, has antioxidant and radical scavenging properties. Besides, many epidemiological studies have shown a lower incidence for skin cancer in individuals with darker skin compared to those with fair skin. Skin pigmentation is of great cultural and cosmetic importance, yet the role of melanin in photoprotection is still controversial. This article outlines the major acute and chronic effects of UV radiation on human skin, the properties of melanin, the regulation of pigmentation and its effect on skin cancer prevention.

comment by A1987dM (army1987) · 2013-04-14T20:18:41.464Z · LW(p) · GW(p)

Race isn't exactly the same as skin colour. I wouldn't expect Colin Powell to be much more resistant to sunburn than myself.

comment by Morendil · 2013-04-14T16:52:30.594Z · LW(p) · GW(p)

I am told that high levels of melanin in the skin lead to an immunity to sunburn

Did you fact-check that?

Replies from: Kawoomba
comment by Kawoomba · 2013-04-14T18:00:15.590Z · LW(p) · GW(p)

Sunburn results when the amount of exposure to the sun or other ultraviolet light source exceeds the ability of the body's protective pigment, melanin, to protect the skin. Sunburn in a very light-skinned person may occur in less than 15 minutes of midday sun exposure, while a dark-skinned person may tolerate the same exposure for hours.

Source

Replies from: Morendil
comment by Morendil · 2013-04-14T18:30:18.097Z · LW(p) · GW(p)

That doesn't say "immunity to sunburn" (it also doesn't say much about "a meaningful way to categorize human beings into races", since the variable "levels of melanin in the skin" screens off the variable "race").

Replies from: CCC, Kawoomba
comment by CCC · 2013-04-14T19:08:50.908Z · LW(p) · GW(p)

Fact-checking, via sources similar to Kawoomba's, leads to the milder claim that melanin in the skin merely provides protection against sunburn, and not immunity. Levels of melanin in the skin are very strongly correlated with race; though it is not strictly equivalent (albinism is possible among black people) it is reasonable to say that black people, in general, are more resistant to sunburn than white people.

Replies from: Morendil
comment by Morendil · 2013-04-14T19:20:16.141Z · LW(p) · GW(p)

Levels of melanin in the skin are very strongly correlated with race

This smacks of circular reasoning - for a correlation to be demonstrated, you'd have to know that "there is a meaningful way to categorize human beings into races " to start with. So, this too needs a citation.

There is a largish argumentative gap from "some genes confer a desirable resilience to sunburn" (possibly conferring some less desirable traits at the same time) to "some races enjoy unalloyed advantages over others by virtue of heredity".

Replies from: army1987, CCC
comment by A1987dM (army1987) · 2013-04-14T20:21:40.597Z · LW(p) · GW(p)

Levels of melanin in the skin are very strongly correlated with race

This smacks of circular reasoning - for a correlation to be demonstrated, you'd have to know that "there is a meaningful way to categorize human beings into races " to start with. So, this too needs a citation.

What about this: levels of melanin in the skin are very strongly correlated with the geographic provenance of one's ancestors in the late 15th century?

Replies from: Morendil
comment by Morendil · 2013-04-14T21:01:06.769Z · LW(p) · GW(p)

Somewhat more specific; still not enough to support a coherent notion of "race", as geographic latitude becomes a confounder. For instance, there's mounting evidence that "similar skin colors can result from convergent adaptation rather than from genetic relatedness" (from WP).

Classifiers such as "black", "white", and so on do not carve nature at its joints.

Replies from: army1987
comment by A1987dM (army1987) · 2013-04-20T17:55:07.628Z · LW(p) · GW(p)

"similar skin colors can result from convergent adaptation rather than from genetic relatedness"

Well... duh. I don't think anyone would have expected that the reason sub-Saharan Africans, south Indians, and Australian Aborigines are all dark-skinned, or Europeans, Ainu and Inuit are all pale-skinned, is that they're closely related.

Classifiers such as "black", "white", and so on do not carve nature at its joints.

Those labels aren't intended to be literal. Colin Powell is still generally considered “black”, despite being pale-ish.

comment by CCC · 2013-04-14T19:44:16.792Z · LW(p) · GW(p)

This smacks of circular reasoning - for a correlation to be demonstrated, you'd have to know that "there is a meaningful way to categorize human beings into races " to start with. So, this too needs a citation.

Well, there have been such categorisations in the past. Consider, for example, Apartheid - the entire legal system enshrined under that name depended on a categorisation along racial lines. However, it was far from a perfect classification; to quote from the linked section of the article:

The Apartheid bureaucracy devised complex (and often arbitrary) criteria at the time that the Population Registration Act was implemented to determine who was Coloured. ... Different members of the same family found themselves in different race groups.

(What was then done with that classification was racism in an extremely negative sense, a very conscious and institutionalised form of racism-3; however, the point of the citation is merely that there were laws laid down that served as a racial categorisation, however flawed).

There is a largish argumentative gap from "some genes confer a desirable resilience to sunburn" (possibly conferring some less desirable traits at the same time) to "some races enjoy unalloyed advantages over others by virtue of heredity".

Oh yes. Agreed. One very minor desirable feature does not make an unalloyed advantage, especially when paired with an unknown number of other traits, which may be positive or negative.

comment by Kawoomba · 2013-04-14T18:50:04.300Z · LW(p) · GW(p)

I was only responding to what you quoted, which is that "high levels of melanin in the skin lead to an immunity to sunburn". Immunity is - as could be expected - a poor choice of words and strictly speaking wrong, but "high degree of resilience / protection" would be valid.

Replies from: Morendil
comment by Morendil · 2013-04-14T19:06:02.407Z · LW(p) · GW(p)

a poor choice of words and strictly speaking wrong

That's the point of a fact-check - saying things that are strictly speaking true, rather than things that are strictly speaking wrong.

If you'll forgive me for quoting chapter and verse, "In argument strive for exact honesty, for the sake of others and also yourself: the part of yourself that distorts what you say to others also distorts your own thoughts."

Replies from: Kawoomba
comment by Kawoomba · 2013-04-14T19:13:31.924Z · LW(p) · GW(p)

One of life's crazy coincidences: at this very moment I was looking at that same page, and took a quote from it for another comment I submitted just now, before reading yours.

That aside, my "strictly speaking wrong" was, unfortunately, also strictly speaking wrong. For example, the jargon "x gene variant confers a certain immunity versus y disease" is also in good use - otherwise the word "immunity" could never be used period. Vaccinations wouldn't be described by conferring immunity, when sometimes they just limit the extent of the infection to a subclinical level. So in some sense, "immunity to sunburn" isn't even wrong, strictly speaking, just an unfortunately chosen phrase in a forum such as this (which always checks for boundary cases and not for "true in a more general sense", a habit I myself indulge in too much).

Replies from: army1987
comment by A1987dM (army1987) · 2013-04-14T20:29:17.108Z · LW(p) · GW(p)

I'd normally agree, but in this case CCC explicitly said “black people can't get sunburnt”.

OTOH, I only get sunburnt if I do something deliberate such as sunbathing for an hour around noon in July in Italy, and even then it's relatively mild, and I'm not quite black; I'd expect darker-skinned people to be even more resistant than that. So I'd say that whereas black people can get sunburnt in principle, for all practical purposes they can't. This is still a hell of an advantage compared to the pale northern Europeans I knew who got sunburned by walking around in November in Ireland.

comment by MugaSofer · 2013-04-08T21:57:20.159Z · LW(p) · GW(p)

Some people might object to calling racism-1 racism, and instead will decide to call it "human biodiversity" or "race realism". I think this is bullshit. Just fucking call it what it is. Own up to your beliefs.

Well, if you think races are a real thing, then calling this belief race realism seems fairly clear, and helps distinguish your belief from type-3 racism. Human biodiversity implies something more like support for eugenics, to me, since you're saying that humans are diverse, not that race is a functional Schelling point.

Replies from: Nornagest
comment by Nornagest · 2013-04-08T22:18:58.432Z · LW(p) · GW(p)

Stripped of connotations, "race realism" to me implies the belief that empirical clusters exist within the space of human diversity and that they map to the traditional racial classifications, but not necessarily that those clusters affect intellectual or ethical dimensions to any significant degree. I'm not sure if there's an non-euphemistic value-neutral term for racism-1 in the ancestor's typology, but that isn't it.

(The first thing that comes to mind is "scientific racism", which I'd happily use for ideas like this in a 19th- or early 20th-century context, but I have qualms about using it in a present-day context.)

Replies from: MugaSofer
comment by MugaSofer · 2013-04-09T13:00:03.115Z · LW(p) · GW(p)

Stripped of connotations, "race realism" to me implies the belief that empirical clusters exist within the space of human diversity and that they map to the traditional racial classifications, but not necessarily that those clusters affect intellectual or ethical dimensions to any significant degree.

Ah, good point.

comment by A1987dM (army1987) · 2013-04-27T10:35:57.724Z · LW(p) · GW(p)

But lactose intolerance arguably is a less desirable characteristic than lactose tolerance! :-)

comment by Osiris · 2013-04-27T05:31:55.263Z · LW(p) · GW(p)

I share considerably more of my heritage with Asians than I do with Caucasians. However, I do not have the same coloration.

So, if one is racist-1, how would one treat me? Am I white, for appearing white? Am I Asian, for the coloration of the overwhelming majority of my ancestors? In other words, what makes race? My genetics, or my skin? If it is my skin, then it would appear race is nothing more than a bit of culture, with no real advantages or disadvantages attached save those given by appearance.

For the record, I consider myself of no race save human, and expect others to see me as a human being.

Replies from: None
comment by [deleted] · 2013-04-27T06:24:37.964Z · LW(p) · GW(p)

So, if one is racist-1, how would one treat me?

Racist-1 reporting in. Believing that ethnicity is correlated with desirable or undesirable traits does not in itself warrant any particular kind of behavior. So how would I treat you? Like a person. If I had more evidence about you (your appearance, time spent with you, your interests, your abilities, etc), that would become more refined.

Am I white, for appearing white? Am I Asian, for the overwhelming number of my ancestors' coloration? In other words, what makes race? My genetics, or my skin? If it is my skin, then it would appear race is nothing more than a bit of culture, with no real advantages or disadvantages attached save those given by appearance.

Taboo "race". Categories aren't really meaningful in edge cases.

You are who you are, and there are many facts entangled with who you are.

expect others to see me as a human being.

Does this have any actual meaning? How does it square with the virtue of narrowness (a lot more can be said about this particular semi-Asian LWer called Osiris than can be said about "a human")?

How do you exclude race and stuff from what we are allowed to consider, without excluding things like your name and personality?

comment by TheOtherDave · 2013-04-08T18:58:55.255Z · LW(p) · GW(p)

If it helps, the LW user I most consistently associate with the "certain races are inherently more or less intelligent/violent/whatever on average than others" position (as gothgirl420666 says below) is Eugine Nier. A quick Google search ("site:http://lesswrong.com Eugine_Nier rac intelligence") turns up comments along the lines of: just about any proxy measure of intelligence, from SAT scores, to results of IQ tests, to crime rates, will correlate with race, for example.

That said, were someone to describe Eugine Nier or their positions as "racist," I suspect they would respond that "racist" means lots of different things to different people and is not a useful descriptor.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-04-08T20:16:20.536Z · LW(p) · GW(p)

You and I both participated in a thread with Eugine Nier about the dynamics involved in talking about race and sex differences two weeks ago - link - although we didn't debate the issue itself. (I'm not sure, so I'll ask - are we discouraged from debating so-called "mindkilling" topics here?)

Anyway, I had assumed he was an outlier on Less Wrong, and that most folks here would take an agnostic-leaning-not-racist stance on the issue.

I wonder if we can cram this question into the next demographic poll?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-08T20:48:29.157Z · LW(p) · GW(p)

There is a strong local convention against discussing topics for which certain positions are strongly enough affiliated with tribal identities that the identity-signalling aspects of arguments for/against those positions can easily interfere with the evidence-exploring aspects of those arguments. (Colloquially, "mindkilling" topics. as you say.)

That said, there's also a strong local convention against refraining from discussing topics just because such identity-signalling aspects exist.

So mostly, the tradition is we argue about what the tradition is.

For my own part, I prefer to avoid partisan-political discussions (sometimes "Blue/Green discussions" colloquially) here, but I don't mind policy-political discussions. In the US, race is more typically the former than the latter.

I had assumed he was an outlier on Less Wrong,

I certainly agree that he's an outlier.

that most folks here would take an agnostic-leaning-not-racist stance on the issue.

There are interpretations of this sentence I would agree with, and interpretations I would not agree with, and I would expect the exercise of disentangling the various interpretations to be difficult.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-04-08T21:20:16.099Z · LW(p) · GW(p)

agnostic-leaning-not-racist

Analogous to "agnostic atheist". A person who, in absence of compelling evidence for or against the claim that racial differences in intelligence are genetic in origin, prefers to refrain from opining on the issue. If pressed for an answer such a person would guess that racial differences are probably not genetic, because they judge this to be the more parsimonious answer.

Well...on one hand, mindkilling, strong social pressure to signal non-racism, political undertones, potentially triggering topic for people on the receiving end of racism, etc.

On the other hand, working through this practical question is a great way to learn about a variety of topics which are of interest to this forum (factors which contribute to the traits we associate with intelligence, how we test intelligence, etc).

I suppose this is largely a question of how dispassionately people can handle this sort of issue. I think an early-teenage version of me would probably have gotten defensive at the allegation that my ethnic group was genetically inferior, but I think most people on Less Wrong seem to be able to maintain a level of abstraction that keeps things from getting heated. Although, I'm not sure that people could resist the temptation to debate, rather than to contribute relevant information and update accordingly.

Replies from: Prismattic, TheOtherDave
comment by Prismattic · 2013-05-04T03:05:22.190Z · LW(p) · GW(p)

My general impression is that there are actually two different fault lines around race-related questions on Less Wrong.

One is: are there biologically determined differences in politically sensitive traits like intelligence between races? (One should note here that a) there is more to biology than genes and b) "race" is an amorphous term, and the layman's use of it based on a rough eyeballing of skin color doesn't necessarily line up well with "genetic cohort"; a desire not to have to explain these nuances over and over again is another reason for people to take the agnostic-leaning-against position.)

The second fault line I've observed is about statistical discrimination -- given an individual from a supposedly "inferior" cohort who has nevertheless provided independent bits of information about their intelligence (e.g. scored far above the mean on an IQ test), should one argue that the more individualized bits screen off whatever information one might have inferred just from their cohort, or should one privilege the cohort information and assume the individual bits of information would just regress to the mean on further investigation? (Someone who is less inclined to the former position than I am might do a better job phrasing the latter; I suspect I'm strawmanning it somewhat, but I leave it to its advocates to articulate it better.)
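For what it's worth, the "screening off" intuition in that second fault line can be made concrete with a toy conjugate-normal calculation. This is a minimal sketch with made-up numbers and a hypothetical `posterior_mean` helper; it illustrates the statistical point only and is not a model of any real population.

```python
# Toy "screening off" illustration for statistical discrimination
# (hypothetical numbers; not a claim about any real group).
#
# Assumed model:
#   ability ~ Normal(group_mean, sigma_group)          # cohort prior
#   test_score | ability ~ Normal(ability, sigma_test) # individual evidence
# The posterior mean is a precision-weighted average; as the test becomes
# more reliable (sigma_test shrinks), the cohort prior matters less and less.

def posterior_mean(group_mean, sigma_group, test_score, sigma_test):
    """Posterior mean of ability under a conjugate normal-normal model."""
    prior_precision = 1.0 / sigma_group ** 2
    data_precision = 1.0 / sigma_test ** 2
    w = data_precision / (prior_precision + data_precision)  # weight on the test
    return (1 - w) * group_mean + w * test_score

if __name__ == "__main__":
    group_mean = 100.0   # hypothetical cohort mean
    sigma_group = 15.0   # hypothetical cohort spread
    test_score = 145.0   # individual scores far above the cohort mean
    for sigma_test in (15.0, 7.5, 3.0):
        est = posterior_mean(group_mean, sigma_group, test_score, sigma_test)
        print(f"test noise sd={sigma_test:4.1f} -> posterior mean {est:6.1f}")
    # The estimate moves from ~122.5 toward ~143.3 as the test becomes more
    # reliable, i.e. the individual evidence screens off the cohort prior.
```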

comment by TheOtherDave · 2013-04-08T21:29:12.531Z · LW(p) · GW(p)

working through this practical question

Confirming: the question you're referring to is "are racial differences in intelligence genetic in origin?"

is a great way to learn about a variety of topics which are of interest to this forum (factors which contribute to the traits we associate with intelligence, how we test intelligence, etc).

It would surprise me if the differential benefits to be gained from this, relative to instead exploring some other question with fewer mindkilling, signalling, blue/green, etc. aspects, were worth the differential costs.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-04-08T21:46:17.157Z · LW(p) · GW(p)

confirmed.

and yes, I think you're right. Although in any study of intelligence, cultural differences will inevitably become involved, and someone is eventually going to bring up population-level genetic differences as a confounding factor. Preferably, it's kept as a side issue, rather than the main one.

comment by [deleted] · 2013-04-27T06:32:41.933Z · LW(p) · GW(p)

Which major LW contributor is advocating racism, and where can I read about it?

Me! In this very thread!

comment by Nisan · 2013-04-11T21:47:00.172Z · LW(p) · GW(p)

Welcome! I'm unable to read while listening to music with words in it. I wonder how universal that is.

Replies from: malcolmocean, shminux, CCC, None
comment by MalcolmOcean (malcolmocean) · 2013-04-11T22:11:03.785Z · LW(p) · GW(p)

I know of at least three possible minds for this. Pretty sure we all assumed we were typical until talking about it.

  • One friend of mine is like you, and finds music horribly distracting to reading.
  • Another friend becomes practically deaf while reading, so music is just irrelevant.
  • I, on the third hand, can sing along to songs I know, while reading. I can possibly even do this for simple songs I don't know. I would suspect this is not optimal reading from a comprehension or speed perspective, but it's a lot of fun.
comment by Shmi (shminux) · 2013-04-11T22:05:02.027Z · LW(p) · GW(p)

Pretty much the same here. I can only read when I tune out the lyrics. Well, not quite true, I can certainly read, but the content just doesn't register.

comment by CCC · 2013-04-14T09:44:37.069Z · LW(p) · GW(p)

Similarly. I find myself following along to the words in the song instead of the words on the page.

comment by [deleted] · 2013-04-13T14:43:58.194Z · LW(p) · GW(p)
comment by MugaSofer · 2013-04-05T23:27:29.436Z · LW(p) · GW(p)

Welcome to LessWrong!

I'm somewhat of an emotional Luddite - whenever a new technology like Google Glass or something comes out I groan and think about all the ways it's going to further detach people from reality. Transhumanism was disgusting to me before I found this site; while reading the sequences I started to buy into the philosophy, and now, a few months after reading the sequences for the first time, I rationally know it is a very, very good thing but still emotionally find it a little unappealing.

Interesting. If I may; what is it about technology/futurism you find so unappealing?

Also, I have to ask: why the username?

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-04-06T16:49:42.511Z · LW(p) · GW(p)

Interesting. If I may; what is it about technology/futurism you find so unappealing?

I think it would take a very long response to truly answer this, unfortunately. A lot of it has to do with having exposed myself in the past, through friends, media, and my surroundings, to hippie-ish memeplexes that sort of reinforce this view. (Right now I go to school on a dairy farm, for example.) Also, in the past I had extremely irrational views on a lot of issues, one of which was a form of neo-Luddism, and that idea is still in my brain somewhere.

Also, I have to ask: why the username?

I find it amusing, I guess. I use it on a few sites. It does sort of clash with the writing style I'm using here, I'll admit.

comment by aime15 · 2013-04-16T00:55:51.440Z · LW(p) · GW(p)

Hello, I'm E. I'll be entering university in September planning to study some subset of {math, computer science, economics}. I found Less Wrong in April 2012 through HPMoR and started seriously reading here after attending SPARC. I haven't posted because I don't think I can add too much to discussions, but reading here is certainly illuminating.

I'm interested in self-improvement. Right now, I'm trying to develop better social skills, writing skills, and work ethic. I'm also collecting some simple data from my day-to-day activities with the belief that having data will help me later. Some concrete actions I am currently taking:

  • Conditioning myself (focusing on smiling and positive thoughts) to enjoy social interaction. I don't dislike social interaction, but I'm definitely averse to talking to strangers. This aversion seems like it will hurt me long-term, so I'm trying to get rid of it.
  • Writing in a journal every night. Usually this is 200-300 words of my thoughts and summaries of the more important events that happened. I started this after noticing that I repeatedly tried and failed to recall my thoughts from a few months or years ago.
  • Setting daily schedules for myself. When I get sidetracked time seems to fly out the window, though when I'm in a state of flow I seem to be happier. Frequent reminders that I should be working seem to be reducing the amount of time I waste browsing Facebook/Reddit.
  • Data collection. Right now I'm recording my meals, amount of sleep, and severity of acne each day. I'm very open to suggestions about other things that are cheap to record but are not useless.

I've noticed that I hate when something disrupts my daily schedule. I plan out entire days and when a family or other social commitment interrupts this, I find it difficult to focus for the rest of the day. I think this is because I like rigidity. This is another thing I'm trying to de-program, in a less systematic way, by consciously thinking about being spontaneous and then doing more spontaneous things. It's hard to judge if this is working because I've been traveling a lot in the past few months, which naturally leads to more fluid/less planned days.

I've read the sequences "Mysterious Answers to Mysterious Questions" and "Map and Territory." I found this very encouraging as a lot of the material in those sequences were things I've thought about in the past, presented in a more coherent and logical manner. I read Part I of Gödel, Escher, Bach and also found a lot that aligned with my intuitions. I haven't had much time to read with college visits, school, and MOOCs taking up a lot of my time. Hopefully that will change by June.

I've been taking MOOCs from Coursera and edX since June 2012. My favorites have been Machine Learning, Networked Life, Game Theory, Principles of Economics for Scientists (all Coursera), CS188.1x, and PH207x (edX). These end up being pretty time-consuming but far more interesting and rewarding than my courses at school (an American public high school).

Some things that fall in the category of "things that seem interesting on the surface, but I don't currently have time to look at seriously/I am too lazy to look at seriously": AI, basic linguistics, logic.

Some things that fall in the category of "things that are probably important, but I'm too scared to think about seriously for sufficient time": things to do in college (including selecting classes). There's more in this list but nothing comes to mind right now, maybe because I continually punt them to the back of my mind whenever they come up.

Some things I hate about the world: documents that are not formatted nicely.

I've learned a lot from this community. I think the most important lesson I learned here was to look at things from both an outside view and an inside view. Looking forward to learning more from Less Wrong and contributing in the future.

Replies from: ModusPonies
comment by ModusPonies · 2013-04-17T14:17:01.494Z · LW(p) · GW(p)

Welcome! You sound remarkably driven.

I'll be entering university in September planning to study some subset of {math, computer science, economics}.

Math and CS are foundational fields which can be used for nearly anything, while economics past intro level is much more specialized. I'd suggest putting the least focus on economics unless/until you're sure you want to do something with it. (Warning: I am a programmer with an econ degree. I may be projecting, here.)

I'm very open to suggestions about other things that are cheap to record but are not useless.

Subjective happiness, maybe? The old "how good do you feel right now on a scale of 1-10" could be one way to quantify this.

Some things I hate about the world: documents that are not formatted nicely.

They are the worst thing.

Replies from: aime15
comment by aime15 · 2013-04-17T23:01:23.761Z · LW(p) · GW(p)

I'd suggest putting the least focus on economics unless/until you're sure you want to do something with it.

Thanks. I'll be at an engineering school that requires a focus in a social science/humanities area, so I'm planning on focusing on economics. I don't think I'd major in economics but of course this could change (especially since I know very little about the fields I mentioned).

comment by Adele_L · 2013-04-01T23:52:34.674Z · LW(p) · GW(p)

Hi everyone. I have been lurking on this site for a long time, and somewhat recently have made an account, but I still feel pretty new here. I've read most of the sequences by now, and I feel that I've learned a lot from them. I have changed myself in some small ways as a result, most notably by donating small amounts to whatever charity I feel is most effective at doing good, with the intention that I will donate much more once I am capable of doing so.

I'm currently working on a Ph.D. in Mathematics, and I am also hoping that I can steer my research activities towards things that will do good. Still not sure exactly how to do this, though.

I also had the opportunity to attend my local Less Wrong meetup, and I have to say it was quite enjoyable! I am looking forward toward future interactions with my local community.

Replies from: Pablo_Stafforini, Nisan, magfrump
comment by Pablo (Pablo_Stafforini) · 2013-04-02T02:12:39.312Z · LW(p) · GW(p)

I'm currently working on a Ph.D. in Mathematics, and I am also hoping that I can steer my research activities towards things that will do good. Still not sure exactly how to do this, though.

Hi Adele. Given what you write in your introduction, it's likely that you have already heard of this organization, but if this is not the case: you may want to check out 80,000 Hours. They provide evidence-based career advice for people that want to make a difference.

Replies from: Adele_L
comment by Adele_L · 2013-04-02T03:20:53.233Z · LW(p) · GW(p)

Thank you. I have been meaning to look into that more, so thanks for the reminder!

comment by Nisan · 2013-04-02T00:20:22.674Z · LW(p) · GW(p)

Welcome! I like your username.

EDIT: I know several people in this community who dropped out of math grad school, and most of them were happy with the decision. I'm choosing to graduate with a PhD in a useless field because I find myself in a situation where I can get one in exchange for a few months of work. I know someone who switched to algebraic statistics, which is a surprisingly useful field that involves algebraic geometry.

Replies from: John_Maxwell_IV, Adele_L, Adele_L
comment by John_Maxwell (John_Maxwell_IV) · 2013-04-02T09:01:04.832Z · LW(p) · GW(p)

I haven't looked at this issue in detail, but I seem to recall that not getting more education was one of the more common regrets among "Terman's geniuses", whoever those are. Link.

comment by Adele_L · 2013-04-02T03:37:11.585Z · LW(p) · GW(p)

I know several people in this community who dropped out of math grad school, and most of them were happy with the decision.

What is their reasoning?

Replies from: Nisan
comment by Nisan · 2013-04-02T04:03:20.502Z · LW(p) · GW(p)

I can't speak for them, but I expect it's something like this: One can make more money, do more good, have a more fun career, and have more freedom in where one lives by dropping out than by going into academia. And having a PhD when hunting for non-academic jobs is not worth spending several years as a grad student doing what one feels is non-valuable work for little pay.

You'd have to speak to someone who successfully dropped out to get more details; and of course even if all their judgments are correct, they may not be correct for you.

comment by Adele_L · 2013-04-02T00:29:21.598Z · LW(p) · GW(p)

Thanks, it's just my name and last initial.

Replies from: Nisan
comment by Nisan · 2013-04-02T00:35:06.375Z · LW(p) · GW(p)

Ah, I thought it was a math-flavored pseudonym. Also, I added an addendum to my comment above.

comment by magfrump · 2013-04-03T22:49:33.882Z · LW(p) · GW(p)

There are several people on LW (myself included) who continue to be in graduate school in mathematics. If you're interested in just talking math, there'll be an audience for that. I would personally be interested in more academic networking happening here--even if most people on LW will end up leaving mathematics as such.

Replies from: Adele_L
comment by Adele_L · 2013-04-04T03:50:32.971Z · LW(p) · GW(p)

I would personally be interested in more academic networking happening here--even if most people on LW will end up leaving mathematics as such.

Oh yeah, of course! I currently intend on getting my Ph.D. at least, although I am less certain about remaining in academia after that. I'm not sure LW is the place to talk about math that isn't of more general interest, but I am happy to talk more about it in PMs (I'm also a number theorist).

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2013-04-29T08:45:38.129Z · LW(p) · GW(p)

I consulted the Magic ∞-Ball (a neighborhood of semantic space with oracular properties) and it said: "The adelic cohomology of constructive quantum ordinals is technically essential to proving that induction plus reflection is asymptotically optimal in all Ω-logical worlds, and that's the key theorem in seed AI. So number theory is very important."

comment by Jennifer_H · 2013-10-23T04:19:48.731Z · LW(p) · GW(p)

Hello!

I'm Jennifer; I'm currently a graduate student in medieval literature and a working actor. Thanks to homeschooling, though, I do have a solid background and abiding interest in quantum physics/pure mathematics/statistics/etc., and 'aspiring rationalist' is probably the best description I can provide! I found the site through HPMoR.

Current personal projects: learning German and Mandarin, since I already have French/Latin/Spanish/Old English/Old Norse taken care of, and much as I personally enjoy studying historical linguistics and old dead languages, knowing Mandarin would be much more practical (in terms of being able to communicate with the greatest number of people when travelling, doing business, reading articles, etc.)

Replies from: Adele_L, komponisto, shminux
comment by Adele_L · 2013-10-23T04:54:33.669Z · LW(p) · GW(p)

Hey, another homeschooled person! There seem to be a lot of us here. How was your experience? Mine was the crazy religious type, but I still consider it to have been an overall good thing for my development relative to other feasible options.

Replies from: lavalamp, Jennifer_H
comment by lavalamp · 2014-01-01T07:08:18.982Z · LW(p) · GW(p)

Me three-- I thought I was the only one, where are we all hiding? :)

comment by Jennifer_H · 2013-10-23T05:12:29.143Z · LW(p) · GW(p)

My experience was, overall, excellent - although my parents are definitely highly religious. (To be more precise, my father is a pastor, so biology class certainly contained some outdated ideas!) However, I'm in complete agreement - relative to any other possible options, I don't think I could have gotten a better education (or preparation for postsecondary/graduate studies) any other way.

Replies from: Adele_L
comment by Adele_L · 2013-10-24T03:46:37.714Z · LW(p) · GW(p)

Yeah, I got taught young-earth creationism instead of evolution. But despite this, I think I was better prepared academically than most of my peers.

comment by komponisto · 2014-01-01T00:32:57.269Z · LW(p) · GW(p)

Your self-description is one of the best arguments for homeschooling I have ever seen or could imagine being made. (See also: Lillian Pierce.)

Welcome to LW, and please keep existing.

comment by Shmi (shminux) · 2013-10-23T04:39:21.786Z · LW(p) · GW(p)

Impressive! How do you plan to learn Mandarin? Immersion? Rosetta Stone?

Replies from: Jennifer_H
comment by Jennifer_H · 2013-10-23T05:00:37.441Z · LW(p) · GW(p)

A combination of methods, based on what has worked for me in the past with other languages! I've used Rosetta Stone before, for French & Spanish, and while it definitely has advantages, I (personally - I also know people who love it!) found it very time-consuming for very little actual learning, and it's expensive for what it is.

Basically:

a) I have enough friends who are either native or fluent speakers of Mandarin that once I'm a little more confident with the basics, I will draft them to help me practice conversation skills :)

b) My university offers inexpensive part-time courses to current students.

c) Lots of reading, textbook exercises, watching films, listening to music, translating/reading newspapers, etc. in the language.

d) I'm planning to go to China to teach English in the not-too-distant future, so while I'd like to have basic communication skills down before I go, immersion will definitely help!

comment by David_Chapman · 2013-09-14T18:04:59.532Z · LW(p) · GW(p)

Hi!

I’ve been interested in how to think well since early childhood. When I was about ten, I read a book about cybernetics. (This was in the Oligocene, when “cybernetics” had only recently gone extinct.) It gave simple introductions to probability theory, game theory, information theory, boolean switching logic, control theory, and neural networks. This was definitely the coolest stuff ever.

I went on to MIT, and got an undergraduate degree in math, specializing in mathematical logic and the theory of computation—fields that grew out of philosophical investigations of rationality.

Then I did a PhD at the MIT AI Lab, continuing my interest in what thinking is. My work there seems to have been turned into a surrealistic novel by Ken Wilber, a woo-ish pop philosopher. Along the way, I studied a variety of other fields that give diverse insights into thinking, ranging from developmental psychology to ethnomethodology to existential phenomenology.

I became aware of LW gradually over the past few years, mainly through mentions by people I follow on Twitter. As a lurker, there’s a lot about the LW community I’ve loved. On the other hand, I think some fundamental, generally-accepted ideas here are limited and misleading. I began considering writing about that recently, and posted some musings about whether and how it might be useful to address these misconceptions. (This was perhaps ruder than it ought to have been.) It prompted a reply post from Yvain, and much discussion on both his site and mine.

I followed that up with a more constructive post on aspects of how to think well that LW generally overlooks. In comments on that post, several frequent LW contributors encouraged me to re-post that material here. I may yet do that!

For now, though, I’ve started a sequence of LW articles on the difference between uncertainty and probability. Missing this distinction seems to underlie many of the ways I find LW thinking limited. Currently my outline for the sequence has seven articles, covering technical explanations of this difference, with various illustrations; the consequences of overlooking the distinction; and ways of dealing with uncertainty when probability theory is unhelpful.

(Kaj Sotala has suggested that I ask for upvotes on this self-introduction, so I can accumulate enough karma to move the articles from Discussion to Main. I wouldn’t have thought to ask that myself, but he seems to know what he’s doing here! :-)

O&BTW, I also write about contemporary trends in Buddhism, on several web sites, including a serial, philosophical, tantric Buddhist vampire romance novel.

comment by lll · 2013-04-02T01:24:38.060Z · LW(p) · GW(p)

Hey everyone!

I'm lll; my real name is Lukas. I am a student at a technical university in the US and a hobbyist FOSS programmer.

I discovered Harry Potter and the Methods of Rationality accidentally one night, and since then I've been completely hooked on it. After I caught up, I decided to check out the Less Wrong community. I've been lurking since then, reading the essays, comments, hanging out in the IRC channel.

Replies from: EvelynM
comment by EvelynM · 2013-04-02T02:09:50.698Z · LW(p) · GW(p)

Welcome to Less Wrong III!

Replies from: Kindly, lll
comment by Kindly · 2013-04-02T03:13:22.107Z · LW(p) · GW(p)

It's not III, it's lll.

Replies from: Manfred, army1987, EvelynM, lll
comment by Manfred · 2013-04-06T02:33:16.494Z · LW(p) · GW(p)

We can just call him CL for short, to distinguish him from IIV.

comment by A1987dM (army1987) · 2013-04-02T16:11:57.381Z · LW(p) · GW(p)

Damn sans-serif fonts...

comment by EvelynM · 2013-04-02T16:29:32.967Z · LW(p) · GW(p)

If I were reading this in inconsolata, I'd have known that. Thanks.

comment by lll · 2013-04-02T13:33:00.934Z · LW(p) · GW(p)

It seems like my username is already sparking some controversies. It's three lowercase L letters.

My initials are LL, but I can't have a two-letter username, so LLL; but I thought uppercase would be too much, so lll it is.

comment by lll · 2013-04-02T02:13:48.335Z · LW(p) · GW(p)

Thank you!

I am definitely enjoying this community. I am a recent Reddit expat, too, so I will focus my internet browsing time here. I don't think I will miss Reddit at all.

Replies from: VCavallo
comment by VCavallo · 2013-04-04T18:56:37.951Z · LW(p) · GW(p)

If your Reddit time commitment was anything like that of other people I know, you should be able to blow through all the sequences in about a day or two : )

comment by Roman_Yampolskiy · 2013-09-16T22:35:39.397Z · LW(p) · GW(p)

Hey, my name is Roman. You can read my detailed bio here, as well as some research papers I published on the topics of AI and security. I decided to attend a local LW meet up and it made sense to at least register on the site. My short term goal is to find some people in my geographic area (Louisville, KY, USA) to befriend.

Replies from: shminux, Wei_Dai, lukeprog
comment by Shmi (shminux) · 2013-09-16T23:29:03.984Z · LW(p) · GW(p)

Nice to see more AI experts here.

comment by Wei Dai (Wei_Dai) · 2013-09-17T14:04:40.143Z · LW(p) · GW(p)

Hi Roman. Would you mind answering a few more questions that I have after reading your interview with Luke? Carl Shulman and Nick Bostrom have a paper coming out arguing that embryo selection can eventually (or maybe even quickly) lead to IQ gains of 100 points or more. Do you think Friendly AI will still be an unsolvable problem for IQ 250 humans? More generally, do you see any viable path to a future better than technological stagnation short of autonomous AGI? What about, for example, mind uploading followed by careful recursive upgrading of intelligence?

Replies from: Roman_Yampolskiy, shminux
comment by Roman_Yampolskiy · 2013-09-17T15:19:41.304Z · LW(p) · GW(p)

Hey Wei, great question! Agents (augmented humans) with an IQ of 250 would be superintelligent with respect to our current position on the intelligence curve and would be just as dangerous to us, unaugmented humans, as any sort of artificial superintelligence. They would not be guaranteed to be Friendly by design and would be as foreign to us in their desires as most of us are from severely mentally retarded persons. For most of us (sadly?) such people are something to try and fix via science, not someone whose wishes we want to fulfill. In other words, I don't think you can rely on an unverified (for safety) agent (even one with higher intelligence) to make sure that other agents with higher intelligence are designed to be human-safe. All the examples you give start by replacing humanity with something not-human (uploads, augments) and proceed to ask the question of how to save humanity. At that point you already lost humanity by definition. I am not saying that is not going to happen; it probably will. Most likely we will see something predicted by Kurzweil (a merger of machines and people).

Replies from: Wei_Dai, None, Moss_Piglet, shminux
comment by Wei Dai (Wei_Dai) · 2013-09-17T16:14:43.011Z · LW(p) · GW(p)

I think if I became an upload (assuming it's a high fidelity emulation) I'd still want roughly the same things that I want now. Someone who is currently altruistic towards humanity should probably still be altruistic towards humanity after becoming an upload. I don't understand why you say "At that point you already lost humanity by definition".

Replies from: Dr_Manhattan, Roman_Yampolskiy, Bugmaster
comment by Dr_Manhattan · 2013-09-17T21:05:45.832Z · LW(p) · GW(p)

Someone who is currently altruistic towards humanity should

Wei, the question here is would rather than should, no? It's quite possible that the altruism that I endorse as a part of me is related to my brain's empathy module, much of which might break if I can no longer relate to other humans. There are of course good fictional examples of this, e.g. Ted Chiang's "Understand" - http://www.infinityplus.co.uk/stories/under.htm - and, ahem, Watchmen's Dr. Manhattan.

Replies from: Eliezer_Yudkowsky, Bugmaster
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-17T22:24:39.618Z · LW(p) · GW(p)

Logical fallacy: Generalization from fictional evidence.

A high-fidelity upload who was previously altruistic toward humanity would still be altruistic during the first minute after awakening; their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change.

If you start doing code modification, of course, some but not all bets are off.

Replies from: Dr_Manhattan, MugaSofer, Bugmaster
comment by Dr_Manhattan · 2013-09-18T02:42:08.755Z · LW(p) · GW(p)

Well, I did put a disclaimer by using the standard terminology :) Fiction is good for suggesting possibilities, you cannot derive evidence from it of course.

I agree on the first-minute point, but do not see why it's relevant, because there is the 999999th minute by which value drift will take over (if altruism is strongly related to empathy). I guess upon waking up I'd make value preservation my first order of business, but since an upload is still evolution's spaghetti code it might be a race against time.

comment by MugaSofer · 2013-09-20T18:42:03.269Z · LW(p) · GW(p)

Perhaps the idea is that the sensory experience of no longer falling into the category of "human" would cause the brain to behave in unexpected ways?

I don't find that especially likely, mind, although I suppose long-term there might arise a self-serving "em supremacy" meme.

comment by Bugmaster · 2013-09-18T03:46:30.144Z · LW(p) · GW(p)

their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change.

I don't see why this is necessarily true, unless you treat "altruism toward humanity" as a terminal goal.

When I was a very young child, I greatly valued my brightly colored alphabet blocks; but today, I pretty much ignore them. My mind had developed to the point where I can fully visualize all the interesting permutations of the blocks in my head, should I need to do so for some reason.

Replies from: somervta
comment by somervta · 2013-09-18T08:30:35.241Z · LW(p) · GW(p)

I don't see why this is necessarily true, unless you treat "altruism toward humanity" as a terminal goal.

Well, yes. I think that's the point. I certainly don't only value other humans for the way that they interest me - if that were so, I probably wouldn't care about most of them at all. Humanity is a terminal value to me - or, more generally, the existence and experiences of happy, engaged, thinking sentient beings. Humans qualify, regardless of whether or not uploads exist (and uploads, of course, also qualify).

Replies from: Bugmaster
comment by Bugmaster · 2013-09-19T22:37:30.044Z · LW(p) · GW(p)

How do you know that "the existence and experiences of happy, engaged, thinking sentient beings" is indeed one of your terminal values, and not an instrumental value ?

comment by Bugmaster · 2013-09-17T21:31:49.968Z · LW(p) · GW(p)

+1 for linking to Understand ; I remembered reading the story long ago, but I forgot the link. Thanks for reminding me !

comment by Roman_Yampolskiy · 2013-09-17T20:22:36.148Z · LW(p) · GW(p)

We can talk about what high-fidelity emulation includes. Will it be just your mind? Or will it be mind + body + environment? In the most common case (with an absent body) most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved, creating a new type of agent. People are mostly defined by their physiological needs (think of Maslow's pyramid). An entity with no such needs (or with such needs satisfied by virtual/simulated abundant resources) will not be human and will not want the same things as a human. Someone who is no longer subject to human weaknesses or relatively limited intelligence may lose all allegiance to humanity, since they would no longer be a part of it. So I guess I define "humanity" as comprised of standard/unaltered humans. Anything superior is no longer a human to me, just as we are not first and foremost Neanderthals and only afterwards Homo sapiens.

Replies from: Nornagest, TheOtherDave
comment by Nornagest · 2013-09-17T20:44:11.028Z · LW(p) · GW(p)

Insofar as Maslow's pyramid accurately models human psychology (a point on which I have my doubts), I don't think the majority of people you're likely to be speaking to on the Internet are defined in terms of their low-level physiological needs. Food, shelter, physical security -- you might have fears of being deprived of these, or might even have experienced temporary deprivation of one or more (say, if you've experienced domestic violence, or fought in a war), but in the long run they're not likely to dominate your goals the way they might for, say, a Clovis-era Alaskan hunter. We treat cases where they do as abnormal, and put a lot of money into therapy for them.

If we treat a modern, first-world, middle-class college student with no history of domestic or environmental violence as psychologically human, then, I don't see any reason why we shouldn't extend the same courtesy to an otherwise humanlike emulation whose simulated physiological needs are satisfied as a function of the emulation process.

Replies from: Roman_Yampolskiy
comment by Roman_Yampolskiy · 2013-09-17T22:21:30.660Z · LW(p) · GW(p)

I don't know about you, but for me only a few hours a day are devoted to thinking or other non-physiological pursuits; the rest goes to sleeping, eating, drinking, sex, physical exercise, etc. My goals are dominated by the need to acquire resources to support the physiological needs of me and my family. You can extend any courtesy you want to anyone you want, but you (a human body) and a computer program (software) don't have much in common as far as belonging to the same group is concerned. Software is not humanity; at best it is a partial simulation of one aspect of one person.

Replies from: Nornagest
comment by Nornagest · 2013-09-17T22:48:27.995Z · LW(p) · GW(p)

It seems to me that there are a couple of things going on here. I spend a reasonable amount of time (probably a couple of hours of conscious effort each day; I'm not sure how significant I want to call sleep) meeting immediate physical needs, but those don't factor much into my self-image or my long-term goals; I might spend an hour each day making and eating meals, but ensuring this isn't a matter of long-term planning nor a cherished marker of personhood for me. Looked at another way, there are people that can't eat or excrete normally because of one medical condition or another, but I don't see them as proportionally less human.

I do spend a lot of time gaining access to abstract resources that ultimately secure my physiological satisfaction, on the other hand, and that is tied closely into my self-image, but it's so far removed from its ultimate goal that I don't feel that cutting out, say, apartment rental and replacing it with a proportional bill for Amazon AWS cycles would have much effect on my thoughts or actions further up the chain, assuming my mental and emotional machinery remains otherwise constant. I simply don't think about the low-level logistics that much; it's not my job. And I'm a financially independent adult; I'd expect the college student in the grandparent to be thinking about them in the most abstract possible way, if at all.

comment by TheOtherDave · 2013-09-18T00:43:45.220Z · LW(p) · GW(p)

Well, yes, a lot depends on what we assume the upload includes, and how important the missing stuff is.
If Dave!upload doesn't include X1, and X2 defines Dave!original's humanity, and X1 contains X2, then Dave!upload isn't human... more or less tautologically.

We can certainly argue about whether our experiences of hunger, thirst, fatigue, etc. qualify as X1, X2, or both... or, more generally, whether anything does. I'm not nearly as confident as you sound about either of those things.

But I'm not sure that matters.

Let's posit for the sake of comity that there exists some set of experiences that qualify for X2. Maybe it's hunger, thirst, fatigue, etc. as you suggest. Maybe it's curiosity. Maybe it's boredom. Maybe human value is complex and X2 actually includes a carefully balanced brew of a thousand different things, many of which we don't have words for.

Whatever it is, if it's important to us that uploads be human, then we should design our uploads so that they have X2. Right?

But you seem to be taking it for granted that whatever X2 turns out to be, uploads won't experience X2.
Why?

Replies from: Roman_Yampolskiy
comment by Roman_Yampolskiy · 2013-09-19T16:12:03.214Z · LW(p) · GW(p)

Just because you can experience something someone else can does not mean that you are of the same type. Belonging to a class of objects (e.g. humans) requires you to be one. A simulation of a piece of wood (visual texture, graphics, molecular structure, etc.) is not a piece of wood and so does not belong to the class of pieces of wood. A simulated piece of wood can undergo a simulated burning process or any other wood-suitable experience, but it is still not a piece of wood. Likewise a piece of software is by definition not a human being; it is at best a simulation of one.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-19T16:42:41.358Z · LW(p) · GW(p)

Ah.

So when you say "most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved creating a new type of an agent" you're making a definitional claim that whatever the new agent experiences, it won't be a human feeling, because (being software) the agent definitionally won't be a human. So on your view it might experience hunger, thirst, fatigue, etc., or it might not, but if it does they won't be human hunger, thirst, fatigue, etc., merely simulated hunger, thirst, fatigue, etc.

Yes? Do I understand you now?

FWIW, I agree that there are definitions of "human being" and "software" by which a piece of software is definitionally not a human being, though I don't think those are useful definitions to be using when thinking about the behavior of software emulations of human beings. But I'm willing to use your definitions when talking to you.

You go on to say that this agent, not being human, will not want the same things as a human.
Well, OK; that follows from your definitions.

One obvious followup question is: would a reliable software simulation of a human, equipped with reliable software simulations of the attributes and experiences that define humanity (whatever those turn out to be; I labelled them X2 above), generate reliable software simulations of wanting what a human wants?

Relatedly, do we care? That is, given a choice between an upload U1 that reliably simulates wanting what a human wants, and an upload U2 that doesn't reliably simulate wanting what a human wants, do we have any grounds for preferring to create U1 over U2?

Because if it's important to us that uploads reliably simulate being human, then we should design our uploads so that they have reliable simulations of X2. Right?

Replies from: Roman_Yampolskiy
comment by Roman_Yampolskiy · 2013-09-19T18:39:44.329Z · LW(p) · GW(p)

So uploads are typically not mortal, hungry for food, etc. You are asking whether, if we create such exact simulations of humans that they have all the typical limitations, they would have the same wants as real humans; probably yes. The original question Wei Dai was asking me was about my statement that if we become uploads, "At that point you already lost humanity by definition". Allow me to propose a simple thought experiment. We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people. Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario? I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.

Replies from: TheOtherDave, CCC, hairyfigment, shminux
comment by TheOtherDave · 2013-09-19T19:42:05.590Z · LW(p) · GW(p)

You are asking whether, if we create such exact simulations of humans that they have all the typical limitations, they would have the same wants as real humans; probably yes.

I'm also asking, should we care?
More generally, I'm asking what is it about real humans we should prefer to preserve, given the choice? What should we be willing to discard, given a reason?

The original question Wei Dai was asking me was about my statement that if we become uploads, "At that point you already lost humanity by definition".

Fair enough. I've already agreed that this is true for the definitions you've chosen, so if that's really all you're talking about, then I guess there's nothing more to say. As I said before, I don't think those are useful definitions, and I don't use them myself.

Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario?

Source code? Maybe not; it depends on whether that code is ever compiled.
Object code? Yes, it makes a huge difference.

I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.

Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we've lost, and how much do I value what we've gained?
My answer depends on the specifics of the simulation, and is based on what I value about humanity.

The thing is, I could ask precisely the same question about aging from 18 to 80. Some things are lost, other things are not. Does my 18-year-old self get destroyed in the process, or does it just transform into an 80-year-old? My answer depends on the specifics of the aging, and is based on what I value about my 18-year-old self.

We face these questions every day; they aren't some weird science-fiction consideration. And for the most part, we accept that as long as certain key attributes are preserved, we continue to exist.

Replies from: Roman_Yampolskiy
comment by Roman_Yampolskiy · 2013-09-21T20:48:38.598Z · LW(p) · GW(p)

Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we've lost, and how much do I value what we've gained?

I agree with your overall assessment. However, to me if any part of humanity is lost, it is already an unacceptable loss.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-21T22:20:26.853Z · LW(p) · GW(p)

OK. Thanks for clarifying your position.

comment by CCC · 2013-09-19T19:51:47.175Z · LW(p) · GW(p)

We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people.

At the very least, by this point we've killed a lot of people. The fact that they've been backed up doesn't make the murder less heinous.

Whether or not 'humanity' gets destroyed in this scenario depends on the definition that you apply to the word 'humanity'. If you mean the flesh and blood, the meat and bone, then yes, it gets destroyed. If you mean values and opinions, thoughts and dreams, then some of them are destroyed but not all of them - the cyberspace backups still have those things (presuming that they're actually working cyberspace backups).

comment by hairyfigment · 2013-09-19T19:00:48.864Z · LW(p) · GW(p)

Well, if nothing else happens our new computer substrate will stop working. But if we remove that problem - in what sense has this not already happened?

If you like, we can assume that Eliezer is wrong about that. In which case, I'll have to ask what you think is actually true, whether a smarter version of Aristotle could tell the difference by sitting in a dark room thinking about consciousness, and whether or not we should expect this to matter.

comment by Shmi (shminux) · 2013-09-19T18:56:33.196Z · LW(p) · GW(p)

We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people.

Ah, The Change in the Prime Intellect scenario. Is it possible to reconstruct meat humans if the uploads decide to do so? If not, then something has been irrecoverably lost.

comment by Bugmaster · 2013-09-17T18:54:05.602Z · LW(p) · GW(p)

Have you ever had the unfortunate experience of hanging out with really boring people; say, at a party ? The kind of people whose conversations are so vapid and repetitive that you can practically predict them verbatim in your head ? Were you ever tempted to make your excuses and duck out early ?

Now imagine that it's not a party, but the entire world; and you can't leave, because it's everywhere. Would you still "feel altruistic toward humanity" at that point ?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-17T20:41:47.490Z · LW(p) · GW(p)

It's easy to conflate uploads and augments, here, so let me try to be specific (though I am not Wei Dai and do not in any way speak for them).

I experience myself as preferring that people not suffer, for example, even if they are really boring people or otherwise not my cup of tea to socialize with. I can't see why that experience would change upon a substrate change, such as uploading. Basically the same thing goes for the other values/preferences I experience.

OTOH, I don't expect the values/preferences I experience to remain constant under intelligence augmentation, whatever the mechanism. But that's kind of true across the board. If you did some coherently specifiable thing that approximates the colloquial meaning of "doubled my intelligence" overnight, I suspect that within a few hours I would find myself experiencing a radically different (from my current perspective) set of values/preferences.

If instead of "doubling" you "multiplied by 10" I expect that within a few hours I would find myself experiencing an incomprehensible (from my current perspective) set of values/preferences.

Replies from: Bugmaster
comment by Bugmaster · 2013-09-17T21:05:37.211Z · LW(p) · GW(p)

It's easy to conflate uploads and augments, here...

Wait, why shouldn't they be conflated ? Granted, an upload does not necessarily have to possess augmented intelligence, but IMO most if not all of them would obtain it in practice.

I can't see why that experience would change upon a substrate change, such as uploading.

Agreed, though see above.

If you did some coherently specifiable thing that approximates the colloquial meaning of "doubled my intelligence" overnight, I suspect that within a few hours I would find myself experiencing a radically different (from my current perspective) set of values/preferences. If instead of "doubling" you "multiplied by 10" I expect that within a few hours I would find myself experiencing an incomprehensible (from my current perspective) set of values/preferences.

I agree completely; that was my point as well.

Edited to add:

However incomprehensible one's new values might be after augmentation, I am reasonably certain that they would not include "an altruistic attitude toward humanity" (as per our current understanding of the term). By analogy, I personally neither love nor hate individual insects; they are too far beneath me.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-18T00:24:54.818Z · LW(p) · GW(p)

Mostly, I prefer not to conflate them because our shared understanding of upload is likely much better-specified than our shared understanding of augment.

I agree completely; that was my point as well.

Except that, as you say later, you have confidence about what those supposedly incomprehensible values would or wouldn't contain.

By analogy, I personally neither love nor hate individual insects; they are too far beneath me.

Turning that analogy around.... I suspect that if I remembered having been an insect and then later becoming a human being, and I believed that was a reliably repeatable process, both my emotional stance with respect to the intrinsic value of insect lives and my pragmatic stance with respect to their instrumental value would be radically different than they are now and far more strongly weighted in the insects' favor.

With respect to altruism and vast intelligence gulfs more generally... I dunno. Five-day-old infants are much stupider than I am, but I generally prefer that they not suffer. OTOH, it's only a mild preference; I don't really seem to care all that much about them in the abstract. OTGH, when made to think about them as specific individuals I end up caring a lot more than I can readily justify over a collection. OT4H, I see no reason to expect any of that to survive what we're calling "intelligence augmentation", as I don't actually think my cognitive design allows my values and my intelligence (ie my optimize-environment-for-my-values) to be separated cleanly. OT5H, there are things we might call "intelligence augmentation", like short-term-memory buffer-size increases, that might well be modular in this way.

Replies from: Bugmaster
comment by Bugmaster · 2013-09-18T00:53:57.019Z · LW(p) · GW(p)

Except that, as you say later, you have confidence about what those supposedly incomprehensible values would or wouldn't contain.

More specifically, I have confidence only about one specific thing that these values would not contain. I have no idea what the values would contain; this still renders them incomprehensible, as far as I'm concerned, since the potential search space is vast (if not infinite).

I suspect that if I remembered having been an insect and then later becoming a human being...

I am not entirely convinced that a vastly augmented mind would remember being a regular human in the same way that we humans remember what we had for lunch yesterday. The situation may be more analogous to remembering what it was like being a newborn.

Most people don't remember what being a newborn baby was like; but even if you could recall it with perfect clarity, how much of that information would you find really useful ? A newborn's senses are dull; his mind is mostly empty of anything but basic desires; his ability to affect the world is negligible. There's not much there that is even worth remembering... and, IMO, there's a good chance that a transhuman intelligence would feel the same way about its past humanity.

... and I believed that was a reliably repeatable process, both my emotional stance with respect to the intrinsic value of insect lives and my pragmatic stance with respect to their instrumental value would be radically different than they are now and far more strongly weighted in the insects' favor.

I agree with your later statement:

OT4H, I see no reason to expect any of that to survive what we're calling "intelligence augmentation", as I don't actually think my cognitive design allows my values and my intelligence (ie my optimize-environment-for-my-values) to be separated cleanly.

To expand upon it a bit:

I agree with you regarding the pragmatic stance, but disagree about the "intrinsic value" part. As an adult human, you care about babies primarily because you have a strong built-in evolutionary drive to do so. And yet, even that powerful drive is insufficient to overcome many people's minds; they choose to distance themselves from babies in general, and refuse to have any of their own, specifically. I am not convinced that an augmented human would retain such a built-in drive at all (targeted at unaugmented humans instead of, or in addition to, infants), and even if they did, I see no reason to believe that it would have a stronger hold over transhumans than over ordinary humans.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-18T01:15:57.103Z · LW(p) · GW(p)

Like you, I am unconvinced that a "sufficiently augmented" human would continue to value unaugmented humans, or infants.

Unlike you, I am also unconvinced it would cease to value unaugmented humans, or infants.

Similarly, I am unconvinced that it would continue to value its own existence, or, well, anything at all. It might turn out that all "sufficiently augmented" human minds promptly turn themselves off. It might turn out that they value unaugmented humans more than anything else in the universe. Or insects. Or protozoa. Or crystal lattices. Or the empty void of space. Or paperclips.

More generally, when I say I expect my augmented self's values to be incomprehensible to me, I actually mean it.

I am not entirely convinced that a vastly augmented mind would remember being a regular human in the same way that we humans remember what we had for lunch yesterday.

Mostly, I think that will depend on what kinds of augmentations we're talking about. But I don't think we can actually sustain this discussion with an answer to that question at any level more detailed than a handwavy notion of "vastly augmented" and analogies to insects and protozoa, so I'm content to posit either that it does, or that it doesn't, whichever suits you.

My own intuition, FWIW, is that some such minds will remember their true origins, and others won't, and others will remember entirely fictionalized accounts of their origins, and still others will combine those states in various ways.

There's not much there that is even worth remembering.

You keep talking like this, as though these kinds of value judgments were objective, or at least reliably intersubjective. It's not at all clear to me why. I am perfectly happy to take your word for it that you don't value anything about your hypothetical memories of infancy, but generalizing that to other minds seems unjustified.

For my own part... well, my mom is not a particularly valuable person, as people go. There's no reason you should choose to keep her alive, rather than someone else; she provides no pragmatic benefit relative to a randomly selected other person. Nevertheless, I would prefer that she continue to live, because she's my mom, and I value that about her.

My memories of my infancy might similarly not be particularly valuable as memories go; I agree. Nevertheless, I might prefer that I continue to remember them, because they're my memories of my infancy.

And then again, I might not. (Cf incomprehensible values of augments, above.)

Replies from: Bugmaster
comment by Bugmaster · 2013-09-18T03:37:58.686Z · LW(p) · GW(p)

Unlike you, I am also unconvinced it would cease to value unaugmented humans, or infants. Similarly, I am unconvinced that it would continue to value its own existence, or, well, anything at all.

Even if you don't buy my arguments, given the nearly infinite search space of things that it could end up valuing, what would its probability of valuing any one specific thing like "unaugmented humans" end up being ?

But I don't think we can actually sustain this discussion with an answer to that question at any level more detailed than a handwavy notion of "vastly augmented" and analogies to insects and protozoa, so I'm content to posit either that it does, or that it doesn't, whichever suits you.

Fair enough, though we could probably obtain some clues by surveying the incredibly smart -- though merely human -- geniuses that do exist in our current world, and extrapolating from there.

My own intuition, FWIW, is that some such minds will remember their true origins...

It depends on what you mean by "remember", I suppose. Technically, it is reasonably likely that such minds would be able to access at least some of their previously accumulated experiences in some form (they could read the blog posts of their past selves, if push comes to shove), but it's unclear what value they would put on such data, if any.

You keep talking like this, as though these kinds of value judgments were objective, or at least reliably intersubjective. It's not at all clear to me why.

Maybe it's just me, but I don't think that my own, personal memories of my own, personal infancy would differ greatly from anyone else's -- though, not being a biologist, I could be wrong about that. I'm sure that some infants experienced environments with different levels of illumination and temperature; some experienced different levels of hunger or tactile stimuli, etc. However, the amount of information that an infant can receive and process is small enough so that the sum total of his experiences would be far from unique. Once you've seen one poorly-resolved bright blob, you've seen them all.

By analogy, I ate a banana for breakfast yesterday, but I don't feel anything special about it. It was a regular banana from the store; once you've seen one, you've seen them all, plus or minus some minor, easily comprehensible details like degree of ripeness (though, of course, I might think differently if I was a botanist).

IMO it is likely that an augmented mind might think the same way about ordinary humans. Once you've seen one human, you've seen them all, plus or minus some minor details...

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-18T03:57:20.766Z · LW(p) · GW(p)

what would its probability of valuing any one specific thing like "unaugmented humans" end up being ?

Vanishingly small, obviously, if we posit that its pre-existing value system is effectively uncorrelated with its post-augment value system, which it might well be. Hence my earlier claim that I am unconvinced that a "sufficiently augmented" human would continue to value unaugmented humans. (You seem to expect me to disagree with this, which puzzles me greatly, since I just said the same thing myself; I suspect we're simply not understanding one another.)

we could probably obtain some clues by surveying the incredibly smart -- though merely human -- geniuses that do exist in our current world, and extrapolating from there.

Sure, we could do that, which would give us an implicit notion of "vastly augmented intelligence" as something like naturally occurring geniuses (except on a much larger scale). I don't think that's terribly likely, but as I say, I'm happy to posit it for discussion if you like.

it's unclear what value they would put on such data, if any. [...] I don't think that my own, personal memories of my own, personal infancy would differ greatly from anyone else's [...] IMO it is likely that an augmented mind might think the same way about ordinary humans. Once you've seen one human, you've seen them all, plus or minus some minor details...

I agree that it's unclear.

To say that more precisely, an augmented mind would likely not value its own memories (relative to some roughly identical other memories), or any particular ordinary human, any more than an adult human values its own childhood blanket rather than some identical blanket, or values one particular and easily replaceable goldfish.

The thing is, some adult humans do value their childhood blankets, or one particular goldfish.

And others don't.

Replies from: Bugmaster
comment by Bugmaster · 2013-09-19T22:36:37.994Z · LW(p) · GW(p)

You seem to expect me to disagree with this, which puzzles me greatly, since I just said the same thing myself; I suspect we're simply not understanding one another.

That's correct; for some reason, I was thinking that you believed that a human's preference for the well-being of his (formerly) fellow humans is likely to persist after augmentation. Thus, I did misunderstand your position; my apologies.

The thing is, some adult humans do value their childhood blankets, or one particular goldfish.

I think that childhood blankets and goldfish are different from an infant's memories, but perhaps this is a topic for another time...

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-20T00:35:02.548Z · LW(p) · GW(p)

I'm not quite sure what other time you have in mind, but I'm happy to drop the subject. If you want to pick it up some other time feel free.

comment by [deleted] · 2013-09-17T20:25:00.809Z · LW(p) · GW(p)

I'm going to throw out some more questions. You are by no means obligated to answer.

In your AI Safety Engineering paper you say, "We propose that AI research review boards are set up, similar to those employed in review of medical research proposals. A team of experts in artificial intelligence should evaluate each research proposal and decide if the proposal falls under the standard AI – limited domain system or may potentially lead to the development of a full blown AGI."

But would we really want to do this today? I mean, in the near future--say the next five years--AGI seems pretty hard to imagine. So might this be unnecessary?

Or, what if later on when AGI could happen, some random country throws the rules out? Do you think that promoting global cooperation now is a useful way to address this problem, as I assert in this shamelessly self-promoted blog post?

The general question I am after is, How do we balance the risks and benefits of AI research?

Finally you say in your interview, "Conceivable yes, desirable NO" on the question of relinquishment. But are you not essentially proposing relinquishment/prevention?

Replies from: Roman_Yampolskiy
comment by Roman_Yampolskiy · 2013-09-17T22:12:39.271Z · LW(p) · GW(p)

Just because you can’t imagine AGI in the next 5 years doesn’t mean that in four years someone will not propose a perfectly workable algorithm for achieving it. So yes, it is necessary. Once everyone sees how obvious AGI design is, it will be too late. Random countries don’t develop cutting-edge technology; it is always done by the same superpowers (USA, Russia, etc.). I didn’t read your blog post so can’t comment on “global cooperation”. As to the general question you are asking, you can get most conceivable benefits from domain expert AI without any need for AGI. Finally, I do think that relinquishment/delaying is a desirable thing, but I don’t think it is implementable in practice.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-18T00:53:20.984Z · LW(p) · GW(p)

you can get most conceivable benefits from domain expert AI without any need for AGI.

Is there a short form of where you see the line between these two types of systems? For example, what is the most "AGI-like" AI you can conceive of that is still "really a domain-expert AI" (and therefore putatively safe to develop), or vice-versa?

My usual sense is that these are fuzzy terms people toss around to point to very broad concept-clusters, which is perfectly fine for most uses, but if we're really getting to the point of trying to propose policy based on these categories, it's probably good to have a clearer shared understanding of what we mean by the terms.

That said, I haven't read your paper; if this distinction is explained further there, that's fine too.

Replies from: Roman_Yampolskiy
comment by Roman_Yampolskiy · 2013-09-18T21:06:38.939Z · LW(p) · GW(p)

Great question. To me a system is domain specific if it can’t be switched to a different domain without re-designing it. I can’t take Deep Blue and use it to sort mail instead. I can’t take Watson and use it to drive cars. An AGI (for which I have no examples) would be capable of switching domains. If we take humans as an example of general intelligence, you can take an average person and make them work as a cook, driver, babysitter, etc, without any need for re-designing them. You might need to spend some time teaching that person a new skill, but they can learn efficiently and perhaps just by looking at how it should be done. I can’t do this with domain expert AI. Deep Blue will not learn to sort mail regardless of how many times I demonstrate that process.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-18T22:24:54.407Z · LW(p) · GW(p)

(nods) That's fair. Thanks for clarifying.

comment by Moss_Piglet · 2013-09-17T16:04:07.883Z · LW(p) · GW(p)

I've heard repeatedly that the correlation between IQ and achievement after about 120 (z = 1.33) is pretty weak, possibly even with diminishing returns up at the very top. Is moving to 250 (z = 10) passing a sort of threshold of intelligence at some point where this trend reverses? Or is the idea that IQ stops strongly predicting achievement above 120 wrong?

This is something I've been curious about for a while, so I would really appreciate your help clearing the issue up a bit.

Replies from: ESRogs, wedrifid, Vaniver, Lumifer
comment by ESRogs · 2013-09-17T18:20:04.629Z · LW(p) · GW(p)

In agreement with Vaniver's comment, there is evidence that differences in IQ well above 120 are predictive of success, especially in science. For example:

  • IQs of a sample of eminent scientists were much higher than the average for science PhDs (~160 vs ~130)

  • Among those who take the SAT at age 13, scorers in the top .1% end up outperforming the top 1% in terms of patents and scientific publications produced as adults

I don't think I have good information on whether these returns are diminishing, but we can at least say that they are not vanishing. There doesn't seem to be any point beyond which the correlation disappears.

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-09-18T20:15:44.031Z · LW(p) · GW(p)

I just read the "IQ's of eminent scientists" and realized I really need to get my IQ tested.

I've been relying on my younger brother's test (with the knowledge that older brothers tend to do slightly better but usually within an sd) to guesstimate my own IQ but a) it was probably a capped score like Feynman's since he took it in middle school and b) I have to know if there's a 95% chance of failure going into my field. I'd like to think I'm smart enough to be prominent, but it's irrational not to check first.

Thanks for the information; you might have just saved me a lot of trouble down the line, one way or the other.

Replies from: EHeller, ESRogs
comment by EHeller · 2013-09-19T15:13:25.925Z · LW(p) · GW(p)

I just read the "IQ's of eminent scientists" and realized I really need to get my IQ tested.

I'd be very careful generalizing from that study to the practice of science today. Science in the 1950s was VERY different: the time to the PhD was shorter, postdocs were very rare, and almost everyone stepped into a research faculty position almost immediately.

In today's world, staying in science is much harder: there are lots of grad students competing for many postdocs competing for few permanent science positions. Things like conscientiousness, organization skills, etc. (grant writing is now a huge part of the job) play a much larger role in eventually landing a job than in the past, and luck is a much bigger driver (whether a given avenue of exploration pays off requires a lot of luck. Selecting people whose experiments ALWAYS work is just grabbing people who have been both good AND lucky). It would surprise me if the worsening science career hasn't changed the makeup of an 'eminent scientist'.

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-09-19T16:21:02.723Z · LW(p) · GW(p)

At the same time, all of those points except the luck one could be presented as evidence that the IQ required to be eminent has increased rather than the converse. Grant writing and schmoozing are at least partially a function of verbal IQ, IQ in general strongly predicts academic success in grad school, and competition tends to winnow out the poor performers a lot more than the strong.

Not that I really disagree, I just don't see it as particularly persuasive.

whether a given avenue of exploration pays off requires a lot of luck. Selecting people whose experiments ALWAYS work is just grabbing people who have been both good AND lucky

That's just one of the unavoidable frustrations of human nature, though; an experiment which disconfirms its hypothesis has worked perfectly, it just isn't human nature to notice negatives.

Replies from: EHeller
comment by EHeller · 2013-09-19T18:20:37.244Z · LW(p) · GW(p)

At the same time, all of those points except the luck one could be presented as evidence that the IQ required to be eminent has increased rather than the converse.

I disagree for several reasons. Mostly, conscientiousness, conformity, etc. are personality traits that aren't strongly correlated with IQ (conscientiousness may even be slightly negatively correlated).

IQ in general strongly predicts academic success in grad school, and competition tends to winnow out the poor performers a lot more than the strong.

Would it surprise you to know that the most highly regarded grad students in my physics program all left physics? They had a great deal of success before and in grad school (I went to a top 5 program), but left because they didn't want to deal with the administrative/grant stuff, and because they didn't want to spend years at low pay.

I'd argue that a successful career in science selects for some threshold IQ and then much more strongly for a personality type.

Replies from: Kawoomba
comment by Kawoomba · 2013-09-19T18:22:31.823Z · LW(p) · GW(p)

conscientiousness may even be slightly negatively correlated

No kidding.

comment by ESRogs · 2013-09-19T05:55:19.497Z · LW(p) · GW(p)

Are you American? If you've taken the SAT, you can get a pretty good estimate of your IQ here.

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-09-19T15:59:43.000Z · LW(p) · GW(p)

Mensa apparently doesn't consider the SAT to have a high enough g loading to be useful as an intelligence test after 1994. Although the website's figures are certainly encouraging, it's probably best to take them with a grain of salt.

Replies from: ESRogs
comment by ESRogs · 2013-09-20T00:11:13.777Z · LW(p) · GW(p)

True, but note that, in contrast with Mensa, the Triple Nine Society continued to accept scores on tests taken up through 2005, though with a higher cutoff (of 1520) than on pre-1995 tests (1450).

Also, SAT scores in 2004 were found to have a correlation of about .8 with a battery of IQ tests, which I believe is on par with the correlations IQ tests have with each other. So the SAT really does seem to be an IQ test (and an extremely well-normed one at that if you consider their sample size, though perhaps not as highly g-loaded as the best, like Raven's).

But yeah, if you want to have high confidence in a score, probably taking additional tests would be the best bet. Here's a list of high-ceiling tests, though I don't know if any of them are particularly well-normed or validated.

comment by wedrifid · 2013-09-17T17:10:28.115Z · LW(p) · GW(p)

I've heard repeatedly that the correlation between IQ and achievement after about 120 (z = 1.33) is pretty weak, possibly even with diminishing returns up at the very top.

Is this what you intended to say? "Diminishing returns" seems to apply at the bottom of the scale you mention. You've already selected the part where returns have started diminishing. Sometimes it is claimed that at the extreme top the returns are negative. Is that what you mean?

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-09-18T20:02:33.064Z · LW(p) · GW(p)

Yeah, that's just me trying to do everything in one draft. Editing really is the better part of clear writing.

I meant something along the lines of "I've heard it has diminishing returns and potentially [, probably due to how it affects metabolic needs and rate of maturation] even negative returns at the high end."

comment by Vaniver · 2013-09-17T16:36:00.930Z · LW(p) · GW(p)

Or is the idea that IQ stops strongly predicting achievement above 120 wrong?

Most IQ tests are not very well calibrated above 120ish, because the number of people in the reference sample who scored much higher is rather low. It's also the case that achievement is a function of several different factors, which will probably become the limiting factors for most people at IQs higher than 120. That said, it does seem that in physics, first-tier physicists score better on cognitive tests than second-tier physicists, which suggests that additional IQ is still useful for achievement in the most cognitively demanding fields. It seems likely that augmented humans who do several times better than current humans on cognitive tests will also be able to achieve several times as much in cognitively demanding fields.

comment by Lumifer · 2013-09-18T20:33:18.309Z · LW(p) · GW(p)

Is moving to 250 (z = 10) passing a sort of threshold of intelligence

First, IQ tests don't go to 250 :-) Generally speaking standard IQ tests have poor resolution in the tails -- they cannot reliably identify whether you have the IQ of, say, 170 or 190. At some point all you can say is something along the lines of "this person is in the top 0.1% of people we have tested" and leave it at that.

Second, "achievement" is a very fuzzy word. People mean very different things by it. And other than by money it's hard to measure.

comment by Shmi (shminux) · 2013-09-17T16:50:19.261Z · LW(p) · GW(p)

with IQ of 250

Technicality: there is no such thing as IQ of 250, since IQ is a score on a test, and there is no test calibrated for 1 in 10^22 humans. What you probably mean is the mysterious hypothetical g-factor, partly responsible for the IQ scores, or maybe some other intelligence marker.
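As a rough illustration of why (a minimal sketch assuming IQ is normally distributed with mean 100 and SD 15, which is exactly the idealization that breaks down in the tails):

```python
from scipy.stats import norm

MEAN, SD = 100, 15  # conventional IQ scaling

def rarity(iq):
    """z-score and 'one in N people' rarity implied by a normal model of IQ."""
    z = (iq - MEAN) / SD
    tail = norm.sf(z)  # P(score >= iq) under the normal model
    return z, 1 / tail

print(rarity(120))  # z ~ 1.33, roughly 1 in 11 people
print(rarity(250))  # z = 10, roughly 1 in 10^23, far more people than have ever lived
```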

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-09-17T17:05:54.272Z · LW(p) · GW(p)

If you understand the point there's no reason to make a comment like this except as an attempt to show off. Changing "250 IQ" to "+10 sd out from the mean intelligence" only serves to make the original point less accessible to people not steeped in psychometrics.

Replies from: TheOtherDave, shminux
comment by TheOtherDave · 2013-09-17T18:24:56.403Z · LW(p) · GW(p)

You don't have to be steeped in psychometrics to understand what a standard deviation is.
And if we're going to talk about intelligence at all, it is often helpful to keep in mind the difference between IQ and intelligence.

comment by Shmi (shminux) · 2013-09-17T18:01:19.533Z · LW(p) · GW(p)

If you understand the point there's no reason to make a comment like this except as an attempt to show off

Rebuke denied. While IQ 250 makes intuitive sense, as "smarter than we can possibly imagine", attaching numbers with two sigfigs to it is misleading at best. I don't know how one can tell IQ 250 from IQ 500.

comment by Shmi (shminux) · 2013-09-17T16:42:29.592Z · LW(p) · GW(p)

Carl Shulman and Nick Bostrom have a paper coming out arguing that embryo selection can eventually (or maybe even quickly) lead to IQ gains of 100 points or more.

I wonder how they propose to avoid the standard single-trait selective breeding issues, like accumulation of undesirable traits. For example, those geniuses might end up being sickly and psychotic.

Replies from: arundelo, None
comment by arundelo · 2013-09-17T17:01:03.862Z · LW(p) · GW(p)

It seems to me that this would not be a problem with iterated embryo selection, but I might be wrong.

See also Yvain's "modal human" post.

comment by [deleted] · 2013-09-17T16:46:17.255Z · LW(p) · GW(p)

Would it matter? C.f. goldmage.

comment by lukeprog · 2013-09-16T23:07:04.630Z · LW(p) · GW(p)

Note also that Roman co-authored 3 of the papers on MIRI's publications page.

Replies from: shminux
comment by Shmi (shminux) · 2013-09-16T23:35:09.477Z · LW(p) · GW(p)

His paper http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf seriously discusses ways to confine a potentially hostile superintelligence, a feat MIRI seems to consider hopeless. Did you guys have a good chat about it?

Replies from: CarlShulman, lukeprog
comment by CarlShulman · 2013-09-17T00:10:07.587Z · LW(p) · GW(p)

I think most everyone at MIRI and FHI thinks boxing is a good thing, even if many would say not enough on its own. I don't think you will find many who think that open internet connections are a matter of indifference for AI developers working with powerful AGI.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-17T22:28:55.345Z · LW(p) · GW(p)

High-grade common sense (the sort you'd get by asking any specialist in computer security) says that you should design an AI which you would trust with an open Internet connection, then put it in the box you would use on an untrusted AI during development. (No, the AI will not be angered by this lack of trust and resent you. Thank you for asking.) I think it's safe to say that for basically everything in FAI strategy (I can't think of an exception right now) you can identify at least two things supporting any key point, such that either alone was designed to be sufficient independently of the other's failing, including things like "indirect normativity works" (you try to build in at least some human checks around this which would shut down any scary AI independently of your theory of indirect normativity being remotely correct, while also not trusting the humans to steer the AI because then the humans are your single point of failure).

Replies from: Gurkenglas
comment by Gurkenglas · 2013-09-18T07:44:19.049Z · LW(p) · GW(p)

But didn't you say that if you find that your FAI design needs to be patched up to resolve some issue, you should rather abandon it and look for another? This redundancy of security looks like a patch-up to me. As long as neither the AI's intent nor the boxing is proven to be secure, we are not ready to start development; once one is, we do not need the other.

Replies from: wedrifid
comment by wedrifid · 2013-09-18T12:35:54.144Z · LW(p) · GW(p)

Redundancy isn't a design failure or a 'patch'.

comment by lukeprog · 2013-09-16T23:43:08.139Z · LW(p) · GW(p)

See my interview with Roman here.

Replies from: shminux
comment by Shmi (shminux) · 2013-09-17T06:39:14.795Z · LW(p) · GW(p)

Thanks. Pretty depressing, though.

comment by Serendipity · 2013-08-04T10:39:45.688Z · LW(p) · GW(p)

Hi everyone, my name is Sara!

I am 21, live in Switzerland and study psychology. I am fascinated with the field of rationality and therefore wrote my Bachelor's thesis on why and how critical thinking should be taught in schools. I started out with the plan to get my degree in clinical and neuropsychology, but will now change to developmental psychology, since I was able to fascinate my supervising tutor and secure his full support. This will allow me to base my Master's project on the development and enhancement of critical thinking and rationality, too. Do you have any recommendations?

After my Master's degree I still intend to train as a therapist (for money reasons) or to go into research (pushing the experimental research on rationality), and to give a lot of money to the most effective charities around. I wonder whether, as a therapist, it would be smarter to concentrate on children or adults; both fields will be open to me after my university education (which will take about 2.5-3 more years). I speak German, Swiss German, Italian, French and English (and understand some more languages), which will give me some freedom in the choice of where to actually work in the future.

...but I'm not only looking for advice here. I'm (mainly) interested in educating myself (and possibly other people around me). In fact, I am part of a Swiss group that translates Less Wrong articles into German, making the content available to more people in our surroundings (Switzerland, Germany, Austria).

I've learned a lot from this community and it has strongly shaped who I have become. There's no way I'd want to go back to my even more biased past self :)

Indeed, I am looking forward to learning more!

Replies from: Tenoke
comment by Tenoke · 2013-08-04T11:55:19.881Z · LW(p) · GW(p)

Hello, Sara.

This will allow me to base my Master's project on the development and enhancement of critical thinking and rationality,

Do you have any specific ideas for this? Are you aiming at enhancing rationality in adults or children? I don't have specific recommendations, except perhaps people whose work is relevant; however, you would have encountered those around the site.

P.S. I am mainly commenting here because this is the second time I see you on the internet within the last 4 hours.

Replies from: Serendipity
comment by Serendipity · 2013-08-04T12:08:02.041Z · LW(p) · GW(p)

Hello, Tenoke.

I am aiming at enhancing rationality in children, but have indeed often had to fall back on research with older people. Until now I've been concentrating on the work of Stanovich, Facione, van Gelder and Twardy. Whose work do you think would be relevant, too?

Thank you for your answer!

Replies from: Tenoke
comment by Tenoke · 2013-08-04T21:53:06.095Z · LW(p) · GW(p)

Well, Kahneman (and Tversky) would be the most obvious example out of those not mentioned. Otherwise Dennett, Gilovich, Slovic, Pinker, Taleb and Thaler would be some examples of people whose work has varying degrees of relevance to the subject. Those are the people I can think of off the top of my head, but the best way to systematically find researchers of interest would be to look at the reverse citations of Kahneman and Tversky's work, or something of the sort.

Replies from: Serendipity
comment by Serendipity · 2013-08-05T16:11:53.513Z · LW(p) · GW(p)

Ah, how could I forget them! Biases and heuristics play a big role in my interest in critical thinking, of course. I'm a bit surprised: how come you included Dennett and Pinker? I know these two for work that's (very interesting but) mostly unrelated to the topic I'm addressing. I'm curious; it seems like I missed something important.

Replies from: Tenoke
comment by Tenoke · 2013-08-06T12:09:44.856Z · LW(p) · GW(p)

I was writing on auto-pilot, you are right that their work is significantly less relevant to the topic than the others'.

comment by Peteris · 2013-04-10T21:53:14.686Z · LW(p) · GW(p)

Hi,

I'm a final year Mathematics student at Cambridge coming from an IOI, IMO background. I've written software for a machine learning startup, a game dev startup and Google. I was recently interested in programming language theory esp. probabilistic and logic programming (some experiments here http://peteriserins.tumblr.com/archive).

I'm interested in many aspects of startups (including design) and hope to move into product management, management consulting or venture capital. I love trying to think rationally about business processes and have started to write about it at http://medium.com/@p_e .

I found out about LW from a friend and have since started reading the sequences. I hope to learn more about practical instrumental rationality; I am less interested in philosophy and the meta-theory. So far I've learned more about the practical application of mathematics from data science and consulting, but expect rationality to take it further and with more rigor.

Great meeting y'all

Replies from: Nisan
comment by Nisan · 2013-04-10T23:39:08.705Z · LW(p) · GW(p)

Welcome! You may want to consider participating in a CFAR workshop. I think it's 1000% as effective for learning instrumental rationality as reading Less Wrong. They're optimized for teaching practical skills, and they tend to attract entrepreneurs.

Also, I think you'd be a valuable addition to the community around CFAR, in addition to the online community around the Less Wrong website.

Replies from: beoShaffer
comment by beoShaffer · 2013-04-11T19:48:55.080Z · LW(p) · GW(p)

As someone who has done a CFAR workshop and a lot of online rationality stuff (including, but not limited to, reading ~90% of the sequences), I second this. I'll also add that I do think having a strong theoretical background going in enhances the practical training.

comment by aphyer · 2013-04-23T16:43:26.982Z · LW(p) · GW(p)

Hi, I'm Andrew, a college undergrad in computer science. I found this site through HPMOR a few years ago.

comment by nonplussed · 2013-04-07T18:50:22.971Z · LW(p) · GW(p)

Hi everyone, I'm Chris. I'm a physics PhD student from Melbourne, Australia. I came to rationalism slowly over the years by having excellent conversations with like-minded friends. I was raised Catholic and fully bought into the faith, but became an atheist in early high school when I realised that scientific explanations made more sense.

About a year ago I had a huge problem with the collapse postulate of quantum mechanics. It just didn't make sense and neither did anything anyone was telling me about it. This led me to discover that many worlds wasn't as crazy as it had been made out to be, and led me to this very community. My growth as a rationalist has made me distrust the consensus opinions of more and more groups, and realising that physicists could get something so wrong was the final nail in the coffin for my trust of the scientific establishment. Of course science is still the best way to figure things out, but as soon as opinions become politicised or tied to job prospects, I don't trust scientists as far as I can throw them. Related to this is my skepticism that climate change is a big deal.

I am frustrated more by the extent of unreason in educated circles than I am in uneducated circles, as people should know better. For example, utilitarian morality should be much more widespread in these circles than it is. But moral issues are often politicised, and you know what they say about politics here.

I'm pretty social and would love to meet more rationalist friends, but I have the perception that if I went to a meetup most people would be less extroverted than me, and it might not be much fun for me. Also, since I do physics and am into heavy metal, my social circles at the moment are like 95% male, and it seems pretty silly to invest effort in developing a new social group unless it does something about that number, which I'm pretty sure Less Wrong meetups will not. So I'm probably not going to look into this, even though I enjoy the community's writings online.

I do find the writing style here sometimes a bit dense and not self-contained (requiring a lot of past posts to make sense of). I find myself preferring the writing style of a rationalist blog like slatestarcodex (or its previous incarnation), and if the same issue is being discussed in two places I'll generally read it there instead, because I prefer the more casual writing style.

Replies from: ModusPonies, Nisan
comment by ModusPonies · 2013-04-12T14:33:32.387Z · LW(p) · GW(p)

I'm pretty social and would love to meet more rationalist friends, but I have the perception that if I went to a meetup most people would be less extroverted than me, and it might not be much fun for me.

My experience at meetups has been pretty social. After all, meetups select for people outgoing enough to go out of the house in the first place. I'd encourage you to go once, if there's a convenient meetup around. The value of information is high; if the meetup sucks, that costs one afternoon, but if it's good, you gain a new group of friends.

Replies from: nonplussed
comment by nonplussed · 2013-04-12T20:16:31.790Z · LW(p) · GW(p)

meetups select for people outgoing enough to go out of the house in the first place

Excellent point, I know that effect makes a huge difference in other contexts, so that resonates with me. Ok, well I'll give it a shot. There are no meetups near where I am in Germany at the moment, but I'll be back in Melbourne later in the year where there seems to be some regular stuff going on.

comment by Nisan · 2013-04-11T21:48:54.830Z · LW(p) · GW(p)

Welcome! What do you think of the Born probabilities?

Replies from: nonplussed
comment by nonplussed · 2013-04-12T20:34:25.148Z · LW(p) · GW(p)

I haven't gone through any of the supposed derivations, but I'm led to believe that the Born rule is convincingly derivable within many worlds. I have a book called "Many Worlds? Everett, quantum theory and reality", which contains such a derivation, I've been meaning to read it for a while and will get around to it some day. It claims:

An agent who arranges his preferences among various branching scenarios—quantum games—in accordance with certain principles of rationality, must act as if maximizing his expected utilities, as computed from the Born rule.

Which I think is a nice angle to view it from. At any rate, the Born rule is a fairly natural result to have, since the probabilities are simply given by the inner product of the wavefunction with itself (the squared magnitudes of its components), which is how you normally define the sizes of vectors in vector spaces. So I'm expecting the argument in the book to be related to the criteria that mathematicians use to define inner products, and how those criteria map to assumptions about the universe (i.e. no preferred spatial direction, that sort of thing). Maybe if I understand it I'll post something here about it for those who are interested; I'm yet to see a blog-style summary of where the Born rule comes from.

At any rate, it doesn't come from anywhere in the way we're taught quantum mechanics at uni; it's simply an axiom that one doesn't question. So any derivation, however assumption-laden and weak, would be an improvement over standard Copenhagen.
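For reference, here is a bare statement of the rule under discussion (a statement, not a derivation): if a normalized state is expanded in the eigenbasis of the measured observable, each outcome's probability is the squared magnitude of the corresponding amplitude.

```latex
|\psi\rangle = \sum_i c_i \, |i\rangle, \qquad
\Pr(i) = |c_i|^2 = |\langle i|\psi\rangle|^2, \qquad
\sum_i |c_i|^2 = \langle \psi|\psi\rangle = 1.
```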

comment by tanagrabeast · 2014-01-02T02:56:50.301Z · LW(p) · GW(p)

Greetings.

I'm a long-time singularitarian and (intermediate) rationalist looking to be a part of the conversation again. By day I am an English teacher in a suburban American high school. My students have been known to Google me. Rather than self-censor, I am using a pseudonym so that I will feel free to share my (anonymized) experiences as a rationalist high school teacher.

I internet-know a number of you in this community from the early years of the Singularity Institute. I fleetingly met a few of you in person once, perhaps. I used to write on singularity-related issues, and was a proud "sniper" of the SL4 mailing list for a time. For the last 6-7 years I've mostly dropped off the radar by letting "life" issues consume me, though I have continued to follow the work of the key actors from afar with interest. I allow myself some pride for any small positive impact I might have once had during a time of great leverage for donors and activists, while recognizing that far too much remains undone. (If you would like to confirm your suspicions of my identity, I would love to hear from you with a PM. I just don't want Google searches of my real name pulling up my LW activity.)

High school teaching has been a taxing path, along with parenting, and it has been all too easy to use these as excuses to neglect my more challenging (yet rewarding) interests. I let my inaction and guilt reinforce each other until I woke up one day, read HPMoR, and realized I had long-ago regressed into an NPC.

Screw that.

Other background tidbits: I'm one of those atheist ex-mormons that seem so plentiful on this page (since 2000ish). I'm a self-taught "hedge coder" who has successfully used inelegant-but-effective programming in the service of my career. I feel effective in public education, which is not without its rewards. But on some important levels teaching at an American public high school is also a bit like working security at Azkaban, and I'm not sure how many more years I'll be able to keep my patronus going.

I've been using GTD methodologies for the last eight years or so, which has been great for letting me keep my mind clear to work on important tasks at hand; however, my dearest personal goals (which involve writing, both fiction and non) live among some powerful Ugh Fields. If I had been reading LW more closely, I probably would've discovered the Pomodoro method a lot sooner. This is helping.

My thanks to all who share their insights and experiences on this forum.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-01-02T03:51:52.771Z · LW(p) · GW(p)

Welcome to Less Wrong!

Is your user name a reference to "Darmok"?

Replies from: tanagrabeast
comment by tanagrabeast · 2014-01-02T04:25:34.695Z · LW(p) · GW(p)

Yes. It's amazing how memorable people find that one episode. Props to the writers.

comment by Axion · 2013-07-03T03:05:50.811Z · LW(p) · GW(p)

Hi Less Wrong. I found a link to this site a year or so ago and have been lurking off and on since. However, I've self identified as a rationalist since around junior high school. My parents weren't religious and I was good at math and science, so it was natural to me to look to science and logic to solve everything. Many years later I realize that this is harder than I hoped.

Anyway, I've read many of the sequences and posts, generally agreeing and finding many interesting thoughts. It's fun reading about zombies and Newcomb's problem and the like.

I guess this sounds heretical, but I don't understand why Bayes theorem is placed on such a pedestal here. I understand Bayesian statistics, intuitively and also technically. Bayesian statistics is great for a lot of problems, but I don't see it as always superior to thinking inspired by the traditional scientific method. More specifically, I would say that coming up with a prior distribution and updating can easily be harder than the problem at hand.

I assume the point is that there is more to what is considered Bayesian thinking than Bayes theorem and Bayesian statistics, and I've reread some of the articles with the idea of trying to pin that down, but I've found that difficult. The closest I've come is that examining what your priors are helps you to keep an open mind.

Replies from: Viliam_Bur, Vaniver, jsteinhardt, Jiro
comment by Viliam_Bur · 2013-09-18T08:43:36.468Z · LW(p) · GW(p)

Bayes' theorem is just one of many mathematical equations, like, for example, the Pythagorean theorem. There is inherently nothing magical about it.

It just happens to explain one problem with the current scientific publishing process: neglecting base rates. Which sometimes seems like this: "I designed an experiment that would prove a false hypothesis only with probability p = 0.05. My experiment has succeeded. Please publish my paper in your journal!"

(I guess I am exaggerating a bit here, but many people 'doing science' would not understand immediately what is wrong with this. And that would be those who even bother to calculate the p-value. Not everyone who is employed as a scientist is necessarily good at math. Many people get paid for doing bad science.)

This kind of thinking has the following problem: Even if you invent a hundred completely stupid hypotheses, and you design experiments that would prove a false hypothesis only with p = 0.05, then on average about five of them will be proved by the experiment. If you show someone else all hundred experiments together, they may understand what is wrong. But you are more likely to send only the five successful ones to the journal, aren't you? -- But how exactly is the journal supposed to react to this? Should they ask: "Did you do many other experiments, even ones completely irrelevant to this specific hypothesis? Because, you know, that somehow undermines the credibility of this one."
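To make the counting concrete, here is a minimal simulation sketch (assuming Python with numpy and scipy; the setup and numbers are purely illustrative): a hundred experiments on hypotheses that are all false, counting how many clear the p = 0.05 bar anyway.

```python
# Sketch: 100 experiments where the null hypothesis is true in every case.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
significant = 0
for _ in range(100):
    # Both groups are drawn from the same distribution: there is no real effect.
    control = rng.normal(size=30)
    treated = rng.normal(size=30)
    _, p_value = stats.ttest_ind(control, treated)
    if p_value < 0.05:
        significant += 1

print(significant)  # typically around 5 "publishable" results out of 100
```

Only the handful of lucky runs would reach the journal; the other ninety-odd quietly disappear.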

The current scientific publishing process has a bias. Bayes' theorem explains it. We care about science, and we care about science being done correctly.

Replies from: Lumifer
comment by Lumifer · 2013-09-18T19:29:20.150Z · LW(p) · GW(p)

It just happens to explain one problem with the current scientific publishing process: neglecting base rates. Which sometimes seems like this: "I designed an experiment that would prove a false hypothesis only with probability p = 0.05. My experiment has succeeded. Please publish my paper in your journal!"

That's not neglecting base rates, that's called selection bias combined with incentives to publish. Bayes theorem isn't going to help you with this.

http://xkcd.com/882/

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-19T07:58:07.442Z · LW(p) · GW(p)

Uhm, it's similar, but not the same.

If I understand it correctly, selection bias is when 20 researchers run an experiment with green jelly beans, 19 of them don't find a significant correlation, 1 of them finds it... and only that one publishes, while the other 19 don't. The essence is that we had 19 pieces of evidence against the green jelly beans and only 1 piece of evidence for them, but we don't see those 19 pieces, because they are not published. Selection = "there is X and Y, but we don't see Y, because it was filtered out by the process that gives us information".

But imagine that you are the first researcher ever who has researched the jelly beans. And you only did one experiment. And it happened to succeed. Where is the selection here? (Perhaps selection across Everett branches or Tegmark universes. But we can't blame the scientific publishing process for not giving us information from the parallel universes, can we?)

In this case, base rate neglect means ignoring the fact that "if you take a random thing, the probability that this specific thing causes acne is very low". Therefore, even if the experiment shows a connection with p = 0.05, it's still more likely that the result just happened randomly.

The proper reasoning could be something like this (all numbers pulled out of a hat) -- we already have pretty strong evidence that acne is caused by food; let's say there is a 50% probability for this. With enough specificity (giving each fruit a different category, etc.), there are maybe 2000 categories of food. It is possible that more than one of them causes acne, and our probability distribution for that is... something. Considering all this information, we estimate a prior probability of, let's say, 0.0004 that a random food causes acne. -- Which means that if the correlation is significant at the level p = 0.05, that per se means almost nothing. (Here one could use Bayes' theorem to calculate that the successful p = 0.05 experiment points to the true cause of acne with a probability of roughly 1%.) We would need significance at the level p = 0.0004 just to get a 50% chance of being right. How can we do that? We should use a much larger sample, or we should repeat the experiment many times, record all the successes and failures, and do a meta-analysis.
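A sketch of that arithmetic with the same made-up numbers (and the generous assumption that a real effect is always detected):

```python
# Prior that a randomly chosen food causes acne, and a test significant at p = 0.05.
prior = 0.0004           # pulled out of the hat, as above
p_sig_if_false = 0.05    # chance of a "significant" result when there is no effect
p_sig_if_true = 1.0      # generous assumption: a real effect is always detected

posterior = (prior * p_sig_if_true) / (
    prior * p_sig_if_true + (1 - prior) * p_sig_if_false
)
print(posterior)  # about 0.008, i.e. the "roughly 1%" mentioned above
```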

Replies from: Lumifer
comment by Lumifer · 2013-09-19T16:19:57.956Z · LW(p) · GW(p)

But imagine that you are the first researcher ever who has researched the jelly beans. And you only did one experiment. And it happened to succeed. Where is the selection here?

That's a different case -- you have no selection bias here, but your conclusions are still uncertain -- if you pick p=0.05 as your threshold, you're clearly accepting that there is a 5% chance of a Type I error: the green jelly beans did nothing, but the noise happened to be such that you interpreted it as conclusive evidence in favor of your hypothesis.

But that all is fine -- the readers of scientific papers are expected to understand that results significant to p=0.05 will be wrong around 5% of the times, more or less (not exactly because the usual test measures P(D|H), the probability of the observed data given the (null) hypothesis while you really want P(H|D), the probability of the hypothesis given the data).

base rate neglect means ignoring the fact that "if you take a random thing, the probability that this specific thing causes acne is very low"

People rarely take entirely random things and test them for causal connection to acne. Notice how you had to do a great deal of handwaving in establishing your prior (aka the base rate).

As an exercise, try to be specific. For example, let's say I want to check if the tincture made from the bark of a certain tree helps with acne. How would I go about calculating my base rate / prior? Can you walk me through an estimation which will end with a specific number?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-19T19:02:12.072Z · LW(p) · GW(p)

the readers of scientific papers are expected to understand that results significant to p=0.05 will be wrong around 5% of the times, more or less

And this is the base rate neglect. It's not "results significant to p=0.05 will be wrong about 5% of the time". It's "wrong results will be significant to p=0.05 about 5% of the time". And most people will confuse these two things.

It's like when people confuse "A => B" with "B => A", only this time it is "A => B (p=0.05)" versus "B => A (p=0.05)". It is "if wrong, then significant in 5% of cases". It is not "if significant, then wrong in 5% of cases".

Notice how you had to do a great deal of handwaving in establishing your prior (aka the base rate).

Yes, you are right. Establishing the prior is pretty difficult, perhaps impossible. (But that does not make "A => B" equal to "B => A".) Probably the reasonable thing to do would be simply to impose strict limits in areas where many results were proved wrong.

Replies from: Lumifer
comment by Lumifer · 2013-09-19T19:13:43.191Z · LW(p) · GW(p)

Probably the reasonable thing to do would be simply to impose strict limits in areas where many results were proved wrong.

Um, what "strict limits" are you talking about, what will they look like, and who will be doing the imposing?

To get back to my example, let's say I'm running experiments to check if the tincture made from the bark of a certain tree helps with acne -- what strict limits would you like?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-19T20:15:52.167Z · LW(p) · GW(p)

what "strict limits" are you talking about

p = 0.001, and if at the end of the year too many results fail to replicate, keep decreasing the threshold. (Let's say that "fail to replicate" in this context means that the replication attempt cannot confirm the result even at p = 0.05 -- we don't want to make replications too expensive, just a simple sanity check.)

let's say I'm running experiments to check if the tincture made from the bark of a certain tree helps with acne -- what strict limits would you like?

a long answer would involve a lot of handwaving again (it depends on why you believe the bark is helpful; in other words, what other evidence you already have)

a short answer: for example, p = 0.001

Replies from: Lumifer
comment by Lumifer · 2013-09-19T20:59:19.356Z · LW(p) · GW(p)

p = 0.001

Well, and what's magical about this particular number? Why not p=0.01? Why not p=0.0001? Confidence thresholds are arbitrary; do you have a compelling argument why any particular one is better than the rest?

Besides, you're forgetting the costs. Assume that the reported p-values are true (and not the result of selection bias, etc.). Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct and about 5 will turn out to be false. By your strict criteria you're rejecting all of them -- you're rejecting 95 correct papers. There is a cost to that, is there not?

Replies from: Viliam_Bur, Nornagest, nshepperd
comment by Viliam_Bur · 2013-09-20T06:52:02.602Z · LW(p) · GW(p)

Lumifer, please update that at this moment you don't grok the difference between "A => B (p=0.05)" and "B => A (p = 0.05)", which is why you don't understand what p-value really means, which is why you don't understand the difference between selection bias and base rate neglect, which is probably why the emphasis on using Bayes theorem in scientific process does not make sense to you. You made a mistake, that happens to all of us. Just stop it already, please.

And don't feel bad about it. Until recently I didn't understand it either, and I had a gold medal from the International Mathematical Olympiad. Somehow it is not explained correctly at most schools, perhaps because the teachers don't get it themselves, or maybe they just underestimate the difficulty of proper understanding and the high chance of getting it wrong. So please don't contribute to the confusion.

Imagine that there are 1000 possible hypotheses, among which 999 are wrong, and 1 is correct. (That's just a random example to illustrate the concept. The numbers in real life can be different.) You have an experiment that says "yes" to 5% of the wrong hypotheses (this is what p=0.05 means), and also to the correct hypothesis. So at the end, you have 50 wrong hypotheses and 1 correct hypothesis confirmed by the experiment. So in the journal, 98% of the published articles would be wrong, not 5%. It is "wrong => confirmed (p=0.05)", not "confirmed => wrong (p=0.05)".
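The same counting, written out as a toy sketch (the 1000/999/1 split is the made-up example above, and the true hypothesis is assumed to pass the test):

```python
n_true, n_false = 1, 999
false_confirmed = 0.05 * n_false   # ~50 wrong hypotheses pass the test
true_confirmed = 1.0 * n_true      # assume the single correct one passes too

p_confirmed_given_wrong = false_confirmed / n_false
p_wrong_given_confirmed = false_confirmed / (false_confirmed + true_confirmed)

print(p_confirmed_given_wrong)  # 0.05  -- "wrong => confirmed (p=0.05)"
print(p_wrong_given_confirmed)  # ~0.98 -- "confirmed => wrong", a very different number
```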

Replies from: Lumifer, Vaniver
comment by Lumifer · 2013-09-24T15:49:56.797Z · LW(p) · GW(p)

LOL. Yeah, yeah, mea culpa, I had a brain fart and expressed myself very poorly.

I do understand what p-value really means. The issue was that I had in mind a specific scenario (where in effect you're trying to see if the difference in means between two groups is significant) but neglected to mention it in the post :-)

comment by Vaniver · 2013-09-20T17:33:30.347Z · LW(p) · GW(p)

Lumifer, please update that at this moment you don't grok the difference between "A => B (p=0.05)" and "B => A (p = 0.05)", which is why you don't understand what p-value really means, which is why you don't understand the difference between selection bias and base rate neglect, which is probably why the emphasis on using Bayes theorem in scientific process does not make sense to you. You made a mistake, that happens to all of us. Just stop it already, please.

I feel like this could use a bit longer explanation, especially since I think you're not hearing Lumifer's point, so let me give it a shot. (I'm not sure I see a meaningful difference between base rate neglect and selection bias in this circumstance.)

The word "grok" in Viliam_Bur's comment is really important. This part of the grandparent is true:

Assume that the reported p-values are true (and not the result of selection bias, etc.). Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct and about 5 will turn out to be false.

But it's like saying "well, assume the diagnosis is correct. Then the treatment will make the patient better with high probability." While true, it's totally out of touch with reality -- we can't assume the diagnosis is correct, and a huge part of being a doctor is responding correctly to that uncertainty.

Earlier, Lumifer said this, which is an almost correct explanation of using Bayes in this situation:

But that all is fine -- the readers of scientific papers are expected to understand that results significant to p=0.05 will be wrong around 5% of the times, more or less (not exactly because the usual test measures P(D|H), the probability of the observed data given the (null) hypothesis while you really want P(H|D), the probability of the hypothesis given the data).

The part that makes it the "almost" is the "5% of the times, more or less." This implies that it's centered around 5%, with random chance determining what this instance is. But selection bias means it will almost certainly be more, and generally much more. In fields that study phenomena that don't exist, 100% of the papers published will be of false results that were significant by chance. In many real fields, rates of failure to replicate are around 30%. Describing 30% as "5%, more or less" seems odd, to say the least.

But the proposal to reduce the p value doesn't solve the underlying problem (which was Lumifer's response). If we set the p value threshold lower, at .01 or .001 or wherever, we reduce the risk of false positives at the cost of increasing the risk of false negatives. A study design which needs to determine an effect at the .001 level is much more expensive than a study design which needs to determine an effect at the .05 level, and so we will have far fewer studies attempted, and far fewer published studies.
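To put a rough number on that cost, here is a back-of-the-envelope sketch (normal approximation for a two-sample comparison of means; the effect size and power are made-up illustrative values):

```python
# Approximate sample size per group needed to detect a standardized effect
# at a given significance threshold with a given power.
from scipy.stats import norm

def n_per_group(effect_size, alpha, power):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

for alpha in (0.05, 0.01, 0.001):
    print(alpha, round(n_per_group(effect_size=0.3, alpha=alpha, power=0.8)))
# roughly 175, 260, and 380 subjects per group: the bar keeps rising
```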

Better to drop p entirely. Notice that stricter p thresholds go in the opposite direction as the publication of negative results, which is the real solution to the problem of selection bias. By calling for stricter p thresholds, you implicitly assume that p is a worthwhile metric, when what we really want is publication of negative results and more replications.

Replies from: Lumifer
comment by Lumifer · 2013-09-24T16:02:07.239Z · LW(p) · GW(p)

But it's like saying "well, assume the diagnosis is correct. Then the treatment will make the patient better with high probability." While true, it's totally out of touch with reality

My grandparent post was stupid, but what I had in mind was basically a stage-2 (or -3) drug trial situation. You have declared (at least to the FDA) that you're running a trial, so selection bias does not apply at this stage. You have two groups, one receives the experimental drug, one receives a placebo. Assume a double-blind randomized scenario and assume there is a measurable metric of improvement at the end of the trial.

After the trial you have two groups with two empirical distributions of the metric of choice. The question is how confident you are that these two distributions are different.

Better to drop p entirely.

Well, as usual it's complicated. Yes, the p-test is suboptimal in most situations where it's used in reality. However, it fulfils a need, and if you drop the test entirely you need a replacement, because the need won't go away.

comment by Nornagest · 2013-09-20T04:38:06.299Z · LW(p) · GW(p)

Assume that the reported p-values are true (and not the result of selection bias, etc.). Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct...

That's not how p-values work. p=0.05 doesn't mean that the hypothesis is 95% likely to be correct, even in principle; it means that there's a 5% chance of seeing a correlation at least that strong if the null hypothesis is true. Pull a hundred independent data sets and we'd normally expect to find a p=0.05 correlation or better in five or so of them, no matter whether we're testing, say, an association of cancer risk with smoking or with overuse of the word "muskellunge".

This distinction's especially important to keep in mind in an environment where running replications is relatively low-status or where negative results tend to be quietly shelved -- both of which, as it happens, hold true in large chunks of academia. But even if this weren't the case, we'd normally expect replication rates to be less than one minus the claimed p-value, simply because there are many more promising ideas than true ones and some of those will turn up false positives.

comment by nshepperd · 2013-09-20T02:43:22.018Z · LW(p) · GW(p)

Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct and about 5 will turn out to be false.

No, they won't. You're committing base rate neglect. It's entirely possible for people to publish 2000 papers in a field where there's no hope of finding a true result, and get 100 false results (with p < 0.05).

comment by Vaniver · 2013-07-03T03:55:51.730Z · LW(p) · GW(p)

I guess this sounds heretical, but I don't understand why Bayes theorem is placed on such a pedestal here. I understand Bayesian statistics, intuitively and also technically. Bayesian statistics is great for a lot of problems, but I don't see it as always superior to thinking inspired by the traditional scientific method.

I know a few answers to this question, and I'm sure there are others. (As an aside, these foundational questions are, in my opinion, really important to ask and answer.)

  1. What separates scientific thought and mysticism is that scientists are okay with mystery. If you can stand to not know what something is, to be confused, then after careful observation and thought you might have a better idea of what it is and have a bit more clarity. Bayes is the quantitative heart of the qualitative approach of tracking many hypotheses and checking how concordant they are with reality, and thus should feature heavily in a modern epistemic approach. The more precisely and accurately you can deal with uncertainty, the better off you are in an uncertain world.
  2. What separates Bayes and the "traditional scientific method" (using scare quotes to signify that I'm highlighting a negative impression of it) is that the TSM is a method for avoiding bad beliefs but Bayes is a method for finding the best available beliefs. In many uncertain situations, you can use Bayes but you can't use the TSM (or it would be too costly to do so), but the TSM doesn't give any predictions in those cases!
  3. Use of Bayes focuses attention on base rates, alternate hypotheses, and likelihood ratios, which people often ignore (replacing the first with maxent, the second with yes/no thinking, and the third with likelihoods).
  4. I honestly don't think the quantitative aspect of priors and updating is that important, compared to the search for a 'complete' hypothesis set and the search for cheap experiments that have high likelihood ratios (little bets).

I think that the qualitative side of Bayes is super important but don't think we've found a good way to communicate it yet. That's an active area of research, though, and in particular I'd love to hear your thoughts on those four answers.

Replies from: Lumifer, Axion
comment by Lumifer · 2013-09-18T19:31:03.300Z · LW(p) · GW(p)

I think that the qualitative side of Bayes is super important

What is the qualitative side of Bayes?

Replies from: Vaniver
comment by Vaniver · 2013-09-18T19:43:41.878Z · LW(p) · GW(p)

Unfortunately, the end of that sentence is still true:

but [I] don't think we've found a good way to communicate it yet.

I think that What Bayesianism Taught Me is a good discussion on the subject, and my comment there explains some of the components I think are part of qualitative Bayes.

I think that a lot of qualitative Bayes is incorporating the insights of the Bayesian approach into your System 1 thinking (i.e. habits on the 5 second level).

Replies from: Lumifer
comment by Lumifer · 2013-09-18T20:20:20.772Z · LW(p) · GW(p)

Well, yes, but most of the things there are just useful ways to think about probabilities and uncertainty, proper habits, things to check, etc. Why Bayes? He's not a saint whose name is needed to bless a collection of good statistical practices.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-09-18T20:54:03.904Z · LW(p) · GW(p)

It's more or less the same reason people call a variety of essentialist positions 'platonism' or 'aristotelianism'. Those aren't the only thinkers to have had views in this neighborhood, but they predated or helped inspire most of the others, and the concepts have become pretty firmly glued together. Similarly, the phrases 'Bayes' theorem' and 'Bayesian interpretation of probability' (whence, jointly, the idea of Bayesian inference) have firmly cemented the name Bayes to the idea of quantifying psychological uncertainty and correctly updating on the evidence. The Bayesian interpretation is what links these theorems to actual practice.

Bayes himself may not have been a 'Bayesian' in the modern sense, just as Plato wasn't a 'platonist' as most people use the term today. But the names have stuck, and 'Laplacian' or 'Ramseyan' wouldn't have quite the same ring.

Replies from: Vaniver, Lumifer
comment by Vaniver · 2013-09-18T21:19:41.131Z · LW(p) · GW(p)

But the names have stuck, and 'Laplacian' or 'Ramseyan' wouldn't have quite the same ring.

I like Laplacian as a name better, but it's already a thing.

comment by Lumifer · 2013-09-18T21:05:37.719Z · LW(p) · GW(p)

If I were to pretend that I'm a mainstream frequentist and consider "quantifying psychological uncertainty" to be subjective mumbo-jumbo with no place anywhere near real science :-D I would NOT have serious disagreements with e.g. Vaniver's list. Sure, I would quibble about accents, importances, and priorities, but there's nothing there that would be unacceptable from the mainstream point of view.

Replies from: RobbBB, Vaniver
comment by Rob Bensinger (RobbBB) · 2013-09-18T22:44:19.337Z · LW(p) · GW(p)

My biggest concern with the label 'Bayesianism' isn't that it's named after the Reverend, nor that it's too mainstream. It's that it's really ambiguous.

For example, when Yvain speaks of philosophical Bayesianism, he means something extremely modest -- the idea that we can successfully model the world without certainty. This view he contrasts, not with frequentism, but with Aristotelianism ('we need certainty to successfully model the world, but luckily we have certainty') and Anton-Wilsonism ('we need certainty to successfully model the world, but we lack certainty'). Frequentism isn't this view's foil, and this philosophical Bayesianism doesn't have any respectable rivals, though it certainly sees plenty of assaults from confused philosophers, anthropologists, and poets.

If frequentism and Bayesianism are just two ways of defining a word, then there's no substantive disagreement between them. Likewise, if they're just two different ways of doing statistics, then it's not clear that any philosophical disagreement is at work; I might not do Bayesian statistics because I lack skill with R, or because I've never heard about it, or because it's not the norm in my department.

There's a substantive disagreement if Bayesianism means 'it would be useful to use more Bayesian statistics in science', and if frequentism means 'no it wouldn't!'. But this methodological Bayesianism is distinct from Yvain's philosophical Bayesianism, and both of those are distinct from what we might call 'Bayesian rationalism', the suite of mantras, heuristics, and exercises rationalists use to improve their probabilistic reasoning. (Or the community that deems such practices useful.) Viewing the latter as an ideology or philosophy is probably a bad idea, since the question of which of these tricks are useful should be relatively easy to answer empirically.

Replies from: Randaly, Jayson_Virissimo
comment by Randaly · 2013-09-18T23:53:54.700Z · LW(p) · GW(p)

Frequentism isn't this view's foil

Err, actually, yes it is. The frequentist interpretation of probability makes the claim that probability theory can only be used in situations involving large numbers of repeatable trials, or selection from a large population. William Feller:

There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines "out of infinitely many worlds one is selected at random..." Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.

Or to quote from the essay that coined the term frequentist:

The essential distinction between the frequentists and the [Bayesians] is, I think, that the former, in an effort to avoid anything savouring of matters of opinion, seek to define probability in terms of the objective properties of a population, real or hypothetical, whereas the latter do not.

Frequentism is only relevant to epistemological debates in a negative sense: unlike Aristotelianism and Anton-Wilsonism, which both present their own theories of epistemology, frequentism's relevance is almost only in claiming that Bayesianism is wrong. (Frequentism separately presents much more complicated and less obviously wrong claims within statistics and probability; these are not relevant, given that frequentism's sole relevance to epistemology is its claim that no theory of statistics and probability could be a suitable basis for an epistemology, since there are many events they simply don't apply to.)

(I agree that it would be useful to separate out the three versions of Bayesianism, whose claims, while related, do not need to all be true or false at the same time. However, all three are substantively opposed to one or both of the views labelled frequentist.)

Replies from: satt, RobbBB, Lumifer
comment by satt · 2013-09-19T02:01:13.670Z · LW(p) · GW(p)

Err, actually, yes it is. The frequentist interpretation of probability makes the claim that probability theory can only be used in situations involving large numbers of repeatable trials, or selection from a large population.

Depends which frequentist you ask. From Aris Spanos's "A frequentist interpretation of probability for model-based inductive inference":

It is argued that the proposed frequentist interpretation, not only achieves this objective, but contrary to the conventional wisdom, the charges of ‘circularity’, its inability to assign probabilities to ‘single events’, and its reliance on ‘random samples’ are shown to be unfounded.

and

The error statistical perspective identifies the probability of an event A—viewed in the context of a statistical model M_θ(x), x ∈ R^n_X—with the limit of its relative frequency of occurrence by invoking the SLLN. This frequentist interpretation is defended against the charges of [i] ‘circularity’ and [ii] inability to assign ‘single event’ probabilities, by showing that in model-based induction the defining characteristic of the long-run metaphor is neither its temporal nor its physical dimension, but its repeatability (in principle) which renders it operational in practice.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-09-19T15:48:02.566Z · LW(p) · GW(p)

Depends which frequentist you ask. From Aris Spanos's "A frequentist interpretation of probability for model-based inductive inference":

For those who can't access that through the paywall (I can), his presentation slides for it are here. I would hate to have been in the audience for the presentation, but the upside of that is that they pretty much make sense on their own, being just a compressed version of the paper.

While looking for those, I also found "Frequentists in Exile", which is Deborah Mayo's frequentist statistics blog.

I am not enough of a statistician to make any quick assessment of these, but they look like useful reading for anyone thinking about the foundations of uncertain inference.

comment by Rob Bensinger (RobbBB) · 2013-09-19T00:12:29.583Z · LW(p) · GW(p)

The frequentist interpretation of probability makes the claim that probability theory can only be used in situations involving large numbers of repeatable trials

I don't understand what this "probability theory can only be used..." claim means. Are they saying that if you try to use probability theory to model anything else, your pencil will catch fire? Are they saying that if you model beliefs probabilistically, Math breaks? I need this claim to be unpacked. What do frequentists think is true about non-linguistic reality, that Bayesians deny?

Replies from: Desrtopa, nshepperd
comment by Desrtopa · 2013-09-19T01:53:07.154Z · LW(p) · GW(p)

I don't understand what this "probability theory can only be used..." claim means. Are they saying that if you try to use probability theory to model anything else, your pencil will catch fire? Are they saying that if you model beliefs probabilistically, Math breaks?

I think they would be most likely to describe it as a category error. If you try to use probability theory outside the constraints within which they consider it applicable, they'd attest that you'd produce no meaningful knowledge and accomplish nothing but confusing yourself.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-09-19T05:02:49.559Z · LW(p) · GW(p)

Can you walk me through where this error arises? Suppose I have a function whose arguments are the elements of a set S, whose values are real numbers between 0 and 1, and whose values sum to 1. Is the idea that if I treat anything in the physical world other than objects' or events' memberships in physical sequences of events or heaps of objects as modeling such a set, the conclusions I draw will be useless noise? Or is there something about the word 'probability' that makes special errors occur independently of the formal features of sample spaces?
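A minimal sketch of the formal object described above (the propositions and numbers are hypothetical, and nothing in it refers to repeatable trials):

```python
# A function from a finite set S to [0, 1] whose values sum to 1.
beliefs = {
    "the tincture helps with acne": 0.03,
    "the tincture does nothing": 0.95,
    "the tincture makes acne worse": 0.02,
}
assert all(0.0 <= p <= 1.0 for p in beliefs.values())
assert abs(sum(beliefs.values()) - 1.0) < 1e-9
```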

Replies from: Desrtopa
comment by Desrtopa · 2013-09-19T13:44:19.815Z · LW(p) · GW(p)

As best I can parse the question, I think the former option better describes the position.

comment by nshepperd · 2013-09-19T02:18:58.346Z · LW(p) · GW(p)

IIRC a common claim was that modeling beliefs at all is "subjective" and therefore unscientific.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-09-19T05:00:12.723Z · LW(p) · GW(p)

Do you have any links to this argument? I'm having a hard time seeing why any mainstream scientist who thinks beliefs exist at all would think they're ineffable....

Replies from: nshepperd
comment by nshepperd · 2013-09-21T12:52:31.228Z · LW(p) · GW(p)

Hmm, I thought I had read it in Jaynes' PT:TLoS, but I can't find it now. So take the above with a grain of salt, I guess.

comment by Lumifer · 2013-09-19T16:53:47.384Z · LW(p) · GW(p)

The frequentist interpretation of probability makes the claim that probability theory can only be used in situations involving large numbers of repeatable trials, or selection from a large population.

Yes, but frequentists have zero problems with hypothetical trials or populations.

Do note that for most well-specified statistical problems the Bayesians and the frequentists will come to the same conclusions. Differently expressed, likely, but not contradicting each other.

comment by Jayson_Virissimo · 2013-09-19T02:08:32.600Z · LW(p) · GW(p)

For example, when Yvain speaks of philosophical Bayesianism, he means something extremely modest...

Yes, it is my understanding that epistemologists usually call the set of ideas Yvain is referring to "probabilism", and indeed, it is far more vague and modest than what they call Bayesianism (which is more vague and modest still than the subjectively-objective Bayesianism that is affirmed often around these parts).

If frequentism and Bayesianism are just two ways of defining a word, then there's no substantive disagreement between them. Likewise, if they're just two different ways of doing statistics, then it's not clear that any philosophical disagreement is at work; I might not do Bayesian statistics because I lack skill with R, or because I've never heard about it, or because it's not the norm in my department.

BTW, I think this is precisely what Carnap was on about with his distinction between probability-1 and probability-2, neither of which did he think we should adopt to the exclusion of the other.

comment by Vaniver · 2013-09-18T21:43:49.832Z · LW(p) · GW(p)

I would NOT have serious disagreements with e.g. Vaniver's list.

I think they would have significant practical disagreement with #3, given the widespread use of NHST, but clever frequentists are as quick as anyone else to point out that NHST doesn't actually do what its users want it to do.

Sure, I would quibble about accents, importances, and priorities, but there's nothing there that would be unacceptable from the mainstream point of view.

Hence the importance of the qualifier 'qualitative'; it seems to me that accents, importances, and priorities are worth discussing, especially if you're interested in changing System 1 thinking instead of System 2 thinking. The mainstream frequentist thinks that base rate neglect is a mistake, but the Bayesian both thinks that base rate neglect is a mistake and has organized his language to make that mistake obvious when it occurs. If you take revealed preferences seriously, it looks like the frequentist says base rate neglect is a mistake but the Bayesian lives that base rate neglect is a mistake.

Now, why Bayes specifically? I would be happy to point to Laplace instead of Bayes, personally, since Laplace seems to have been way smarter and a superior rationalist. But the trouble with naming methods of "thinking correctly" is that everyone wants to name their method "thinking correctly," and so you rapidly trip over each other. "Rationalism," for example, refers to a particular philosophical position which is very different from the modal position here at LW. Bayes is useful as a marker, but it is not necessary to come to those insights by way of Bayes.

(I will also note that not disagreeing with something and discovering something are very different thresholds. If someone has a perspective which allows them to generate novel, correct insights, that perspective is much more powerful than one which merely serves to verify that insights are correct.)

Replies from: Lumifer
comment by Lumifer · 2013-09-19T02:12:33.642Z · LW(p) · GW(p)

...but clever frequentists

Yeah, I said if I were to pretend to be a frequentist -- but that didn't involve suddenly becoming dumb :-)

it seems to me that accents, importances, and priorities are worth discussing

I agree, but at this point context starts to matter a great deal. Are we talking about decision-making in regular life? Like, deciding which major to pick, who to date, what job offer to take? Or are we talking about some explicitly statistical environment where you try to build models, fit them, evaluate them, do out-of-sample forecasting, all that kind of things?

I think I would argue that recognizing biases (Tversky/Kahneman style) and trying to correct for them -- avoiding them altogether seems too high a threshold -- is different from what people call Bayesian approaches. The Bayesian way of updating on the evidence is part of "thinking correctly", but there is much, much more than just that.

Replies from: Vaniver
comment by Vaniver · 2013-09-19T08:10:11.353Z · LW(p) · GW(p)

I think I would argue that recognizing biases (Tversky/Kahneman style) and trying to correct for them -- avoiding them altogether seems too high a threshold -- is different from what people call Bayesian approaches.

At least one (and I think several) of the biases identified by Tversky and Kahneman is "people do X, a Bayesian would do Y, thus people are wrong," so I think you're overstating the difference. (I don't know enough historical details to be sure, but I suspect Tversky and Kahneman might be an example of the Bayesian approach allowing someone to discover novel, correct insights.)

The Bayesian way of updating on the evidence is part of "thinking correctly", but there is much, much more than just that.

I agree, but it feels like we're disagreeing. It seems to me that a major Less Wrong project is "thinking correctly," and a major part of that project is "decision-making under uncertainty," and a major part of uncertainty is dealing with probabilities, and the Bayesian way of dealing with probabilities seems to be the best, especially if you want to use those probabilities for decision-making.

So it sounds to me like you're saying "we don't just need stats textbooks, we need Less Wrong." I agree; that's why I'm here as well as reading stats textbooks. But it also sounds to me like you're saying "why are you naming this Less Wrong stuff after a stats textbook?" The easy answer is that it's a historical accident, and it's too late to change it now. Another answer I like better is that much of the Less Wrong stuff comes from thinking about and taking seriously the stuff from the stats textbook, and so it makes sense to keep the name, even if we're moving to realms where the connection to stats isn't obvious.

Replies from: Lumifer
comment by Lumifer · 2013-09-19T16:39:03.178Z · LW(p) · GW(p)

Hm... Let me try to unpack my thinking, in particular my terminology which might not match exactly the usual LW conventions. I think of:

Bayes theorem as a simple, conventional, and entirely uncontroversial statistical procedure. If you ask a dyed-in-the-wool rabid frequentist whether Bayes' theorem is true he'll say "Yes, of course".

Bayesian statistics as an approach to statistics with three main features. First is the philosophical interpretation of (some) probability as subjective belief. Second is the focus on conditional probabilities. Third is the strong preference for full (posterior) distributions as answers instead of point estimates.

Cognitive biases (aka the Kahneman/Tversky stuff) as certain distortions in the way our wetware processes information about reality, as well as certain peculiarities in human decision-making. Yes, a lot of it it is concerned with dealing with uncertainty. Yes, there is some synergy with Bayesian statistics. No, I don't think this synergy is the defining factor here.

I understand that historically in the LW community Bayesian statistics and cognitive biases were intertwined. But apart from historical reasons, it seems to me these are two different things and the degree of their, um, interpenetration is much overstated on LW.

it sounds to me like you're saying "we don't just need stats textbooks, we need Less Wrong."

Well, we need it for which purpose? For real-life decision making? -- sure, but then no one is claiming that stats textbooks are sufficient for that.

much of the Less Wrong stuff comes from thinking about and taking seriously the stuff from the stats textbook

Some, not much. I can argue that much of LW stuff comes from thinking logically and following chains of reasoning to their conclusion -- or actually just comes from thinking at all instead of reacting instinctively / on the basis of a gut feeling or whatever.

I agree that thinking in probabilities is a very big step and it *is* tied to Bayesian statistics. But still it's just one step.

Replies from: Vaniver
comment by Vaniver · 2013-09-19T17:16:40.295Z · LW(p) · GW(p)

I agree with your terminology.

I can argue that much of LW stuff comes from thinking logically ... I agree that thinking in probabilities is a very big step

When contrasting LW stuff and mainstream rationality, I think the reliance on thinking in probabilities is a big part of the difference. ("Thinking logically," for the mainstream, seems to be mostly about logic of certainty.) When labeling, it makes sense to emphasize contrasting features. I don't think that's the only large difference, but I see an argument (which I don't fully endorse) that it's the root difference.

(For example, consider evolutionary psychology, a moderately large part of LW. This seems like a field of science particularly prone to uncertainty, where "but you can't prove X!" would often be a conversation-stopper. For the Bayesian, though, it makes sense to update in the direction of evo psych, even though it can't be proven, which is then beneficial to the extent that evo psych is useful.)

Replies from: Lumifer
comment by Lumifer · 2013-09-19T17:40:10.999Z · LW(p) · GW(p)

When contrasting LW stuff and mainstream rationality, I think the reliance on thinking in probabilities is a big part of the difference. ("Thinking logically," for the mainstream, seems to be mostly about logic of certainty.)

Yes, I think you're right.

For the Bayesian, though, it makes sense to update in the direction of evo psych, even though it can't be proven

Um, I'm not so sure about that. The main accusation against evolutionary psychology is that it's nothing but a bunch of just-so stories, aka unfalsifiable post-hoc narratives. And a Bayesian update should be on the basis of evidence, not on the basis of an unverifiable explanation.

Replies from: Vaniver
comment by Vaniver · 2013-09-19T19:39:27.231Z · LW(p) · GW(p)

The main accusation against evolutionary psychology is that it's nothing but a bunch of just-so stories, aka unfalsifiable post-hoc narratives.

It seems to me that if you think in terms of likelihoods, you look at a story and say "but the converse of this story has high enough likelihood that we can't rule it out!" whereas if you think in terms of likelihood ratios, you say "it seems that this story is weakly more plausible than its converse."

I'm thinking primarily of comments like this. I think it is a reasonable conclusion that anger seems to be a basic universal emotion because ancestors who had the 'right' level of anger reproduced more than those who didn't. Boris just notes that it could be the case that anger is a byproduct of something else, but doesn't note anything about the likelihood of anger being universal in a world where it is helpful (very high) and the likelihood of anger being universal in a world where it is neutral or unhelpful (very low). We can't rule out anger being spurious, but asking to rule that out is mistaken, I think, because the likelihood ratio is so significant. It doesn't make sense to bet against anger being reproductively useful in the ancestral environment (but I think it makes sense to assign a probability to that bet, even if it's not obvious how one would resolve it).

Replies from: Lumifer
comment by Lumifer · 2013-09-19T20:11:24.681Z · LW(p) · GW(p)

It seems to me that if you think in terms of likelihoods, you look at a story and say "but the converse of this story has high enough likelihood that we can't rule it out!" whereas if you think in terms of likelihood ratios, you say "it seems that this story is weakly more plausible than its converse."

I have several problems with this line of reasoning. First, I am unsure what it means for a story to be true. It's a story -- it arranges a set of facts in a pattern pleasing to the human brain. Not contradicting any known facts is a very low threshold (see Russell's teapot); to call something "true" I'll need more than that, and if a story makes no testable predictions I am not sure on what basis I should evaluate its truth, or what that would even mean.

Second, it seems to me that in such situations the likelihoods and so, necessarily, their ratios are very very fuzzy. My meta uncertainty -- uncertainty about probabilities -- is quite high. I might say "story A is weakly more plausible than story B" but my confidence in my judgment about plausibility is very low. This judgment might not be worth anything.

Third, likelihood ratios are good when you know you have a complete set of potential explanations. And you generally don't. For open-ended problems the explanation "something else" frequently looks like the more plausible one, but again, the meta uncertainty is very high -- not only do you not know how uncertain you are, you don't even know what you are uncertain about! Nassim Taleb's black swans are precisely the beasties that appear out of "something else" to bite you in the ass.

Replies from: Vaniver
comment by Vaniver · 2013-09-19T20:44:24.153Z · LW(p) · GW(p)

First, I am unsure what it means for a story to be true.

Ah, by that I generally mean something like "the causal network N with a particular factorization F is the underlying causal representation of reality," and so a particular experiment measures data and then we calculate "the aforementioned causal network would generate this data with probability P" for various hypothesized causal networks.

For situations where you can control at least one of the nodes, it's easy to see how you can generate data useful for this. For situations where you only have observational data (like the history of human evolution, mostly), then it's trickier to determine which causal network(s) is(are) best, but often still possible to learn quite a bit more about the underlying structure than is obvious at first glance.

So suppose we have lots of historical lives which are compressed down to two nodes, A which measures "anger" (which is integer-valued and non-negative, say) and C which measures "children" (which is also integer valued and non-negative). The story "anger is spurious" is the network where A and C don't have a link between them, and the story "anger is reproductively useful" is the network where A->C and there is some nonzero value a^* of A which maximizes the expected value of C. If we see a relationship between A and C in the data, it's possible that the relationship was generated by the "anger is spurious" network which said those variables were independent, but we can calculate the likelihoods and determine that it's very very low, especially as we accumulate more and more data.
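Here is a toy sketch of that comparison (simulated data and deliberately crude models, assuming numpy and scipy): under the "anger is spurious" network, C gets one Poisson rate for everyone; under the "A -> C" network, each anger level gets its own rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
A = rng.integers(0, 5, size=1000)        # "anger" scores
C = rng.poisson(lam=1.0 + 0.5 * A)       # "children" -- in this simulation C really depends on A

# Model 1: A and C independent; a single Poisson rate for C.
loglik_independent = stats.poisson.logpmf(C, mu=C.mean()).sum()

# Model 2: A -> C; a separate Poisson rate for each anger level.
loglik_dependent = sum(
    stats.poisson.logpmf(C[A == a], mu=C[A == a].mean()).sum()
    for a in np.unique(A)
)

print(loglik_dependent - loglik_independent)  # large and positive: the data favor A -> C
```

(The more flexible model will always fit at least as well, so a real comparison would penalize the extra parameters or use held-out data; the point is only that the likelihoods of the two stories can be computed and compared rather than argued about.)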

Third, likelihood ratios are good when you know you have a complete set of potential explanations. And you generally don't.

Sure. But even if you're only aware of two hypotheses, it's still useful to use the LR to determine which to prefer; the supremacy of a third hidden hypothesis can't swap the ordering of the two known hypotheses!

Nassim Taleb's black swans are precisely the beasties that appear out of "something else" to bite you in the ass.

Yes, reversal effects are always possible, but I think that putting too much weight on this argument leads to Anton-Wilsonism (certainty is necessary but impossible). I think we do often have a good idea of what our meta uncertainty looks like in a lot of cases, and that's generally enough to get the job done.

Replies from: Lumifer
comment by Lumifer · 2013-09-19T21:07:01.415Z · LW(p) · GW(p)

I have only glanced at Pearl's work, not read it carefully, so my understanding of causal networks is very limited. But I don't understand on the basis of which data you will construct the causal network for anger and children (and it's actually more complicated because there are important society-level effects). In what will you "see a relationship between A and C"? On the basis of what will you be calculating the likelihoods?

Replies from: Vaniver
comment by Vaniver · 2013-09-19T21:22:52.640Z · LW(p) · GW(p)

In what will you "see a relationship between A and C"? On the basis of what will you be calculating the likelihoods?

Ideally, you would have some record. I'm not an expert in evo psych, so I can't confidently say what sort of evidence they actually rely on. I was hoping more to express how I would interpret a story as a formal hypothesis.

I get the impression that a major technique in evolutionary psychology is making use of the selection effect due to natural selection: if you think that A is heritable, and that different values of A have different levels of reproductive usefulness, then in steady state the distribution of A in the population gives you information about the historic relationship between A and reproductive usefulness, without even measuring the relationship between A and C in this generation. So you can ask the question "what's the chance of seeing the cluster of human anger that we have if there's not a relationship between A and reproduction?" and get answers that are useful enough to focus most of your attention on the "anger is reproductively useful" hypothesis.

comment by Axion · 2013-07-06T03:16:16.201Z · LW(p) · GW(p)

I guess the distinction in my mind is that in a Bayesian approach one enumerates the various hypotheses ahead of time. This is in contrast to coming up with a single hypothesis and then adding in more refined versions based on results. There are trade-offs between the two. Once you get going with a Bayesian approach you are much better protected against bias; however, if you are missing some hypothesis from your prior you won't find it.

Here are some specific responses to the 4 answers:

  1. If you have a problem for which it is easy to enumerate the hypotheses, and have statistical data, then Bayes is great. If in addition you have a good prior probability distribution then you have the additional advantage that it is much easier to avoid bias. However if you find you are having to add in new hypotheses as you investigate then I would say you are using a hybrid method.

  2. Even without Bayes one is supposed to specifically look for alternate hypotheses and search for the best answer.
    On the Less Wrong welcome page the link next to the Bayesian link is a reference to the 2 4 6 experiment. I'd say this is an example of a problem poorly suited to Bayesian reasoning. It's not a statistical problem, and it's really hard to enumerate the prior for all rules for a list of 3 numbers ordered by simplicity. There's clearly a problem with confirmation bias, but I would say the thing to do is to step back and do some careful experimentation along traditional lines. Maybe Bayesian reasoning is helpful because it would encourage you to do that?

  3. I would agree that a rationalist needs to be exposed to these concepts.

  4. I wonder about this statement the most. It's hard to judge qualitative statements about probabilities. For example, I can say that I had a low prior belief in cryonics, and since reading articles here I have updated and now have a higher probability. I know I had some biases against the idea. However, I still don't agree and it's difficult to tell how much progress I've made in understanding the arguments.

comment by jsteinhardt · 2013-07-03T05:47:49.637Z · LW(p) · GW(p)

Regarding Bayes, you might like my essay on the topic, especially if you have statistical training.

Replies from: Axion
comment by Axion · 2013-07-06T03:44:20.919Z · LW(p) · GW(p)

That paper did help crystallize some of my thoughts. At this point I'm more interested in wondering if I should be modifying how I think, as opposed to how to implement AI.

comment by Jiro · 2013-07-04T09:30:25.588Z · LW(p) · GW(p)

You are not alone in thinking the use of Bayes is overblown. It can't be wrong, of course, but it can be impractical to use, and in many real-life situations we might not have specific enough knowledge to be able to use it. In fact, that's probably one of the biggest criticisms of Less Wrong.

comment by pushcx · 2013-04-02T11:38:20.336Z · LW(p) · GW(p)

Hi folks, I'm Peter. I read a lot of blogs and saw enough articles on Overcoming Bias a few years ago that I was aware of Yudkowsky and some of his writing. I think I wandered from there to his personal site because I liked the writing and from there to Less Wrong, but it's long enough ago I don't really remember. I've read Yudkowsky's Sequences and found lots of good ideas or interesting new ways to explain things (though I bounced off QM as it assumed a level of knowledge in physics I don't have). They're annoyingly disorganized - I realize they were originally written as an interwoven hypertext, but for long material I prefer reading linear silos, then I can feel confident I've read everything without getting annoyed at seeing some things over and over. Being confused by their organization when nobody else seems to be also contributes to the feeling in my last paragraph below.

I signed up because I had a silly solution to a puzzle, but I've otherwise hesitated to get involved. I feel I've skipped across the surface of LessWrong; I subscribe to a feed that only has a couple posts per week and haven't seen anything better. I'm aware there are pages with voting, but I'm wary of the time sink of getting pulled into a community or being a filter rather than keeping up with curated content.

I'm also wary of a community so tightly focused around one guy. I have only good things to say about Yudkowsky or his writing, but a site where anyone is far and away the most active and influential writer sets off alarm bells. Despite the warning in the death spiral sequence, this community heavily revolves around him. Maybe every other time hundreds of people rally around one revelatory guy it's bad news, and maybe it's fine here because there are lots of arguments against things like revelation, but things like the sequence reruns are really off-putting. It fits a well-trod antipattern; even if I can't see anything wrong in the middle of the story, I know it ends badly. (Yes, I know, I'm not.)

Replies from: Nornagest, magfrump, MugaSofer, Michelle_Z
comment by Nornagest · 2013-04-02T21:43:36.895Z · LW(p) · GW(p)

I'm also wary of a community so tightly focused around one guy. I have only good things to say about Yudkowsky or his writing, but a site where anyone is far and away the most active and influential writer sets off alarm bells. Despite the warning in the death spiral sequence, this community heavily revolves around him.

Yeah, it's a problem. I'd even go so far as to say that it's a cognitive hazard, not just a PR or recruitment difficulty: if you've got only one person at the clear top of a status hierarchy covering some domain, then halo effects can potentially lead to much worse consequences for that domain than if you have a number of people of relatively equal status who occasionally disagree. Of course there's also less potential for infighting, but that doesn't seem to outweigh the potential risks.

There was a long gap in substantive posts from EY before the epistemology sequence, and I'd hoped that a competitor might emerge from that vacuum. Instead the community seems to have branched; various people's personal blogs have grown in relative significance, but LW has stayed Eliezer's turf in practice. I haven't fully worked out the implications, but they don't seem entirely good, especially since most of the community's modes of social organization are outgrowths of LW.

Replies from: ModusPonies
comment by ModusPonies · 2013-04-04T13:06:42.949Z · LW(p) · GW(p)

if you've got only one person at the clear top of a status hierarchy covering some domain

For what it's worth, if Yudkowsky and gwern gave me conflicting advice on some arbitrary topic, then all else equal I'd go with gwern's opinion. The two of them focus on different things, though, so I don't know if this matters in practice.

comment by magfrump · 2013-04-03T23:02:19.376Z · LW(p) · GW(p)

I think a part of the problem with other people filling the "vacuum" left by Eliezer is that when he was writing the sequences it was a large amount of informal material. Since then we've established a lot of very formal norms for main-level posts; the "blog" is now about discussions with a lot of shared background rather than about trying to use a bunch of words to get some ideas out.

That is, most of the point of the sequences is laying out ground rules. There's no vacuum left over for anyone to fill, and LW isn't really a "blog" any more, so much as a community or discussion board.

And for me, personally, at least, a lot of the attraction of LW and the sequences is not that Eliezer did a bunch of original creative work, but that he verbalized and worked out a bit more detail on a variety of ideas that were already familiar, and then created a community where people have to accept that and are therefore trustworthy. What this "feels like on the inside" is that the community is here because they share MY ideas about epistemology or whatever, rather than because they share HIS ideas, even if he was the one to write them down.

Of course YMMV and none of this is a controlled experiment; I could be making up bad post hoc explanations.

Replies from: itaibn0
comment by itaibn0 · 2013-04-04T21:04:03.355Z · LW(p) · GW(p)

Just to be clear, what you say does not contradict the argument you are responding to. You gave a good explanation for why EY has a big influence on the community. It still isn't clear that this is a good thing.

Replies from: magfrump
comment by magfrump · 2013-04-05T05:26:00.401Z · LW(p) · GW(p)

Yes, I'm not arguing that it is a good thing. I'm simply putting forward an explanation for why no one else has stepped in to "fill the vacuum" as some have hoped in other comments; I don't believe there is a vacuum to fill.

Also I meant to endorse the idea that Eliezer is like Pythagoras: someone who wrote down and canonized a set of knowledge already mostly present, which is at least LESS DANGEROUS than a group following a set of personal dogma.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-05T06:29:45.896Z · LW(p) · GW(p)

Actually, I think that the sequences have a fair number of original ideas. They were enumerated about a year or so ago by Eliezer and Luke in separate posts.

Replies from: magfrump
comment by magfrump · 2013-04-05T06:49:14.681Z · LW(p) · GW(p)

I do remember that, and I agree I oversimplified. I mostly mean that many of his ideas that aren't controversial here aren't controversial elsewhere either; they just aren't seen as his ideas elsewhere. All of this makes it seem like Eliezer is more of a figurehead than I feel he actually is.

comment by MugaSofer · 2013-04-09T18:46:34.189Z · LW(p) · GW(p)

I've read Yudkowsky's Sequences and found lots of good ideas or interesting new ways to explain things (though I bounced off QM as it assumed a level of knowledge in physics I don't have)

This seems to be a common problem. It certainly happened to me.

comment by Michelle_Z · 2013-04-02T21:15:22.411Z · LW(p) · GW(p)

Apply skepticism evenly? I mean, you don't have to do/participate in something just because a bunch of other people are doing it. TBH, I'd like to see a type of "sequence review" of stuff from other major writers on this site. It's useful in that I'll occasionally read one if I don't remember having read it before, so I can't knock it.

comment by Kendra · 2013-07-19T19:54:39.418Z · LW(p) · GW(p)

Hi, I'm Denise from Germany. I just turned 19 and study maths at university. Right now, I spend most of my time on that and on caring for my 3-year-old daughter. I have known about LessWrong for almost two years now, but never got around to writing. However, I'm more or less involved with parts of the LessWrong and Effective Altruism communities; most of them originally found me via OkCupid (I stated I was a LessWrongian), and it expanded from there.

I grew up in a small village in the middle of nowhere in Germany, very isolated, without any people to talk to. I skipped a grade and did extremely well at school, but was mostly very unhappy during my childhood/teen years. Though I had free internet access, I had almost no access to education until I was 15 years old (and pregnant, and no, that wasn't unplanned), because I had no idea what to look for. I dropped out of school then and, when I had time (I was mostly busy with my child), prepared for the exams I needed to pass to be allowed to attend university. In Germany that's extremely unusual, and most people don't even know you can do it without going to school.

When I was 15, I discovered environmentalism (during pregnancy, via people who share my parenting values) and feminism. Since then, I have seriously cared about making the world „a better place“. I was already very nerdy in my special fields of interest then, though still very uneducated and lacking basic concepts. Thankfully, I found LessWrong when I was just 17 and became very taken with it. I started to question my beliefs, became a utilitarian, adopted a somewhat transhumanist mindset and the usual, but the breakthrough only came last year, after I started spending time with people from the community. Since then I have been totally focused. Most people who have met me this year or at the end of 2012 are very surprised by this; I have noticed that a lot of people completely overestimate my past selves (which is somewhat relieving, though I still feel like everyone from the LW/EA community who is usually quite taken with me overestimates me). Until the beginning of this year, I even considered environmentalism the most important problem (which seems completely ridiculous to me now). Well, I had been a serious environmentalist for three years; then I talked for half an hour with another LessWrongian, who explained to me why it isn't the most important problem, and I dropped it the same day. After thinking about it myself and talking to several LW/EAs (e.g. 80,000 Hours) I decided it's best for me to study maths (my minor will be in computer science). People always tell me I worry too much about my future, that I am already in a very good position, being so driven, etc., but I often think I have lost so many years now, and there is so much to read and so much I don't know and so little time. Especially considering that I lose about 70% of my waking time to caring for my daughter (which people never take into account at all. They just have no idea. Before last October, it was even 90%). I often felt extremely incompetent and lazy because other people get so much done in comparison to me. I do feel a bit better after actually thinking about how big my disadvantages are, but it's still quite bad. Several people have asked me to consider internships, etc., but I mostly still feel too incompetent and, the even bigger problem, too socially awkward.

Rationality has been very helpful in the past with personal problems (e.g., I have a very static mindset, which hasn't really been a problem so far because I was always able to do things despite it, without having to work for them, but now, doing my maths degree, that doesn't work as well as it did in the past) and has heavily reduced them, though enough still remain. My productivity has increased a lot. There are a lot of things waiting for me to do; I can't afford to lose time to personal inconveniences. (Though most of my time and energy goes into my child anyway, and there isn't really much I can do about that.)

I'm very happy that I found LessWrong and like-minded people. If you have reading recommendations, please tell me. I am familiar with all the basic material (the Sequences, of course, the EA stuff, the self-improvement stuff, Bostrom's work, Kahneman...). If you have any other advice, I would also love to hear it.

Replies from: Kawoomba, army1987, Gunnar_Zarncke, vollmer
comment by Kawoomba · 2013-07-19T20:10:20.969Z · LW(p) · GW(p)

As another LW'er with kids in Germany, welcome!

comment by A1987dM (army1987) · 2013-07-20T23:21:22.041Z · LW(p) · GW(p)

„a better place“

That kind of quotation mark isn't customary in English; “these ones” are what's usually used in typeset materials, but most people just use "the ones on the keyboard" on-line.

comment by Gunnar_Zarncke · 2014-01-22T22:49:58.688Z · LW(p) · GW(p)

Hi Denise/Kendra,

sich um ein kleines Kind alleine zu kümmern ist schon viel. Wenn Du dann auch noch studierst und EA und LW Meetups machst ist das schon ziemlich viel. Ich bewundere Deine Leistung. Ich habe einiges Material zu rationaler Erziehung auf meiner Homepage verlinkt, das Du Dir evtl. mal ansehen möchtest: http://lesswrong.com/user/Gunnar_Zarncke

Ein Tipp (obwohl Du vermutlich weißt und nur nicht umsetzen konntest): Die Synergieeffekte bei der Kindererziehung sind beträchtlich. Es ist erheblich einfacher für zwei Eltern für zwei Kinder zu sorgen als 2x alleinerziehend mit Kind. Entsprechend in größeren Gruppen (das sieht man natürlich meist nur wenn sich mehrere Familien treffen). Hast Du keine Möglichkeit das zu nutzen?

Du darfst mir gerne jederzeit Fragen stellen.

Gruß aus Hamburg

Gunnar

Replies from: Creutzer
comment by Creutzer · 2014-01-22T23:04:34.449Z · LW(p) · GW(p)

I don't think it's appropriate to write a comment in a language other than English. You could have sent a PM.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-22T23:05:32.853Z · LW(p) · GW(p)

Ah yes. You are probably right.

How come you noticed this so quickly, it being such an old thread?

Replies from: blacktrance
comment by blacktrance · 2014-01-22T23:15:11.790Z · LW(p) · GW(p)

There's a Recent Comments column to the right.

comment by vollmer · 2013-08-02T04:32:17.039Z · LW(p) · GW(p)

Welcome Denise! :)

comment by Shmidley · 2013-04-02T21:11:32.701Z · LW(p) · GW(p)

.

Replies from: Manfred
comment by Manfred · 2013-04-06T02:30:48.032Z · LW(p) · GW(p)

Welcome!

The two most useful things I gleaned from this site are the ability to point out when people

The really valuable times are when you get to say those things to yourself - you're the only person you can force to listen :D

comment by rationalnoob · 2013-04-10T07:47:24.316Z · LW(p) · GW(p)

Hi,

I have been lurking around here, mostly for (rational) self-help. Some info about me:

Married. Work at the India office of a top-tier tech company. 26 y/o.

between +2 and +2.5 SD IQ. Crystallized >> fluid. Extremely introspective and self-critical. ADHD / mildly depressed most of my life. Have hated 'work' most of my life.

Zero visual working memory (One - Two items with training). Therefore struggling with programming computers and not enjoying it. Can write short programs and solve standard interview type questions. Can't build big functional pieces of software

Tried to self-medicate two years back. Overdosed on modafinil + piracetam. Ended up in the ER with a 130+ heart rate for 8 hours; it induced a panic disorder. As of today, stimulant use is therefore out of the question.

Familiar with mindfulness meditation and spiritual philosophy.

It's quite clear that I can't build large pieces of software. Unsure as to what productive use I can be with these attributes.

Thanks

Replies from: ModusPonies, private_messaging, hg00
comment by ModusPonies · 2013-04-12T14:16:34.370Z · LW(p) · GW(p)

Unsure as to what productive use I can be with these attributes.

That depends on what your goal is. Making enough money to fund a relaxed and happy life? Making tremendous amounts of money? Job satisfaction? Something else entirely?

Replies from: rationalnoob
comment by rationalnoob · 2013-04-13T08:28:16.780Z · LW(p) · GW(p)

In terms of goals, I hadn't formalized things, but my mental calculations generally revolve around:

A) making a lot of money, and B) not burning out (due to competitive stress, e.g.) while doing so.

These seem highly improbable in my current environment, as I don't have the natural characteristics for this to happen. So either:

a) I adapt (major, almost miraculous changes needed in conscientiousness / working memory, etc.) to succeed at top-tier software product development or some other similar high-pay career track, or b) I settle for low-quality / low-challenge work and low pay (IT services? teaching? government bureaucracy?).

Jobs in the b) category pay < 20K USD in India, so it won't be a very relaxed existence financially.

Therefore I had been trying to get a) to work somehow. Minor successes overall. My working memory and conscientiousness are at least bottom quartile, if not bottom decile, in my peer group.

Stuck big time in life, therefore.

Replies from: private_messaging
comment by private_messaging · 2013-04-13T09:17:09.350Z · LW(p) · GW(p)

You may be able to work as a programmer, given some management so that you only work on small pieces at a time.

It seems to me that it is actually quite uncommon to be able to comprehend projects of significant size, in programming or elsewhere.

Also, maybe you're not that different from other high-IQ individuals. I've always suspected that top scientists, programmers, etc. are at (just an illustrative example) 1 in 1000 on [the metric most directly measured by IQ and similar tests] and 1 in 1000 on a combination of things like integration of knowledge/memory, working space, etc., whereas high-IQ individuals in general aren't very far from average on those other factors and can't usefully access a massive body of knowledge, for example.

Replies from: rationalnoob
comment by rationalnoob · 2013-04-13T11:25:37.198Z · LW(p) · GW(p)

The only trouble is that one is expected to mature and tackle larger and larger problems, or alternatively to manage a large (and always increasing) business scope with years under the belt.

Both of those capacities are significantly constrained by conscientiousness / working memory / attention deficits.

comment by private_messaging · 2013-04-11T08:34:33.785Z · LW(p) · GW(p)

between +2 and +2.5 SD IQ

Zero visual working memory (One - Two items with training). Therefore struggling with programming computers and not enjoying it. Can write short programs and solve standard interview type questions. Can't build big functional pieces of software

That's fairly interesting. It seems to be often under-appreciated that IQ (and similar tests) fail to evaluate important aspects of cognition.

Replies from: rationalnoob
comment by rationalnoob · 2013-04-11T09:02:29.318Z · LW(p) · GW(p)

Yes. Cognitive ability is quite varied, and I am highly stunted in the visuo-spatial area.

I could never read fiction (no character visuals in my head). The lack of this faculty is also a major bottleneck in comprehension of technical material.

I like syntax / discrete math / logic, etc., things which depend more on verbal facility.

comment by hg00 · 2013-04-10T10:56:33.680Z · LW(p) · GW(p)

Welcome!

Overdosed on modafinil + piracetam.

What was your dosage?

Replies from: rationalnoob
comment by rationalnoob · 2013-04-10T11:33:38.301Z · LW(p) · GW(p)

Immediate dose: 200 mg modafinil + 800 mg piracetam around 10 am.

OD symptoms within 2-3 hours.

There was probably significant drug buildup of modafinil over the prior week, I guess. I was taking mostly 200 mg (once 400 mg) a day the preceding week, so I am guessing 300-500 mg had built up.

Effectively, then:

500-700 mg modafinil + 800 mg piracetam.

This resulted in 170/90 BP + 130-150 HR + severe anxiety for around 8-9 hours. The ER docs didn't know what to do. I refused to get admitted to the ICU.

It subsided by 10 pm that night, and instigated a panic disorder and a drug phobia.

Cured by 25 mg sertraline for 6 months. Panic-free (more or less) since.

It has left me vigilant about drug interactions and adverse drug effects.

Replies from: hg00
comment by hg00 · 2013-04-11T07:13:25.875Z · LW(p) · GW(p)

Thanks!

Replies from: rationalnoob
comment by rationalnoob · 2013-04-16T07:04:27.031Z · LW(p) · GW(p)

My experience could be useful to LWers experimenting with nootropics, as a warning of the dangers of:

a) drug interactions:

There is a need to be very careful while titrating doses up, especially when drugs are taken in combination. Your body may manifest novel problems not seen by anyone else.

b) drug buildup:

You need to be very careful when estimating effective doses, in order to take drug buildup into account. Even though superficially I was ingesting 200 mg of modafinil, I was effectively on 500 mg+ of the drug.
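
To make the buildup arithmetic concrete, a rough sanity check can be done with the standard repeated-dosing accumulation ratio; this is only an illustrative sketch, and the half-life figure below is an assumption for illustration, not a measured value:

```python
import math

# Accumulation ratio for repeated dosing: R = 1 / (1 - exp(-k * tau)),
# where k = ln(2) / t_half and tau is the dosing interval.
def accumulation_ratio(t_half_hours, dosing_interval_hours):
    k = math.log(2) / t_half_hours
    return 1 / (1 - math.exp(-k * dosing_interval_hours))

# Assumed numbers for illustration only: ~15 h half-life, one 200 mg dose per day.
ratio = accumulation_ratio(t_half_hours=15, dosing_interval_hours=24)
print(f"steady-state exposure ~ {200 * ratio:.0f} mg-equivalent per dose")
```

The exact figure depends heavily on the assumed half-life and on individual metabolism, but the general point stands: under daily dosing, effective exposure is noticeably higher than the single dose taken that morning.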

comment by MenosErrado · 2013-04-07T23:39:22.923Z · LW(p) · GW(p)

Hello, Less Wrong; I'm so glad I found you.

A few years ago a particularly fruitful wikiwalk got me to a list of cognitive biases (also fallacies). I read it voraciously, then followed the sources, found out about Kahneman and Tversky and all the research that followed. The world has never quite been the same.

Last week Twitter got me to this sad knee-jerk post on Slate, which in a few message-board-quality paragraphs completely missed the point of this thought experiment by Steve Landsburg, dealing with the interesting question of crimes in which the only harm to the victims is the pain from knowing that they happened. The discussion there, however, was refreshingly above average, and I'll be forever grateful to LessWronger "Henry", who posted a link to the worst argument in the world - which turned out to be a practical approach to a problem I had been thinking about and trying to condense into something useful in a discussion (I was going toward something like "'X-is-horrible-and-is-called-racism' turning into 'We-call-Y-racism-therefore-it's-horrible'").

Since then I've been looking around and it feels... feels like I've finally found my species after a lifetime among aliens. I have heartily agreed with everything I've seen Eliezer write (so far), which I suspect is almost as unusual to him as it is to me. It's simply relieving to see minds working properly. Looking around I've found that I'm not too far behind, but I still find something to think about and learn from in nearly every post, which looks like the perfect spot to be. "Insight porn", somebody said here - that seems about right.

As for my "theme":

I'm Brazilian (btw, are there others here?), currently studying Law. Specifically, I've been trying to apply the heuristics and biases approach to research about day to day decision making by judges and juries. I mean to do empirical research after graduation if possible, but right now I'm attempting a review of the available literature. Research in Portuguese proved futile, but that was expected (sadly, it seems I wouldn't have a problem if searching for psychoanalysis...).

So I humbly ask: if you know of research about cognitive biases in a legal setting, would you kindly direct me to it?

Replies from: MugaSofer, MenosErrado
comment by MugaSofer · 2013-04-08T00:02:49.450Z · LW(p) · GW(p)

Since then I've been looking around and it feels... feels like I've finally found my species after a lifetime among aliens. I have heartily agreed with everything I've seen Eliezer write (so far), which I suspect is almost as unusual to him as it is to me. It's simply relieving to see minds working properly.

Know that feeling. I wonder how common a reaction it is, actually ...

Replies from: RogerS
comment by RogerS · 2013-04-10T11:56:29.723Z · LW(p) · GW(p)

Maybe it's just that EY is very persuasive! I'm reminded of what was said about some other polymath (Arthur Koestler, I think): the critics agreed that he was right on almost everything - except, of course, for the topic that the critic concerned was expert in, where he was completely wrong!

So my problem is whether to just read the sequences or to skim through all the responses as well. The latter takes an awful lot longer, but from what I've seen so far there's often a response from some expert in the field concerned that, at the least, puts the post into a whole different perspective.

Replies from: MenosErrado, shminux, MugaSofer
comment by MenosErrado · 2013-04-11T02:20:44.692Z · LW(p) · GW(p)

After looking around a little more, I should clarify what I meant perhaps.

The part about agreeing with EY (so far) was about psychology, ethics, morality, epistemology, even the little politics I saw. The "so far" is doing heavy work there; I've only been around for a week, and have been focusing first on the topics most immediately relevant to my work and studies. More importantly, I haven't touched the physics yet (which, from what I've seen on this page, is something I should have mentioned), and I wouldn't be qualified to "take sides" even if I had.

The paragraph was not prompted (only) by EY, but by my marvel at the quality of discussions here. No caveats there, this community has really impressed me. The way it works, not the conclusions, although they're certainly correlated.

I'm used to having to defend rationality in a very relevant portion of the discussions I have, before it's possible to move on to anything productive (of course, those tend not to move on at all). This is a breath of fresh air.

Replies from: diegocaleiro
comment by diegocaleiro · 2013-07-10T01:27:53.901Z · LW(p) · GW(p)

Hi, I've been trying to bring together Brazilians who are capable of thinking for a while now. I run www.ierfh.org and have spent a month visiting the MIRI part of the crowd behind this site.
If you find the content of the site's FAQ interesting, send a message to IERFH; there's a community on Facebook too, etc...

comment by Shmi (shminux) · 2013-04-10T14:50:27.624Z · LW(p) · GW(p)

I recommend reading each post, then writing a draft response with your thoughts on the matter, then checking whether anyone has already commented on it. If not, hit "Comment", for others to read. And for yourself, some time later.

comment by MugaSofer · 2013-04-10T14:43:11.738Z · LW(p) · GW(p)

I was thinking more of "finally, someone who isn't being stupid about this" rather than "well, I'm persuaded"; although, to be fair, they probably go together a good deal.

comment by MenosErrado · 2013-04-15T05:01:37.155Z · LW(p) · GW(p)

Decided to check out HPMOR yesterday. Now I know what I'll be doing with my free time in the next week.

Also pointed about 15 people to it... I hope it'll get through to at least a couple of them (it's kind of fun trying to figure out which ones). That does seem more likely to work than any strategy I've tried before.

comment by WedgeOfCheese (DiamondSoul) · 2013-04-21T01:20:59.656Z · LW(p) · GW(p)

I'm a college student studying music composition and computer science. You can hear some of my compositions on my SoundCloud page (it's only a small subset of my music, but I made sure to put a few that I consider my best at the top of the page). In the computer science realm, I'm into game development, so I'm participating in this thing called One Game A Month whose name should be fairly self-explanatory (my February submission is the one that's most worth checking out - the other 2 are kind of lame...).

For pretty much as long as I can remember, I've enjoyed pondering difficult/philosophical/confusing questions and not running away from them, which, along with having parents well-versed in math and science, led me to gradually hone my rationality skills over a long period of time without really having a particular moment of "Aha, now I'm a rationalist!". I suppose the closest thing to such a moment would be about a year ago when I discovered HPMoR (and, shortly thereafter, this site). I've found LW to be pretty much the only place where I am consistently less confused after reading articles about difficult/philosophical/confusing questions than I am before.

Replies from: Vaniver, Osiris
comment by Vaniver · 2013-04-21T21:00:28.433Z · LW(p) · GW(p)

Welcome!

I'm a college student studying music composition and computer science.

Have you done any algorithmic composition?

Replies from: DiamondSoul
comment by WedgeOfCheese (DiamondSoul) · 2013-04-21T22:59:33.174Z · LW(p) · GW(p)

I did this and I might try doing a few more pieces like it. You have to click somewhere on the screen to start/stop it.

Replies from: Vaniver
comment by Vaniver · 2013-04-22T00:08:27.486Z · LW(p) · GW(p)

Fascinating, thanks!

A project that's been kicking around in the back of my head for a while is emotional engineering through algorithmic music; it would be great to have a way to generate somewhat novel happy high-energy music during coding that won't sap any attention (I'm sort of reluctant to talk to musicians about it, though, because it feels like telling a chef you'd like a way to replace them with a machine that dispenses a constant stream of sugar :P).

Replies from: DaFranker, DiamondSoul
comment by DaFranker · 2013-04-23T18:41:43.777Z · LW(p) · GW(p)

it would be great to have a way to generate somewhat novel happy high-energy music during coding that won't sap any attention (I'm sort of reluctant to talk to musicians about it, though, because it feels like telling a chef you'd like a way to replace them with a machine that dispenses a constant stream of sugar :P).

I would also love this. I'm in constant deficit of high-energy music for coding or other similar activities, and often it can take more work finding good music for it than all the coding work I want to do while listening to it (or, conversely, it can take much longer to find good music than the music lasts).

comment by WedgeOfCheese (DiamondSoul) · 2013-04-22T00:20:43.966Z · LW(p) · GW(p)

One thing I think would be cool would be some sort of audio-generating device/software/thing that allows arbitrary levels of specificity. So, on one extreme, you could completely specify a fully deterministic stream of sound, and, on the other extreme, you could specify nothing and just say "make some sound". Or you could go somewhere in between and specify something along the lines of "play music for X minutes, in a manner evoking emotion Y, using melody Z as the main theme of the piece".
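
One way to picture that "arbitrary levels of specificity" is an interface where every constraint is optional; here is a purely hypothetical sketch (all names and fields are made up for illustration, not an existing tool):

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical spec object: a caller can pin down as much or as little as they like,
# from "make some sound" (all fields None) to a fully specified, deterministic piece.
@dataclass
class MusicSpec:
    duration_minutes: Optional[float] = None       # "play music for X minutes"
    target_emotion: Optional[str] = None           # "in a manner evoking emotion Y"
    theme: Optional[List[int]] = None              # "using melody Z", e.g. as MIDI pitches
    fully_specified_score: Optional[bytes] = None  # the fully deterministic extreme

def generate(spec: MusicSpec) -> bytes:
    """Fill in whatever the spec leaves open; the generator's job is everything unspecified."""
    raise NotImplementedError  # sketch of the interface only
```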

Replies from: DaFranker
comment by DaFranker · 2013-04-23T18:56:16.505Z · LW(p) · GW(p)

Now that you mention this, I do remember reading some years ago about a machine-learning composition project that had the algorithm generate random streams and learn what music people liked by crowd-sourcing feedback.

I think what you've described is a great idea, and I would pay for it.

Ideally, it would let me have different-styled streams dependent on what I want to do with the music / what activity I'm doing while listening. Triple bonus points if it can consume an existing piece of music to learn more about some particular style of stream that I want.

Replies from: gwern
comment by gwern · 2013-04-23T20:59:44.468Z · LW(p) · GW(p)

Now that you mention this, I do remember reading some years ago about a machine-learning composition project that had the algorithm generate random streams and learn what music people liked by crowd-sourcing feedback.

There have been a lot o' such projects. I like some of the tracks produced by DarwinTunes.

comment by Osiris · 2013-04-21T07:53:56.372Z · LW(p) · GW(p)

Welcome, fellow new person! You've got some wonderful music. Any particular things that interest you in the "confusing question" genre?

Replies from: DiamondSoul
comment by WedgeOfCheese (DiamondSoul) · 2013-04-22T00:11:57.386Z · LW(p) · GW(p)

Thanks! As for "confusing questions", some things I've had long-term interests in are: ethics, consciousness, and trying to wrap my mind around some of the less intuitive concepts in math/physics. Apart from that, it varies quite a bit. Recently, I've become rather interested in personality modeling. The Big-5 model has great empirically tested descriptive power, but is rather lacking in explanatory power (i.e. it can't, afaik, answer questions like "what's going on in person X's mind that causes them to behave in manner Y?" or "how could person X be made more ethical/rational/happy/whatever without fundamentally changing their personality?"). At the same time, the Myers-Briggs model (and, more importantly, the underlying Jungian cognitive function theory) has the potential to more effectively answer such questions, but also has rather limited/sketchy empirical support. So I've been thinking mainly of how M-B might be tweaked so that the theory matches reality better.

comment by volya · 2013-10-07T13:17:08.142Z · LW(p) · GW(p)

Hi, I am Olga, female, 40, programmer, mother of two. Got here from HPMoR. Can not as yet define myself as a rationalist, but I am working on it. Some rationality questions, used in real-life conversations, have helped me to tackle some personal and even family issues. It felt great. In my "grown-up" role, I am deeply concerned with bringing up my kids with their thought processes as undamaged as I possibly can manage, and maybe even with balancing out some system-taught stupidity. I am at the start of my reading list on the matter, including the LW sequences.

Replies from: army1987
comment by A1987dM (army1987) · 2013-10-07T19:42:25.838Z · LW(p) · GW(p)

Welcome!

Can not as yet define myself as a rationalist, but I am working on it.

Many people here call themselves aspiring rationalists.

comment by GTLisa · 2013-07-04T06:33:56.869Z · LW(p) · GW(p)

Hello, my name is Lisa. I found this site through HPMOR.

I'm a Georgia Tech student double majoring in Industrial Engineering and Psychology. I know I want to further my education after graduation, probably through a PhD. However, I'm not entirely sure what field I would want to focus on.

I've been lurking for a while and am slowly making my way through the sequences, though I'm currently studying abroad so I'm not reading particularly quickly. I'm particularly interested in behavioral economics, statistics, evolutionary psychology, and in education policy, especially in higher education.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-07-11T06:41:38.704Z · LW(p) · GW(p)

education policy, especially in higher education.

Fun fact: my high level of interest in education policy quickly evaporated as soon as I was no longer going to school.

comment by Rafe · 2013-05-21T13:05:06.770Z · LW(p) · GW(p)

Hello everyone!

I've read occasional OB and LW articles and other Yudkowsky writings for many years, but never got into it in a big way until now.

My goal at the moment is to read the Quantum Physics sequence, since quantum physics has always seemed mysterious to me and I want to find out if its treatment here will dispel some of my confusion. I've spent the last few days absorbing the preliminaries and digressing into many, many prior articles. Now the tabs are finally dwindling and I am almost up to the start of the sequence!

Anyway, I have a question I didn't see in the FAQ. Given that I went on a long, long, long wiki walk and still haven't read very much of the core material, how big is Less Wrong? Has anyone done word counts on the sequences, or anything like that?

Replies from: sceaduwe
comment by sceaduwe · 2013-05-26T04:13:29.497Z · LW(p) · GW(p)

The sequences come close to a million words.

Replies from: Rafe
comment by Rafe · 2013-05-30T12:20:03.860Z · LW(p) · GW(p)

Thanks! That's not quite up to date, though, is it?

comment by Osiris · 2013-04-19T03:53:18.671Z · LW(p) · GW(p)

Hello there, everyone! I am Osiris, and I came here at the request of a friend of mine. I am familiar with Harry Potter and the Methods of Rationality, and spent some time reading through the articles here. Everythin' here is so interesting! I studied to become a Russian Orthodox Priest in the early nineties, and moved to the USA from the Russian Federation at the beginning of the W. Bush Administration. The change of scenery inspired me, and within the first year, I had become an atheist and learned everything I could about biology, physics, and modern philosophy. Today, I am a philosophy/psychology major at a local college, and work to change the world one little bit at a time.

Though I tend to be a bit of a poet, I hope I can find a place here. In particular, I am interested in thinking about morality and the uses of mythology in daily life.

I value maintaining and increasing diversity, and plan on posting a few things which relate to this as soon as possible. I am curious to see how everyone will react to my style of presentation and beliefs.

Replies from: Jayson_Virissimo, orthonormal
comment by Jayson_Virissimo · 2013-04-20T06:24:01.239Z · LW(p) · GW(p)

I value maintaining and increasing diversity, and plan on posting a few things which relate to this as soon as possible. I am curious to see how everyone will react to my style of presentation and beliefs.

Diversity of what, exactly?

Replies from: Osiris
comment by Osiris · 2013-04-20T07:11:09.447Z · LW(p) · GW(p)

Thanks for commenting!

The easy answer is everything. All things that are and can coexist. This is, of course, because I want humanity to survive and thrive as much and as well as possible.

You could say it is an attempt at being a bit more like the dreaded paperclip maximizer, which is a fierce beast indeed, and worth learning from (any reference to kung fu movies is intentional).

comment by orthonormal · 2013-04-20T04:37:57.050Z · LW(p) · GW(p)

Hi Osiris, and welcome!

If you're looking for awesome things that a poet can offer Less Wrong, there are people looking to create meaningful rationalist holidays with a sense of ritual to them.

Replies from: Osiris
comment by Osiris · 2013-04-20T07:11:47.206Z · LW(p) · GW(p)

Thank you! I will go and take a look!

comment by HumanitiesResearcher · 2013-04-17T01:14:57.966Z · LW(p) · GW(p)

Hi everyone,

I'm a humanities PhD who's been reading Eliezer for a few years, and who's been checking out LessWrong for a few months. I'm well-versed in the rhetorical dark arts, due to my current education, but I also have a BA in Economics (yet math is still my weakest suit). The point is, I like facts despite the deconstructivist tendency of humanities since the eighties. Now is a good time for hard-data approaches to the humanities. I want to join that party. My heart's desire is to workshop research methods with the LW community.

It may break protocol, but I'd like to offer a preview of my project in this introduction. I'm interested in associating the details of print production with an unnamed aesthetic object, which we'll presently call the Big Book, and which is the source of all of our evidence. The Big Book had multiple unknown sites of production, which we'll call Print Shop(s) [1-n]. I'm interested in pinning down which parts of the Big Book were made in which Print Shop. Print Shop 1 has Tools (1), and those Tools (1) leave unintended Marks in the Big Book. Likewise with Print Shop 2 and their Tools (2). Unfortunately, people in the present don't know which Print Shop had which Tools. Even worse, multiple sets of Tools can leave similar Marks.

The most obvious solution that I can see is

  • to catalog all Marks in the Big Book by sheet (a unit of print production, as opposed to the page), then
  • sort sheets by patterns of Marks, then
  • make some associations between the patterns of Marks and Print Shops, and then
  • propose Print Shops [x,y,z] to be the sites of production for the Big Book.

If nothing else, this method can at least estimate n, the number of Print Shops responsible for the Big Book.

The Bayesian twist on the obvious solution is to add some testing onto the associations, above. Specifically,

  • find some books strongly associated with Print Shops [x,y,z], in order to
  • assign probability of patterns of Marks to each Print Shop, then
  • revise initial associations between Print Shops [x,y,z] and the Big Book proportionally.

I'm far from an expert in Bayesian methods, but it seems already that there's something missing here. Is there some stage where I should take a control sample? Also, how can I find a logical basis for the initial association step, when there are many potential Print Shops? Lastly, how can I account for the decay of Tools, thus increasing Marks, over time?
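
For what it's worth, here is a minimal sketch of the "Bayesian twist" above, treating each Print Shop as a category with a mark-frequency profile learned from reference books and scoring a Big Book sheet against those profiles. Everything in it (the mark counts, the two candidate shops, the uniform prior) is an invented stand-in, not real data:

```python
import numpy as np

# Invented reference data: rows are sheets with known shops, columns are mark types.
ref_counts = np.array([[5, 0, 2], [4, 1, 1], [0, 3, 6], [1, 2, 7]], dtype=float)
ref_shops = np.array([0, 0, 1, 1])   # known Print Shop for each reference sheet
prior = np.array([0.5, 0.5])         # prior belief over the two candidate shops

def shop_profiles(counts, shops, n_shops, alpha=1.0):
    """Smoothed mark-frequency profile per shop (Laplace/Dirichlet smoothing)."""
    profiles = np.zeros((n_shops, counts.shape[1]))
    for s in range(n_shops):
        totals = counts[shops == s].sum(axis=0) + alpha
        profiles[s] = totals / totals.sum()
    return profiles

def posterior_over_shops(sheet_counts, profiles, prior):
    """Posterior over shops for one sheet, using a multinomial (naive Bayes) likelihood."""
    log_post = np.log(prior) + (sheet_counts * np.log(profiles)).sum(axis=1)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

profiles = shop_profiles(ref_counts, ref_shops, n_shops=2)
big_book_sheet = np.array([3, 1, 1])  # mark counts observed on one Big Book sheet
print(posterior_over_shops(big_book_sheet, profiles, prior))
```

The "revise initial associations proportionally" step corresponds to the posterior update here; the open questions about control samples are taken up further down the thread.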

Replies from: gwern, Vaniver, EHeller, PrawnOfFate
comment by gwern · 2013-04-17T02:10:50.268Z · LW(p) · GW(p)

I'm interested in associating the details of print production with an unnamed aesthetic object, which we'll presently call the Big Book, and which is the source of all of our evidence.

It's the Bible, isn't it.

Print Shop 1 has Tools (1), and those Tools (1) leave unintended Marks in the Big Book. Likewise with Print Shop 2 and their Tools (2). Unfortunately, people in the present don't know which Print Shop had which Tools. Even worse, multiple sets of Tools can leave similar Marks.

How can you possibly get off the ground if you have no information about any of the Print Shops, much less how many there are? GIGO.

I'm far from an expert in Bayesian methods, but it seems already that there's something missing here.

Have you considered googling for previous work? 'Bayesian inference in phylogeny' and 'Bayesian stylometry' both seem like reasonable starting points.

Replies from: Vaniver, HumanitiesResearcher
comment by Vaniver · 2013-04-17T14:26:50.736Z · LW(p) · GW(p)

How can you possibly get off the ground if you have no information about any of the Print Shops, much less how many there are? GIGO.

Not quite. You can get quite a bit of insight out of unsupervised clustering.

Replies from: gwern
comment by gwern · 2013-04-17T15:43:00.637Z · LW(p) · GW(p)

'No free lunches', right? If you're getting anything out of your unsupervised methods, that just means they're making some sort of assumptions and proceeding based on those.

Replies from: Vaniver
comment by Vaniver · 2013-04-17T16:20:38.133Z · LW(p) · GW(p)

Right, but this isn't a free lunch so much as "you can see a lot by looking."

Replies from: HumanitiesResearcher
comment by HumanitiesResearcher · 2013-04-18T05:29:38.831Z · LW(p) · GW(p)

Sorry to interrupt a perfectly lovely conversation. I just have a few things to add:

  • I may have overstated the case in my first post. We have some information about print shops. Specifically, we can assign very small books to print shops with a high degree of confidence. (The catch is that small books don't tend to survive very well. The remaining population is rare and intermittent in terms of production date.)

  • There are some hypotheses that could be treated as priors, but they're very rarely quantified (projects like this are rare in today's humanities).

comment by HumanitiesResearcher · 2013-04-17T13:47:45.404Z · LW(p) · GW(p)

Interesting feedback.

It's the Bible, isn't it.

Ha, I wish. No, it's more specific to literature.

How can you possibly get off the ground if you have no information about any of the Print Shops, much less how many there are? GIGO.

We have minimal information about Print Shops. I wouldn't say the existing data are garbage, just mostly unquantified.

Have you considered googling for previous work?

Yes, but thanks to you I know the shibboleth of "Bayesian stylometry." Makes sense, and I've already read some books in a similar vein, but there are some problems. Most fundamentally, I have trouble translating the methods to a different type of data: from textual data like word length to the aforementioned Marks. Otherwise, my understanding of most stylometric analysis was that it favors frequentist methods. Can you clear any of this up?

EDIT: I have a follow-up question regarding GIGO: How can you tell what data are garbage? Are the degrees of certainty based on significant digits of measurement, or what?

Replies from: gwern
comment by gwern · 2013-04-17T15:47:18.139Z · LW(p) · GW(p)

Most fundamentally, I have trouble translating the methods to a different type of data: from textual data like word length to the aforementioned Marks.

Have to define your features somehow.

Otherwise, my understanding of most stylometric analysis was that it favors frequentist methods.

Really? I was under the opposite impression, that stylometry was, since the '60s or so with the Bayesian investigation of Mosteller & Wallace into the Federalist papers, one of the areas of triumph for Bayesianism.

I have a follow-up question regarding GIGO: How can you tell what data are garbage? Are the degrees of certainty based on significant digits of measurement, or what?

No, not really. I think I would describe GIGO in this context as 'data which is equally consistent with all theories'.

Replies from: HumanitiesResearcher
comment by HumanitiesResearcher · 2013-04-18T00:52:41.067Z · LW(p) · GW(p)

Have to define your features somehow.

I don't understand what this means. Can you say more?

Replies from: gwern
comment by gwern · 2013-04-18T01:04:52.336Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Feature_%28machine_learning%29 A specific concrete variable you can code up, like 'total number of commas'.

Replies from: HumanitiesResearcher
comment by HumanitiesResearcher · 2013-04-18T05:12:18.493Z · LW(p) · GW(p)

I have just such a thing, referred to as "Marks." I haven't yet included that in the code, because I wanted to explore the viability of the method first. So to retreat to the earlier question, why does my proposal strike you as a GIGO situation?

Replies from: gwern
comment by gwern · 2013-04-18T16:26:36.185Z · LW(p) · GW(p)

So to retreat to the earlier question, why does my proposal strike you as a GIGO situation?

You claimed to not know what printers there were, how many there were, and what connection they had to 'Marks'. In such a situation, what on earth do you think you can infer at all? You have to start somewhere: 'we have good reason to believe there were not more than 20 printers, and we think the London printer usually messed up the last page. Now, from this we can start constructing these phylogenetic trees indicating the most likely printers for our sample of books...' There is no view from nowhere, you cannot pick yourself up by your bootstraps, all observation is theory-laden, etc.

Replies from: HumanitiesResearcher
comment by HumanitiesResearcher · 2013-04-21T16:27:25.145Z · LW(p) · GW(p)

This all sounds good to me. In fact, I believe that researchers in the humanities are especially (perhaps overly) sensitive to the reciprocal relationship between theory and observation.

I may have overstated the ignorance of the current situation. The scholarly community has already made some claims connecting the Big Book to Print Shops [x,y,z]. The problem is that those claims are either made on non-quantitative bases (eg, "This mark seems characteristic of this Print Shop's status.") or on a very naive frequentist basis (eg, "This mark comes up N times, and that's a big number, so it must be from Print Shop X"). My project would take these existing claims as priors. Is that valid?

Replies from: gwern
comment by gwern · 2013-04-21T17:14:54.023Z · LW(p) · GW(p)

I have no idea. If you want answers like that, you should probably go talk to a statistician at sufficient length to convey the domain-specific knowledge involved or learn statistics yourself.

comment by Vaniver · 2013-04-17T14:36:17.180Z · LW(p) · GW(p)

This is a problem that machine learning can tackle. Feel free to contact me by PM for technical help.

To make sure I understand your problem:

We have many copies of the Big Book. Each copy is a collection of many sheets. Each sheet was produced by a single tool, but each tool produces many sheets. Each shop contains many tools, but each tool is owned by only one shop.

Each sheet has information in the form of marks. Sheets created by the same tool at similar times have similar marks. It may be the case that the marks monotonically increase until the tool is repaired.

Right now, we have enough to take a database of marks on sheets and figure out how many tools we think there were, how likely it is each sheet came from each potential tool, and to cluster tools into likely shops. (Note that a 'tool' here is probably only one repair cycle of an actual tool, if they are able to repair it all the way to freshness.)

We can either do this unsupervised, and then compare to whatever other information we can find (if we have a subcollection of sheets with known origins, we can see how well the estimated probabilities did), or we can try to include that information for supervised learning.
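
As a very rough illustration of the unsupervised route (the feature matrix, the number of clusters, and the library choice below are all assumptions made for the sketch, not recommendations):

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented stand-in data: each row is one sheet, each column the count of one mark type.
rng = np.random.default_rng(0)
sheets = rng.poisson(lam=[2.0, 0.5, 1.0, 3.0], size=(200, 4)).astype(float)

# Cluster sheets by their mark profiles; each cluster is a candidate tool (or repair cycle).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(sheets)

# Inspect cluster sizes and average mark profiles.
for c in range(3):
    members = sheets[kmeans.labels_ == c]
    print(c, len(members), members.mean(axis=0).round(2))
```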

Replies from: HumanitiesResearcher, DaFranker
comment by HumanitiesResearcher · 2013-04-17T15:42:14.434Z · LW(p) · GW(p)

That's a hell of a summary, thanks!

I'm glad you mentioned the repair cycle of tools. There are some tools that are regularly repaired (let's just call them "Big Tools") and some that aren't ("Little Tools"). Both are expensive at first and to repair, but it seems the Print Shops chose to repair Big Tools because they were subject to breakage that significantly reduced performance.

I should add another twist since you mentioned sheets of known origins: Assume that we can only decisively assign origins to single sheets. There are two problems stemming from this assumption: first, not all relevant Marks are left on such sheets; second, very few single sheet publications survive. Collations greater than one sheet are subject to all of the problems of the Big Book.

I'm most interested in the distinction between unsupervised and supervised learning. And I will very likely PM you to learn more about machine learning. Again, thanks for your help!

EDIT: I just noticed a mistake in your summary. Each sheet is produced by a set of tools, not a single tool. Each mark is produced by a single tool.

Replies from: Vaniver
comment by Vaniver · 2013-04-17T16:20:25.129Z · LW(p) · GW(p)

I just noticed a mistake in your summary. Each sheet is produced by a set of tools, not a single tool. Each mark is produced by a single tool.

Okay. Are the classes of marks distinct by tool type (that is, if I see a mark on a sheet, do I know whether it came from tool type X or tool type Y), or do we need to try and discover what sorts of marks the various tools can leave?

Replies from: HumanitiesResearcher
comment by HumanitiesResearcher · 2013-04-18T00:54:07.897Z · LW(p) · GW(p)

Fortunately, we know which tool types leave which marks. We also have a very strong understanding of the ways in which tools break and leave marks.

Thanks again for entertaining this line of inquiry.

comment by DaFranker · 2013-04-19T13:33:07.110Z · LW(p) · GW(p)

This is a problem that machine learning can tackle. Feel free to contact me by PM for technical help.

Good point!

Also yay combining multiple fields of knowledge and expertise! applause

Seriously though, the world does need more of it, and I felt the need to explicitly reward and encourage this.

Replies from: HumanitiesResearcher
comment by HumanitiesResearcher · 2013-04-22T19:58:27.723Z · LW(p) · GW(p)

Thanks! I feel explicitly encouraged.

comment by EHeller · 2013-04-17T01:28:24.223Z · LW(p) · GW(p)

Any time you are doing statistical analysis, you always want a sample of data that you don't use to tune the model and where you know the right answer (a 'holdout' sample).

In this case, you should have several books related to the various print shops that you don't feed into your Bayesian algorithm. You can then assess the algorithm by seeing if it gets these books correct.

To account for the decay of the books, you need books that you know not only came from print shop x, y, or z, but also you'd need to know how old the tools were that made those books. Either that, or you'd have to have some understanding of how the tools decay from a theoretical model.
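
A minimal sketch of that holdout check, assuming you already have a feature matrix of mark counts per labeled reference book and the shop each came from (the data and the classifier below are placeholders, not the actual pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Placeholder stand-ins for real data: mark counts per reference book, and known shops.
rng = np.random.default_rng(1)
X = rng.poisson(lam=2.0, size=(60, 5))
y = rng.integers(0, 3, size=60)

# Hold out a quarter of the labeled books; the model never sees them while being fit.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = MultinomialNB().fit(X_train, y_train)

# Accuracy on the holdout is the honest estimate of how well attribution would work.
print("holdout accuracy:", model.score(X_test, y_test))
```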

Replies from: HumanitiesResearcher, Vaniver
comment by HumanitiesResearcher · 2013-04-17T13:57:49.918Z · LW(p) · GW(p)

Very helpful points, thanks. The scholarly community already has a pretty good working knowledge of the Tools, and thus of the theoretical model of Tool breakage ("breakage" may be more accurate than "decay," since the decay is non-incremental and stochastic). We know the order in which parts of the Tools break, and we have some hypotheses correlating breakage to gross usage. The twist is that we don't know when any Print Shops produced the Big Book, so we can only extrapolate a timeline based on Tool breakage.

Can you say more about the holdout sample? Should the holdout sample be a randomly selected sample of data, or something suspected to be associated with Print Shops [x,y,z]? Print Shops [a,b,c]?

comment by Vaniver · 2013-04-17T14:48:32.161Z · LW(p) · GW(p)

To account for the decay of the books, you need books that you know not only came from print shop x, y, or z, but also you'd need to know how old the tools were that made those books. Either that, or you'd have to have some understanding of how the tools decay from a theoretical model.

If you assume that the marks result from defects in the tool that accumulate, it should be relatively easy to build (and test) a monotonic model. Suppose we have an unordered collection of sheets, with some variable number of defects per sheet. If the defects are repeated (i.e. we can recognize defect A whenever we see it, as well as B, and so on), then we can build paths: all of the sheets without defects pointing towards all of the sheets with just defect A, then defect A and B, and so on. There should be divergence: if we never see sheets with both defect A and C, then we can conclude the 0-A-B path is one tool (with only some of the 0-defect sheets coming from that tool, obviously), the 0-C-D-E path is another tool, and the 0-F-G path is a third tool. (Noting that here 'tool' refers to one repair cycle, not the entire lifecycle.)
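
A small sketch of that path-building idea, representing each sheet by the set of recognizable defects it shows (the sheets and defect labels are invented for illustration):

```python
# Each sheet is represented by the set of recognizable defects it carries.
sheets = [set(), {"A"}, {"A", "B"}, {"C"}, {"C", "D"}, {"C", "D", "E"}, {"F"}, {"F", "G"}]

def same_path(s1, s2):
    """Two defect sets can lie on one path only if one contains the other,
    since defects only accumulate between repairs."""
    return s1 <= s2 or s2 <= s1

# Greedily group sheets into monotone chains; each chain is one candidate repair cycle.
# (Defect-free sheets are ambiguous: only some of them really belong to any given chain.)
chains = []
for sheet in sorted(sheets, key=len):
    for chain in chains:
        if all(same_path(sheet, other) for other in chain):
            chain.append(sheet)
            break
    else:
        chains.append([sheet])

for chain in chains:
    print(" -> ".join("".join(sorted(s)) or "0" for s in chain))
# prints: 0 -> A -> AB, then C -> CD -> CDE, then F -> FG
```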

Replies from: EHeller
comment by EHeller · 2013-04-17T18:26:47.721Z · LW(p) · GW(p)

If you assume that the marks result from defects in the tool that accumulate, it should be relatively easy to build (and test) a monotonic model

The first assumption seems bad to me: I would assume defects accumulate only until equipment is reset or repaired, which is why I think you'd want some actual data.

Replies from: Vaniver, HumanitiesResearcher
comment by Vaniver · 2013-04-17T19:09:14.019Z · LW(p) · GW(p)

The first assumption seems bad to me: I would assume defects accumulate only until equipment is reset or repaired, which is why I think you'd want some actual data.

That looks to me like it agrees with my assumption; I suspect my grammar is somehow unclear. (Note the last line of the grandparent.)

comment by HumanitiesResearcher · 2013-04-18T05:18:51.857Z · LW(p) · GW(p)

Yes, I see an accord between your statement and Vaniver's. As I said below, most tools have very slow repair cycles.

comment by PrawnOfFate · 2013-04-17T02:14:27.202Z · LW(p) · GW(p)

How about talking clearly about whatever you are currently hinting at?

Replies from: Kindly, MugaSofer, HumanitiesResearcher
comment by Kindly · 2013-04-17T16:07:59.582Z · LW(p) · GW(p)

I dunno, I find the complexity-hiding capitalized-nouns thing strangely attractive. Maybe there should be more capitalized nouns. Why isn't Sheets capitalized?

This is probably coming back to my fascination with graph theory, which has similar but even more exotic terminology. "A spider is a subdivision of a star, which is a kind of tree made up only of leaves and a root; a star with three arcs is called a claw."

Replies from: HumanitiesResearcher
comment by HumanitiesResearcher · 2013-04-18T05:17:47.791Z · LW(p) · GW(p)

I was openly warned by a professor (who will likely be on the dissertation committee) not to talk about this project widely.

The capitalized nouns are to highlight key terms. I believe the current description is specific enough to describe the situation accurately and without misleading people, but not so specific as to break my professor's (correct) advice.

Have I broken LW protocol? Obviously, I'm new here.

Replies from: beoShaffer
comment by beoShaffer · 2013-04-18T05:20:18.933Z · LW(p) · GW(p)

I was openly warned by a professor (who will likely be on the dissertation committee) not to talk about this project widely.

Did they say why?

Replies from: HumanitiesResearcher
comment by HumanitiesResearcher · 2013-04-21T16:23:22.196Z · LW(p) · GW(p)

Yes. He said that I should be careful about sharing my project because, otherwise, I'll be reading about it in a journal in a few months. His warning may exaggerate the likelihood of a rival researcher and mis-value the expansion of knowledge, but I'm deferring to him as a concession of my ignorance, especially regarding rules of the academy.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-04-22T16:40:06.364Z · LW(p) · GW(p)

"Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats."

Replies from: Vaniver
comment by Vaniver · 2013-04-22T19:47:47.406Z · LW(p) · GW(p)

This is heavily context-dependent. Many fields are idea-rich and implementation-poor, in which case you do have to ram ideas down people's throats, because there's a glut of other ideas you have to compete against. But in fields that are implementation-rich and idea-poor, ideas should be guarded until you've implemented them. There are no doubt academic fields where the latter case applies.

Replies from: gwern
comment by gwern · 2013-04-22T20:01:50.676Z · LW(p) · GW(p)

But in fields that are implementation-rich and idea-poor, ideas should be guarded until you've implemented them. There are no doubt academic fields where the latter case applies.

Can you name any?

Replies from: shminux, Vaniver, shminux
comment by Shmi (shminux) · 2013-04-22T21:02:48.201Z · LW(p) · GW(p)

I've been privately told of several such cases in high-energy physics. Below is an excerpt from Politzer's Nobel lecture. He discovered asymptotic freedom (the idea that quarks are essentially connected by miniature rubber bands which have no tension when the quarks are close to each other).

I slowly and carefully completed a calculation of the Yang-Mills beta function. I happen to be ambidextrous and mildly dyslexic. So I have trouble with left/right, in/out, forward/backward, etc. Hence, I derived each partial result from scratch, paying special attention to signs and conventions. It did not take long to go from dismay over the final minus sign (it was indeed useless for studying low energy phenomena) to excitement over the possibilities. I phoned Sidney Coleman. He listened patiently and said it was interesting. But, according to Coleman, I had apparently made an error because David Gross and his student had completed the same calculation, and they found it was plus. Coleman seemed to have more faith in the reliability of a team of two, which included a seasoned theorist, than in a single, young student. I said I’d check it yet once more. I called again about a week later to say I could find nothing wrong with my first calculation. Coleman said yes, he knew because the Princeton team had found a mistake, corrected it, and already submitted a paper to Physical Review Letters.

He does not explicitly say that Gross was tipped off, but it's easy to read between the lines. The rest of his lecture, titled The Dilemma Of Attribution, is also worth reading.

Replies from: gwern
comment by gwern · 2013-04-22T21:36:25.766Z · LW(p) · GW(p)

I cannot speak to your private examples, but I think you may be reading that into what Politzer said. He previously mentions the existence of 'multiples':

And the neat, linear progress, as outlined by the sequence of gleaming gems recognized by Nobel Prizes, is a useful fiction. But a fiction it is. The truth is often far more complicated. Of course, there are the oft-told priority disputes, bickering over who is responsible for some particular idea. But those questions are not only often unresolvable, they are often rather meaningless. Genuinely independent discovery is not only possible, it occurs all the time.

And shortly after your passage, he says

On learning of the Gross-Wilczek-Politzer result, [Nobelist] Ken Wilson, who might have thought of its impossibility along the same lines as I attributed to [Nobelist] Schwinger, above, knew who to call to check the result. He realized that there were actually several people around the world who had done the calculation, en passant as it were, as part of their work on radiative corrections to weak interactions in the newly-popular Weinberg-Salam model. They just never thought to focus particularly on this aspect. But they could quickly confirm for Wilson by looking in their notebooks that the claimed result was, indeed, correct....[Nobelist] Steve Weinberg and [Nobelist] Murray Gell-Mann were among those to instantly embrace non-Abelian color SU(3) gauge theory as the theory of the strong interactions. In Gell-Mann’s case, it was in no small part because he had already invented it (!) with Harald Fritzsch and christened it QCD... I’d only heard of Gell-Mann and Fritzsch’s work second hand, from [Nobelist] Shelly Glashow, and he seemed to think it shouldn’t be taken too seriously. I only later realized it was more Glashow’s mode of communication than his serious assessment of the plausibility of the proposal. In any case, I had completely lost track of Gell-Mann and Fritzsch’s QCD.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-22T21:45:06.057Z · LW(p) · GW(p)

I cannot speak to your private examples, but I think you may be reading that into what Politzer said.

Not me. This tip-off story had been talked about in the community for a long time, just never publicly until Politzer decided to carefully and tactfully state what he knew personally and avoid speculating on what might have transpired. The result itself, of course, was ripe for discovery, and indeed was discovered but glossed over by others before him. I mentioned this particular story because it's one of the most famous and most public ones. Of course, it might all be rumors and in reality there was no issue.

Replies from: gwern
comment by gwern · 2013-04-22T21:51:36.592Z · LW(p) · GW(p)

'When you hear hoofbeats, think horses, not zebras'. I see here, by Politzer's testimony, a multiple discovery of at least 3 (Gell-Mann and the more-than-one persons implied by 'several'), and you ask me to believe that a fourth multiple is not yet another multiple but rather a plagiarism/theft, based solely on your saying it was being talked about. It's not exactly a convincing case.

Replies from: None, shminux
comment by [deleted] · 2013-04-22T22:00:40.172Z · LW(p) · GW(p)

The general narrative sounds very similar to cases in my own field, but I'd rather not talk about it. I've been cautioned not to speak about my current projects with certain people, on account of this.

comment by Shmi (shminux) · 2013-04-22T21:57:33.023Z · LW(p) · GW(p)

David Gross and his student had completed the same calculation, and they found it was plus.

A week after Politzer shared his calculation:

the Princeton team had found a mistake, corrected it, and already submitted a paper to Physical Review Letters.

Why would they decide to redo the calculation (not a very hard one, but rather laborious back then, though it's a standard one in any grad QFT course now) at exactly the same time?

Anyway, no point in further speculations without new data.

comment by Vaniver · 2013-04-22T22:29:42.497Z · LW(p) · GW(p)

It may be more precise to say there are academic groups to which that description applies, and that discretion is worthwhile in their proximity. Examples of those still living will remain private for obvious reasons.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-04-23T05:14:18.065Z · LW(p) · GW(p)

Yup, some specific people steal. This definitely happens (but I will not mention names for obvious reasons).

comment by Shmi (shminux) · 2013-04-22T20:52:31.965Z · LW(p) · GW(p)

I've been privately told of several such cases in high-energy physics. Some even allege that the main reason David Gross got a share of the Nobel Prize for Asymptotic Freedom was that he was a referee, or maybe a journal editor, for Politzer's paper and managed to hasten his own group's somewhat lagging research to get it published at the same time. No idea if the story has any truth in it.

comment by MugaSofer · 2013-04-17T13:49:26.999Z · LW(p) · GW(p)

I think Gwern's right on this.

Replies from: PrawnOfFate, Nornagest
comment by PrawnOfFate · 2013-04-17T14:16:59.467Z · LW(p) · GW(p)

But Humanities has rejected that!

Replies from: HumanitiesResearcher, MugaSofer
comment by HumanitiesResearcher · 2013-04-18T05:22:35.339Z · LW(p) · GW(p)

Yep. It's not the Bible. I suspect that there are already good stats compiled on the Q-source, etc.

In a way it's not only futile but limiting to play the guessing game. There are lots of possible applications of Bayesian methods to the humanities. Maybe this discussion will help more projects than my own.

comment by MugaSofer · 2013-04-19T13:18:16.138Z · LW(p) · GW(p)

Ah, OK. They hadn't when I wrote it.

comment by Nornagest · 2013-04-22T17:39:03.786Z · LW(p) · GW(p)

That was my first thought too; there's a huge textual analysis tradition relating to the Bible and what I know of it maps pretty closely to the summary, although it's also mature enough that there wouldn't be much reason to obfuscate it like this. But it's not implausible that it applies to some other body of literature. I understand there are some similar things going on in classics, for example.

The specifics shouldn't matter too much, though some types of mark are going to be a lot more machine-distinguishable than others, and that's going to affect the kinds of analysis you can do -- differences in spelling and grammar, for example, are far machine-friendlier than differences in letterforms in a manuscript.
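To make "machine-friendly" a bit more concrete, here's a minimal sketch of the kind of feature you can pull out of spelling differences automatically; the variant pairs and sample sentences are invented for illustration and aren't from any real corpus:

```python
from collections import Counter
import re

# Hypothetical spelling-variant pairs to track across text chunks;
# invented for illustration, not drawn from any real corpus.
VARIANT_PAIRS = [("colour", "color"), ("shoppe", "shop"), ("ye", "the")]

def variant_features(text):
    """For each pair, return the fraction of occurrences using the first spelling
    (None when neither spelling appears, i.e. no evidence either way)."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    features = {}
    for a, b in VARIANT_PAIRS:
        total = counts[a] + counts[b]
        features[a + "/" + b] = counts[a] / total if total else None
    return features

# Two chunks that might come from different hands:
print(variant_features("Ye olde shoppe sold colour prints."))
print(variant_features("The old shop sold color prints."))
```

Feature vectors like these feed straight into whatever Bayesian comparison you want to run; getting the same kind of signal out of letterforms would require image processing first.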

comment by HumanitiesResearcher · 2013-04-17T13:42:44.254Z · LW(p) · GW(p)

Thanks for the feedback. I actually cleared up the technical language considerably. I don't think there's any need to get lost in the weeds of the specifics while I'm still hammering out the method.

comment by [deleted] · 2013-04-07T22:59:08.673Z · LW(p) · GW(p)

I found HPMOR nearly three years ago. Soon afterward, I finished the core sequences up through the QM sequence, read some of Eliezer's other posts, and other sequences and authors on LW. When I look back, I realize my thinking has been hugely influenced by what I have learned from this community. I cannot even begin to draw boundaries in my mind identifying what exactly came from LW; hopefully this means I have internalized the ideas and that I am actually using what I learned.

There is a story behind why I have now, after three years of lurking, finally created an account. I am currently a sophomore in high school. I have always been driven to learn by my curiosity and desire for truth and knowledge. But I am also a perfectionist and an overachiever. Somehow, in the last two years of high school, I began to latch onto academics as my “goal.” I started obsessing about ridiculous things - getting perfect scores on every assignment and test, guarding my perfect GPA, etc. It wasn't enough anymore that I understood the content without needing to study - I had to devote huge amounts of time and energy to achieve "perfection."

In March, over spring break, I returned to make some progress on my to-read list that had been piling up. I read Thinking, Fast and Slow; I finished the decision theory FAQ and Eliezer's most recent sequence on LW; and I read the FAQ on MIRI and several articles by Nick Bostrom and Eliezer on AI. When I returned to school, I found I had broken out of the destructive spiral around academics. I had no interest in chasing “perfection” in scores. Interestingly, my grades have hardly changed - the largest drop in any class was 2 percentage points. I have been far happier, curious about the world, and enthusiastic about my involvement in it. My drive to know the “why” behind things, and my interest in other topics (many of which are discussed on LW) have returned.

Now, mentally refreshed, I see opportunities everywhere; I am in the short period after making a huge mental change, during which it is easiest to start taking action. I have wanted to leave high school for quite some time now, but never took any action before. I just finished my application to Bard College at Simon’s Rock last week.

Replies from: Alicorn
comment by Alicorn · 2013-04-08T03:22:50.055Z · LW(p) · GW(p)

I just finished my application to Bard College at Simon’s Rock last week.

Ooh, good school, I went there, best of luck.

comment by MindTheLeap · 2013-04-06T13:51:16.339Z · LW(p) · GW(p)

Hi everyone,

I'm a PhD student in artificial intelligence/robotics, though my work is related to computational neuroscience, and I have strong interests in philosophy of mind, meta-ethics and the "meaning of life". Though I feel that I should treat finishing my PhD as a personal priority, I like to think about these things. As such, I've been working on an explanation for consciousness and a blueprint for artificial general intelligence, and trying to conceive of a set of weighted values that can be applied to scientifically observable/measurable/calculable quantities, both of which have some implications for an explanation of the "meaning" of life.

At the center of the value system I'm working on is a broad notion of "information". Though still at preliminary stages, I'm considering a hierarchy of weights for the value of different types of information, and trying to determine how bad this is as a utility function. At the moment, I consider the preservation and creation of all information valuable; at an everyday level I try to translate this into learning and creating new knowledge and searching for unique, meaningful experiences.

I've been aware of Less Wrong for years, though I haven't quite mustered the motivation to read all of the sequences. Nevertheless, I've lurked here on and off over that time and read lots of interesting discussions. I consider the ability to make rational decisions, and not be fooled by illogical arguments, important, though without a definite set of values and goals, any action is simply shooting in the dark.

comment by Intrism · 2013-04-02T03:53:05.292Z · LW(p) · GW(p)

Greetings, LessWrongers. I call myself Intrism; I'm a serial lurker, and I've been hiding under the cupboards for a few months already. As with many of my favorite online communities, I found this one multiple times, through Eliezer's website, TVTropes, and Methods of Rationality (twice), before it finally stuck. I am a student of computer science, and greatly enjoy the discipline. I've already read many of the sequences. While I can't say I've noticed an increase in rationality since I've started, I have made some significant progress on my akrasia, including recently starting on an interesting but unknown LW-inspired technique which I'll write up once I have a better idea of how well it's performing.

Replies from: VCavallo
comment by VCavallo · 2013-04-04T18:58:49.311Z · LW(p) · GW(p)

Thank you for introducing me to the term akrasia!

comment by MumpsimusLane · 2013-05-01T23:23:36.766Z · LW(p) · GW(p)

Saluton! I'm an ex-mormon atheist, a postgenderist, a conlanging dabbler, and a chronic three-day monk.

Looking at the above posts (and a bunch of other places on the net), I think ex-mormons seem to be more common than I thought they would be. Weird.

I'm a first-year college student studying only core/LCD classes so far because every major's terrible and choosing is scary. Also, the college system is madness. I've read lots of posts on the subject of higher education on LessWrong already, and my experience with college seems to be pretty common.

I discovered LessWrong a few months ago via a link on a self-help blog, and quickly fell in love with it. The sequences pretty much completely matched up with what I had come up with on my own, and before reading LW I had never encountered anyone other than myself who regularly tabooed words and rejected the "death gives meaning to life" argument et cetera. It was nice to find out that I'm not the only sane person in the world. Of course, the less happy side of the story is that now I'm not the sanest person in my universe anymore. I'm not sure what I think about that. (Yes, having access to people that are smarter than me will probably leave me better off than before, but it's hard to turn off the "I wanna be the very best like no one ever was" desire.) Yet again, my experience seems to be pretty common.

Huh, I've never walked into a room of people and had nothing out of the ordinary to say. Being redundant is a new experience for me. I guess my secret ambition to start a movement of rationalists is redundant now too, huh? Drat! I should have come up with a plan B! :)

Replies from: Osiris
comment by Osiris · 2013-05-02T02:49:08.107Z · LW(p) · GW(p)

What will you do now that you can't form a movement of rationalists? Take over the world? Become a superhero? Invent the best recipe for cookies? MAINTAIN AND INCREASE DIVERSITY?

For example, I am going to post a recipe for a bacon trilobite and my experiences and thoughts about paperclipping among humans. Any interesting things you be thinkin' of postin'? ^^

Replies from: MumpsimusLane
comment by MumpsimusLane · 2013-05-02T18:08:47.989Z · LW(p) · GW(p)

What will I do? I don't really know. Luminosity skills seem like an important requisite for answering that question, but while the luminosity sequence was nice, I feel like it didn't go far enough. Maybe that would be something worth postin' about.

comment by yakurbe0112 · 2013-04-13T03:47:06.668Z · LW(p) · GW(p)

Alright. Hi. I'm a senior in high school and thinking about majoring in Computer Science. Unlike most other people my age, this is probably my first post on any chat forum/ wiki/ blog. I also don't normaly type things without a spell checker and would like to get better. Any coments about my spelling or anything else would be appriciated.

My brother showed me this site a while back and also HP:MoR. Spicificly, I saw the Sequences. And they were long. Some of them were some-what interesting but mostly they were just long. In addition to that, I had just been introduced to the Methods of Rationality which, dispite being long, was realy interisting (actualy my favorite story that I have ever read), and there was some other things, so yeah . . . I still haven't read them. But anyway, that was about a year ago and at this point I have read through MoR at least three times. I feel that I am starting to think sort of rationaly and would like to improve on that.

In addition to that, I have this friend that I talk to at lunch. Normaly we talk about things that we probably don't have any ideas about that actualy reflect reality, like the origins of the universe, time travel, artificial intelligence (I did actualy read a bit about that by Eliezer. didn't understand as much as I would have liked, but still) those sorts of things. And about half the time I am almost entirly sure that whatever thought process he's using just doesn't work. So another reason I'm here is to make sure that what ever thought process I'm using is actualy the right way to be looking at things and that I am acting as an intelligent thinker rather than a condecending jerk.

So, I'm going to go get reading those sequences now.

Replies from: Dahlen, PhilGoetz, Randaly
comment by Dahlen · 2013-04-14T04:27:58.561Z · LW(p) · GW(p)

Any coments about my spelling or anything else would be appriciated.

Since you asked... "comments", "appreciated".

Welcome to LessWrong!

comment by PhilGoetz · 2013-04-13T07:18:07.925Z · LW(p) · GW(p)

Welcome!

I should probably write a post, "Why not to major in computer science." My advice is to be aware that there is almost no money in the world budgeted to computer science research, that most people can't even conceive of or believe in the concept of computer science research, and that a degree in computer science leads only to jobs as a computer programmer unless it is from a top-five school.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2013-04-13T07:50:03.222Z · LW(p) · GW(p)

jobs as a computer programmer

You say that like it's a bad thing.

Replies from: ModusPonies
comment by ModusPonies · 2013-04-15T18:40:30.767Z · LW(p) · GW(p)

Such jobs can also be acquired without a CS degree.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-04-15T19:15:56.160Z · LW(p) · GW(p)

Well, if you're good enough to teach yourself enough programming from scratch to be effective in those jobs. Not everyone is like that, IMO.

Replies from: ModusPonies
comment by ModusPonies · 2013-04-15T19:43:55.077Z · LW(p) · GW(p)

Not everyone is like that, IMO.

I'm certainly not! (Yet.) I should've been specific instead of pithy.

My own approach was to take a couple of courses through Boston University to get some foundational knowledge, and then start the job hunting dance. It was enough of a structured educational environment to get me to actually learn, but still far, far cheaper than getting another degree. (Happy ending: I am employed.)

I do wish I'd studied more CS (i.e., any CS) when I was doing the undergrad thing, but programming jobs have much less respect for credentialing than other professions.

comment by Randaly · 2013-04-15T19:31:21.198Z · LW(p) · GW(p)

I also don't normaly type things without a spell checker and would like to get better.

Some browsers have spellcheckers; I know that Pale Moon does.

comment by labachevskij · 2013-04-09T22:07:11.075Z · LW(p) · GW(p)

Hi everyone, I'm labachevskij. I'm a long time lurker on this site, attracted by (IIRC) Bayesian Decision Theory. I'm completing my PhD studies in Maths, but I have also been caught by HPMOR, which is proving a huge source of procrastination (I'm reading it again for the third time). I'm also on my way with the reading of the sequences.

Replies from: wedrifid
comment by wedrifid · 2013-04-10T08:34:33.072Z · LW(p) · GW(p)

Hi everyone, I'm labachevskij. I'm a long time lurker on this site, attracted by (IIRC) Bayesian Decision Theory. I'm completing my PhD studies in Maths, but I have also been caught by HPMOR, which is proving a huge source of procrastination (I'm reading it again for the third time). I'm also on my way with the reading of the sequences.

Welcome labachevskij!

What part of Math are you focusing on?

Replies from: labachevskij
comment by labachevskij · 2013-04-10T09:52:58.704Z · LW(p) · GW(p)

I'm working on Partial Differential Equations in Fluid-dynamics, both deterministic and stochastic. I'm dealing mostly with turbulence models, right now. But I trained as a probabilist (and there's where my heart lies).

Are you into maths too?

Replies from: wedrifid
comment by wedrifid · 2013-04-10T11:20:18.874Z · LW(p) · GW(p)

Are you into maths too?

Only for recreational purposes these days. Most recently I've been grappling with the various ways of formulating and manipulating infinitesimals. The need for them keeps cropping up when I explore various obscure decision theory problems and the implications of certain counterfactuals expressed in terms of subsets of an ultimate ensemble.

But I trained as a probabilist (and there's where my heart lies).

Sounds fun!

comment by Brendon_Wong · 2013-07-18T18:21:56.161Z · LW(p) · GW(p)

Hello! I’m a 15 year old sophomore in high school, living in the San Francisco Bay Area. I was introduced to rationality and Less Wrong while interning at Leverage Research, which was about a month ago.

I was given a free copy of Chapters 1-17 of HPMOR during my stay. I was hooked. I finished the whole series in two weeks and made up my mind to try and learn what it would be like being Harry.

I decided to learn rationality by reading and implementing The Sequences in my daily life. The only problem was, I discovered the length of Eliezer's posts from 2006-2010 was around 10 Harry Potter books. I was told it would take months to read, and some people got lost along the way due to all the dependencies.

Luckily I am very interested in self-improvement, so I decided that I should learn speed reading to avoid spending months dedicated solely to reading The Sequences. After several hours of training, I increased my reading speed (with high comprehension) roughly fivefold, from around 150 words per minute to 700 words per minute. At that speed, it will take me 33.3 hours to read The Sequences.

It seems like most people advise reading The Sequences in chronological order in ebook form. Is using this ebook a good way to read The Sequences? Also, if I could spend 5 seconds to a minute after each blog post doing anything, what should I do? I was thinking of making some quick notes for myself to remember everything I read, perhaps with a spaced repetition system, or figuring out all the dependencies to smooth the way for future readers, perhaps leading to the easier creation of a training program...

Thanks for all your help, and I look forward to contributing to Less Wrong in the future!

Replies from: James_Miller, Nisan
comment by James_Miller · 2013-07-18T18:43:11.299Z · LW(p) · GW(p)

If I could spend 5 seconds to a minute after each blog post doing anything, what should I do?

Figure out how you would explain the main idea of the post to a smart friend.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2013-07-18T18:45:45.883Z · LW(p) · GW(p)

Thanks! Just curious, how come you chose that over simply taking short 10-second notes that would let me memorize all the main ideas?

Replies from: Eliezer_Yudkowsky, James_Miller
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-18T19:33:43.968Z · LW(p) · GW(p)

IIRC notetaking is supposed to work less well than explaining something to others. I don't know about imagining how to explain something to others.

Replies from: Vaniver
comment by Vaniver · 2013-07-18T19:42:15.642Z · LW(p) · GW(p)

I don't know about imagining how to explain something to others.

I would imagine that actually explaining it out loud to a rubber duck is better than imagining explaining it to a friend, for the same reasons that it is a common debugging practice. Actually putting something into words makes weak spots in understanding obvious in a way that imagination can glide over.

Replies from: Brendon_Wong, army1987
comment by Brendon_Wong · 2013-07-18T20:04:32.920Z · LW(p) · GW(p)

IIRC notetaking is supposed to work less well than explaining something to others.

Perhaps note taking works less well for understanding, but explaining it out loud without recording it or even writing my explanation down will do very little for long-term recall. What good will it do if I forget everything I read after spending many hours reading it?

Replies from: Brendon_Wong
comment by Brendon_Wong · 2013-07-18T21:15:15.297Z · LW(p) · GW(p)

At first, I think I will try explaining ideas out loud as I read to save time, then write ultrashort notes on main ideas for long term memory.

Thanks for everyone's help!

comment by A1987dM (army1987) · 2013-07-20T23:51:30.423Z · LW(p) · GW(p)

When I imagine speaking to someone, I generally imagine specific words. YMMV.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-07-21T00:15:48.909Z · LW(p) · GW(p)

Actually speaking the words activates different areas of Broca's and Wernicke's regions (and elsewhere) than merely imagining them. Physically vocalizing the words, and hearing yourself vocalize them, allows them to be processed by more areas of your brain.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-22T20:06:49.469Z · LW(p) · GW(p)

If that made much of a difference, it would also matter whether I was talking to someone out loud vs in writing. I don't feel that is the case, though it's not like I did any Gwern-level statistics about that. (Also, some people have more vivid auditory imagery than others.)

comment by James_Miller · 2013-07-18T19:13:27.063Z · LW(p) · GW(p)

Both would work but my idea is less obvious so perhaps more helpful.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2013-07-18T19:23:36.820Z · LW(p) · GW(p)

That's an interesting idea. I suppose it might help with better understanding the concept, but it might not work for long term memorization. Should I write the explanations down?

Replies from: James_Miller
comment by James_Miller · 2013-07-18T20:37:37.505Z · LW(p) · GW(p)

That would probably help if you have the time.

comment by Nisan · 2013-07-18T18:51:06.735Z · LW(p) · GW(p)

Welcome! As you're interested in applying the Sequences to your daily life, I suggest checking out the Center for Applied Rationality. (Maybe you overlapped with them at Leverage?) As part of their curriculum development process, they offer free classes at their Berkeley office sometimes. If you sign up here you'll be put on a mailing list where they announce these sessions, usually a day or so in advance.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2013-07-18T18:55:27.753Z · LW(p) · GW(p)

Thanks, I just signed up. Do you think taking a full CFAR workshop would be a good next step after The Sequences? I'll be done in about 4 days at current reading speed (no planning fallacy adjustments), so I should probably plan ahead now.

Replies from: Nisan
comment by Nisan · 2013-07-18T21:12:31.408Z · LW(p) · GW(p)

It would definitely be a good next step. I don't know if they have a minimum age for workshops, but it doesn't hurt to apply.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2013-07-18T21:17:38.860Z · LW(p) · GW(p)

I don't believe they have age constraints; the issue is the monetary constraints :p

Thanks for your help!

Replies from: Nisan
comment by Nisan · 2013-07-18T21:21:00.329Z · LW(p) · GW(p)

They offer financial aid, too.

Replies from: Brendon_Wong
comment by Brendon_Wong · 2013-07-18T21:28:43.381Z · LW(p) · GW(p)

Since I have a total of $23, I must get my parents to pay and allow me to go for a week; that will be the tricky part.

Replies from: Jiro
comment by Jiro · 2013-07-19T18:03:09.489Z · LW(p) · GW(p)

People might not like my response, but I'd say that if you're in a situation where you believe something might be beneficial to you but it consumes a substantial portion of your resources, you should heavily lean towards not going. This applies as much to a rationality workshop attended by someone with a tiny budget as it applies to playing the stock market. Making large expenditures for an uncertain return is generally a bad bet even if the expected utility gain is positive, if failure has a very negative consequence. And human beings are notoriously bad at assessing the expected utility in such situations.

You also need to be very confident in your ability to evaluate arguments if you don't want to end up worse than before.

Obviously, this doesn't apply if you're absolutely certain that going gives you more benefit than you forego in money, time, and parental willingness to give in (which may, in fact, be in limited supply) so there is no risk of loss, but not too many people are really that certain.

Replies from: thomblake
comment by thomblake · 2013-07-19T18:24:26.143Z · LW(p) · GW(p)

But surely going to a rationality workshop is the best way to learn to evaluate whether to go to a rationality workshop. And whether it succeeds or not, you can be convinced it was a good idea!

comment by Anna_Zhang · 2013-07-18T07:28:02.020Z · LW(p) · GW(p)

Hello, Less Wrong, I'm Anna Zhang, a high school student. I found this site about half a month ago, after reading Harry Potter and the Methods of Rationality. On Mr. Yudkowsky's Wikipedia page, I found a link to his site, where I found a link to this site. I've been reading the sequence How to Actually Change Your Mind, as Mr. Yudkowsky recommended, and I've learned a lot from it (though I still have a lot to learn...)

Replies from: Brendon_Wong
comment by Brendon_Wong · 2013-07-18T18:42:59.883Z · LW(p) · GW(p)

Welcome!

If you want to meet other high schoolers, this looks like a good place to start.

comment by TheSarge · 2013-05-03T09:05:44.261Z · LW(p) · GW(p)

I discovered this site while researching the global effects of a Pak-Indo nuclear exchange. Once here I began to dig further and found it appealing. I am a simple soldier pushing myself into a Masters in biology.

Am I a rationalist? I am not sure, to be honest. If I am, I know the exact date and time when I started to become one. In Nov 2004 I was part of the battle of Fallujah; during an exchange of gunfire a child was injured. I will never know if it was one of my rounds that caused her head injury, but my lips worked to bring her life again. It was a futile attempt; she passed, and while clouded with this damn experience I myself was wounded. At that very moment I lost my faith in any loving deity. My endless pursuit of knowledge, to include academics provided by a brick and mortar school, has helped me recover from the loss of a limb. I still have the leg; however, it does not function well. I like to think, and philosophy fascinates me, and this site fascinates me. :)

Political ideology: fiscally conservative. Religion: possibilian. Rather progressive on issues like gay marriage and abortion. Abortion is actually an act I despise, but as a man I feel somehow that I haven't the organs to complain.

To sum me up, I suppose I am a crippled, tobacco-chewing, gun-toting member of the Sierra Club with a future as a freshwater biologist and memories I would like to replace with Bayes. LoL Well, I just spilled that mess out, might as well hit post. Please feel free to ask anything you like, I am not sensitive. Open honesty to those that are curious is good medicine.

Replies from: TimS
comment by TimS · 2013-05-03T12:30:53.732Z · LW(p) · GW(p)

Welcome. Hope you find what you are looking for, and maybe find some of it here.

Replies from: TheSarge
comment by TheSarge · 2013-05-03T17:59:31.190Z · LW(p) · GW(p)

Thanks, Tim! It will take years to research the materials here; that sounds fun lol

comment by jjvt · 2013-04-08T21:51:32.300Z · LW(p) · GW(p)

Hi. I'm a computer science student at the University of Oulu (Finland).

I don't remember exactly how I got here, but I guess some of the first posts I read were about counterarguments to religious denial of evolution.

I had been interested in rationality (along with science and technology) for a long time before I found Less Wrong, but back then my view of rationality was mostly that it was the opposite of emotion. I still dislike emotions - I guess that it's because they are so often "immune to reflection" (i.e. persistently "out of sync" with what I know to be the right thing to do). However, I'm aware that emotions do have some information value (worse than optimal, but better than nothing), and simply removing emotions from human neuroarchitecture without other changes might result in something functionally closer to a rock than a superhuman...

I'm an atheist and don't believe in non-physical entities like souls, but I still believe in eternal life. This unorthodox view is because 1) I'm a (sort of) "modal realist": I believe that every logically possible world actually physically exists (it's the simplest answer I've found to the question "Why does anything exist at all?") and 2) I don't believe in identity distinct from physical mind state, ie. if a copy was made of my mind, I could not see any way of telling which of them was "me"/"original", even if one of them was implemented in completely different hardware and/or was separated by large distance/time from my previous position in space-time. The result is that as long as there is a logically possible "successor" mind-state to my current mind-state, "I" will continue to experience "being".

I'm interested in politics, but I hope not to become mind-killed by it (or worse, to find that I already am). If someone is interested in knowing my political views and is not concerned about killing their mind, I put a short summary here in ROT13: V'z terravfu sne yrsg yvoreny/nanepuvfg, ntnvafg pbclevtug (nf vg pheeragyl vf) naq ntnvafg chavfuzragf. V unir nyfb (gbb znal gb erzrzore ng bapr be yvfg urer) bgure fznyyre aba-znvafgernz cbyvgvpny vqrnf.

I think I'm much better at epistemic rationality than instrumental rationality. I'm bad at getting things done. I'm a pessimist and usually think of the bad side of things first, although I'm able to find the good side too if I deliberately search for it. I sometimes make a joke about it: "I'm a pessimist, therefore I'm - unfortunately - more likely than average to be correct."

I have asperger syndrome and I'm suffering from quite bad OCD. I hope to be able to improve my rationality so that one day I'll be able to write an article about "how rationality cured my OCD"...

I don't want to lie to anyone, but I don't think I'm morally required to say out loud everything I know. However, because of the many hidden assumptions in human language, it is sometimes hard to find words that convey partial information but not false information. Also, in many social situations people are expected to lie, and figuring out what to say without lying or causing unnecessary anger is non-trivial. For these reasons I can't claim to be a perfect non-liar, although I try to be. Am I hypocritical in this? I don't know.

I have problems writing text or, to be more specific, figuring out what to write. I think of many different ways of converting my thoughts into text, but they all seem wrong in some way or another, so it takes a long time for me to write nothing, and I likely give up. This applies to this post also - I started writing it for the previous welcome thread, and then gave up when that thread started getting old and inactive. So I apologise if I reply slowly or not at all. I hope that improving my rationality will help me with this problem too.

I've been lurking here for some years now and have also had an account for a couple of years. I have several ideas for posts of my own. I don't know if I'll ever get around to posting them, but I at least want to get rid of the trivial inconvenience of the karma barrier.

Because there seem to be very smart people here in much greater concentration than in my everyday life, I expect that there may be significant shifts in my views resulting from conversations with you (many changes have already happened just from reading Less Wrong); nothing in this message should be considered permanent.

Replies from: beoShaffer, satt, MugaSofer
comment by beoShaffer · 2013-04-09T20:00:17.202Z · LW(p) · GW(p)

I have asperger syndrome and I'm suffering from quite bad OCD. I hope to be able to improve my rationality so that one day I'll be able to write an article about "how rationality cured my OCD"...

Have you read Brain Lock?

comment by satt · 2013-04-09T23:23:49.530Z · LW(p) · GW(p)

Welcome!

I'm a (sort of) "modal realist": I believe that every logically possible world actually physically exists (it's the simplest answer I've found to the question "Why does anything exist at all?")

I recently saw an answer that's even simpler: it's a wrong question!

Edit: and now that I've taken the time to find the relevant EY post, I see he already got halfway to that answer himself. Doesn't look like anyone's linked this paper in the comments there, actually.

comment by MugaSofer · 2013-04-12T11:41:04.762Z · LW(p) · GW(p)

ntnvafg chavfuzragf

Ner jr gnyxvat ntnvafg nal chavfuzrag urer, be whfg ivaqvpgvir chavfuzrag?

comment by ThinkOfTheChildren · 2013-04-08T19:35:31.931Z · LW(p) · GW(p)

Hey Lesswrong.

This is a sockpuppet account I made for the purpose of making a post to Discussion and possibly Main, while obscuring my identity, which is important due to some NDAs I've signed with regards to the content of the post.

I am explicitly asking for +2 karma so that I can make the post.

comment by nicdevera · 2013-04-06T14:09:11.656Z · LW(p) · GW(p)

Yo. I've been around a couple years, posted a few times as "ZoneSeek," re-registered this year under my real name as part of a Radical Honesty thing.

comment by DiscyD3rp · 2013-06-27T03:14:28.474Z · LW(p) · GW(p)

Hello LW. My pseudonym is DiscyD3rp, and this introduction is long overdo. I am 17, male, and currently enrolled in high school. I discovered this site over a year ago, via HPMoR, and have read a good percentage of the main sequences in a kinda correct order. However, i was experiencing significant angst from what I call Dungeon Crawl Anxiety (The same reason that when exploring RPG dungeons i double back and explore even AFTER discovering the correct path). I am now (re-)reading the entirety of Eliezer's posts in the ebook version of the sequences. I have found the re-read articles still useful after having gotten a basic handle on bayesian thought, and look forward to completing my enlightenment

As far as personality, I was (am) incredibly arrogant, and future goals involve MIRI and/or rationality teaching myself (one time involves an email to Eliezer claiming the ability to save the world, and subsequently learning that decision theory is HARD). I am not particularly talented in quickly absorbing technical fields of knowledge, but plan on on developing that skill. My existing talent seems to be manipulating idea and concepts easily and creatively once well understood. Im great at reading the map, but suffer difficulty in writing it. (In very mathy fields)

Im a born Christian, with a moderate upbringing, but likely saved from extremism by the internet just in time. Now a skeptic and an atheist.

Replies from: None, wedrifid, DiscyD3rp
comment by [deleted] · 2013-06-27T03:50:51.066Z · LW(p) · GW(p)

I hope you will forgive the impertinence of offering unsolicited advice: if you haven't already, you might consider teaching yourself several programming languages in your free time. It's a very marketable skill, important to MIRI's work, and in many ways suffices for a basic education in logic. The mathy stuff is probably not optional given your ambitions, and much of the same discipline and attention to detail necessary for programming can be applied to learning serious math. Arrogance will be a terrible burden if unaccompanied by usefulness and skill.

Replies from: DiscyD3rp
comment by DiscyD3rp · 2013-06-27T16:01:48.773Z · LW(p) · GW(p)

I am currently teaching myself Haskel and have a functional programming textbook on my device. While unsolicited, i apreciate ALL advice. Any other tips?

Replies from: None, sparkles
comment by [deleted] · 2013-06-27T20:00:20.581Z · LW(p) · GW(p)

Nope, that's all I got. Wait, one more thing. I learned in a painful way that scholarly credentials are most cheaply won (time and effort wise) in high school, and then it gets exponentially more difficult as you age. Every hour you spend making sure you get perfect grades now is worth ten or a hundred hours in your early-mid twenties. Looking back, getting anything less than perfect grades, given how easy that is in high school, seems utterly foolish. Maybe you already know that. Good luck!

Replies from: DiscyD3rp
comment by DiscyD3rp · 2013-06-27T20:58:40.516Z · LW(p) · GW(p)

Ok, followup question: How important are scholarly credentials vs just having that knowledge without a diploma? Obviously it varies with the field and what one wishes to use the knowledge for. However, it's important to know, because i don't want to waste resources getting a degree when alternatively auditing courses and reading textbooks is just as useful.

Ex: Art degree is useful if I want to be employed specifically by a company that requires it, but pure knowledge is just as useful for freelance/independent work in the same field, and is much cheaper.

Replies from: None, shminux
comment by [deleted] · 2013-06-27T21:26:45.309Z · LW(p) · GW(p)

How important are scholarly credentials vs just having that knowledge without a diploma?

I think in almost every field and occupation, having the scholarly credentials is extremely important. Knowledge without the credentials is pretty worthless (unless it's worthwhile in itself, but even then you can't eat it): using that knowledge will generally require that people put trust in your having it, often when they're not in a position to evaluate how much you know (either because they're not experts, or they don't have the time). Credentials are generally therefore the basis of that trust. Since freelance work either requires more trust or pays very badly and inconsistently, credentials are worth getting.

And that was the point of my previous post: some way or other, you have to earn people's trust that you can do a job worth paying you for. One way to earn that trust is to perform well despite lacking credentials. This will take an enormous amount of time and effort (during which you will not be paid, or at least not well) compared to doing whatever it takes to get as close to a 4.0 as you can. The faster you get people to trust you, the faster you can stop fighting to feed and shelter yourself and start fighting for the future of humanity.

Getting a degree sucks because it's expensive and time consuming. But if you work harder than everyone else around you, you'll get through it faster, and scholarships will make it closer to free. The whole point is just to get to a job where you're doing some real good as soon as you can. Getting solid credentials is without a doubt the fastest way to do this for almost everyone.

The exceptions are people who are either very smart or very lucky. Obviously you can't count on luck, but you shouldn't count on being a super-genius either. First, you're still in high school at 17. People smart enough to skip the normal system of credentials (which is really, really, really smart) are not in high school at 17. And the credential system is and has been tightening for decades, because higher education is so packed with people. You're going to be competing with people who have degrees at almost every level, and not just BA/S's. Empirically there's no question that they out-compete people without degrees.

Not every degree is worth getting, of course, but the 'autodidact' thing is almost certainly just going to be the longest and hardest path to getting where you want to be.

ETA: I don't remember the specifics, but EY once said that very probably every person-hour spent working toward FAI saves [insert shockingly large number] lives. Think about it this way: those people are all looking at you from the future, watching you to see what you do. You're looking back at them, and watching [insert shockingly large number] of them vanish with every hour wasted. Time is a factor.

Replies from: DiscyD3rp
comment by DiscyD3rp · 2013-06-27T22:02:13.504Z · LW(p) · GW(p)

...

I need to get my shit together. This is the most compelling argument I've heard for "jumping through the hoops".

Thank you for that, I hope I can actually change my mind about this.

comment by Shmi (shminux) · 2013-06-27T22:47:26.511Z · LW(p) · GW(p)

Programming is one of the very few occupations you still can get into without formal credentials, with some difficulty. Academia is right out, and so is any area where formal certification is a legal prerequisite, like most of engineering, commerce, law, medicine etc. You can certainly start your own business in one of many of the less regulated areas, if you are good, lucky and willing to work your ass off.

comment by sparkles · 2013-06-27T16:33:28.283Z · LW(p) · GW(p)

you use english painfully poorly . i may be one to eschew standard rules, but i am quite consistent in my own ways and i do know them

specifically, get a spellcheck and be consistent

Replies from: TimS, shminux
comment by TimS · 2013-06-27T17:10:34.240Z · LW(p) · GW(p)

For comments? Really? This ain't professional writing - and the meaning of D's writing is quite clear.

Edit:

you use english painfully poorly . i may be one to eschew standard rules, but i am quite consistent in my own ways and i do know them

specifically, get a spellcheck and be consistent

(emphasis in original)

Replies from: DiscyD3rp, Kawoomba, sparkles
comment by DiscyD3rp · 2013-06-27T18:21:38.273Z · LW(p) · GW(p)

I apreciate coming to my defense, although my writig is poor. I've bean meaning to get a copy of Elements of Style from the library, and practice does make perfect, so the more I comment the better I'll get.

comment by Kawoomba · 2013-06-27T17:13:04.450Z · LW(p) · GW(p)

Also note the irony of the pot calling the kettle black (unless it was subtle irony, which I doubt).

Replies from: TimS, sparkles
comment by TimS · 2013-06-27T17:27:12.500Z · LW(p) · GW(p)

That's not sparkles's fault; it's just McKean's Law.

comment by sparkles · 2013-06-27T20:01:55.721Z · LW(p) · GW(p)

idiot

did you read the part about being consistent in my own ways? i'm not criticizing him for eschewing standard rules!

oh, and if you're all "you screwed up by joining two independent clauses with a conjunction and no comma"? that was fucking deliberate (the relevant consistency is "for speechlike writing, i use punctuation to approximate the structure of speech")

comment by sparkles · 2013-06-27T20:03:42.642Z · LW(p) · GW(p)

spellcheck is so easy you have no excuse

bad grammar is less painful than doesn't-even-care-to-spellcheck

Replies from: DiscyD3rp, TimS
comment by DiscyD3rp · 2013-06-27T21:03:54.090Z · LW(p) · GW(p)

I apologize. I was in a particular rush at the time of that particular comment, and was using my ipod. I understand that people are liable to respond adversely to bad spelling, and hope to prevent similar mistakes in the future.

I thank you for providing me with an emotive memory for why this is important, and I hope the future is just as critical of me as you currently are.

Replies from: TimS
comment by TimS · 2013-06-27T21:18:50.433Z · LW(p) · GW(p)

If you find sparkles's comments helpful, great. I would not have found them worthwhile while I had high levels of typo problems (I've gotten better, but I'm not perfect - worse, my typos tend to change meanings rather than simply fail to make words).

Quality of writing improves with practice, but spelling mistakes of the kind you were making in the context you made them (comment section, via mobile device) are not closely correlated with quality.

was using my ipod

If these mistakes really bother you, the lesson might simply be not to post through the Ipod, without getting into the emotional reaction that sparkles is trying to generate. Because whipsawing your emotions is not a friendly act.

comment by TimS · 2013-06-27T21:11:17.578Z · LW(p) · GW(p)

Because your habit of blanking old comments is much more effective communication?

Look, criticism is tolerable when it is constructive. You seem subjectively motivated to win a status contest, not provide input leading to improvement.

Edit:

spellcheck is so easy you have no excuse

bad grammar is less painful than doesn't-even-care-to-spellcheck

Replies from: sparkles
comment by sparkles · 2013-06-27T23:17:31.510Z · LW(p) · GW(p)

aw, he still thinks ad hominem is cute

you should respect norm-violating lesswrong posters more

especially when someone around you goes all "yeah, thank you :D"

comment by Shmi (shminux) · 2013-06-27T21:24:43.600Z · LW(p) · GW(p)

specifically, get a spellcheck and be consistent

None of the common words DiscyD3rp used were actually misspelled, except for "overdo", and the missing apostrophe in "Im" was probably intentional, as were the liberties with punctuation and capitalization. But I agree with the more general point: good written English makes one more likely to be taken seriously.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-27T22:09:49.742Z · LW(p) · GW(p)

While unsolicited, i apreciate ALL advice.

"apreciate"

Replies from: sparkles
comment by sparkles · 2013-06-27T23:06:18.729Z · LW(p) · GW(p)
comment by wedrifid · 2013-06-27T05:55:35.856Z · LW(p) · GW(p)

My pseudonym is DiscyD3rp

Given your ambition I suggest changing your name to something respectable before you have spent time establishing a name for yourself. DiscyD3rp will make establishing credibility more difficult for you.

Replies from: DiscyD3rp
comment by DiscyD3rp · 2013-06-27T15:55:40.791Z · LW(p) · GW(p)

Ackknowledged. Its currently my go-to username for personal/fun use, and is less apropriate for serious science. I wasnt sure if LeasWrong was the best place to start professionally. Would you reccommend irl name or a professional paeudonym?

Replies from: wedrifid
comment by wedrifid · 2013-06-27T17:05:29.639Z · LW(p) · GW(p)

Ackknowledged. Its currently my go-to username for personal/fun use, and is less apropriate for serious science. I wasnt sure if LeasWrong was the best place to start professionally. Would you reccommend irl name or a professional paeudonym?

Given that you have indicated professional interest within the rationalist community, going with a real name is a better-than-usual option.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-27T17:11:10.776Z · LW(p) · GW(p)

Au contraire, mon ami! Given that he has indicated a professional interest within the rationalist community, he should shield his early steps -- and the invariable (pit)falls they will lead him to -- from his one unchangeable identity.

Not everyone in his early years produces EY-quality content, and even he is often confronted with -- and has to distance himself from -- certain remarks from decades ago. The internet does not forgive, its search engines do not forget.

Also, "wedrifid" advising the use of real names?

Replies from: wedrifid
comment by wedrifid · 2013-06-27T18:54:32.583Z · LW(p) · GW(p)

On the contrary, given that he has indicated a professional interest within the rationalist community, he should shield his early steps -- and the invariable (pit)falls they will lead him to -- from his one unchangeable identity.

Lesswrong usernames do not tend to remain anonymous within the rationalist community, even among those who make some (but not extreme amounts of) effort to hide them. Personality and commonly expressed beliefs tend to make identity rather obvious. On the other hand, pseudonymity is enough to shield you from a token Google search by a mainstream employer associating you with any non-mainstream beliefs you express.

Do note that a 'better-than-usual' option is hardly a ringing endorsement. It's more a failure to advise against in this case than particular advice for it. My only serious advice is to abandon any name that makes him look like a 13-year-old troll.

Also, "wedrifid" advising the use of real names?

I have 27,746 karma worth of learning from that mistake.

comment by DiscyD3rp · 2013-06-27T03:36:57.341Z · LW(p) · GW(p)

A point I meant to make in my original comment: I hope the community support will more effectively encourage rational behavior in myself than I've currently been able to do solo. Enforce your group norms, and i hope to adapt to this tribe's methods quickly, unless more effective self hacks are known.

comment by someonewrongonthenet · 2013-04-02T09:39:37.246Z · LW(p) · GW(p)

I made an account seven months ago, but I wasn't aware of the last welcome thread, so I guess I'll post on this one.

I'm not sure when I exactly "joined". My first contact with this community was passing familiarity with "Overcoming bias" as one of the blogs which sometimes got linked in the blogosphere I frequented in high school. As typical of my surfing habits in those days, I spent one or two sessions reading it for hours and then promptly forgot about all it. Second contact was a recommendation from another user on reddit to Lesswrong. Third contact was a few months later when my roommate recommended I read hpmor. I lurked for a short time, and made an account, and went to my first few meetups about two months ago. Meetups are fun, you meet lots of smart people, and I highly recommend it.

First impressions? I think this is the (for lack of a better word) most intellectual internet community that I am familiar with. Almost every post or comment is worth reading, and the site has got an addictive reddit-ish feel about it (which hampers my productivity somewhat, but que sera, sera.)

I've noticed that most of the opinions here tend to align precisely with my own, which is gratifying, because it's evidence that my thinking makes sense. However, it's also irritating, because it means I learn less and I have little to contribute. It's a little disconcerting for someone who thrives on discussion and is usually forced to play the role of the contrarian. Not that I'm complaining - it would be quite worrying if rationalists didn't tend to agree. Plus, it's really refreshing to have discussions where two people mutually try to figure out where the truth lies, rather than arguments where two people try to convince each other of something.

Biggest upside: Lesswrong has its own rationality/philosophy-specific jargon, which is really helpful for communicating complicated ideas using very few words. In addition to introducing me to a few concepts I'd never even considered, I think the greatest benefit I got from reading this site is that I've got a better language to verbalize abstract concepts.

What I'd like to see: expansion from the philosophical side into practical things, like scientific knowledge, useful skills, etc. It's not often you get a community hub with such a high concentration of skills and knowledge, and I think it should be put to more use. (The rationalist-utilitarian charities Less Wrong is loosely affiliated with are one good example of this being done successfully.)

Replies from: None
comment by [deleted] · 2013-04-08T00:29:38.305Z · LW(p) · GW(p)

I've noticed that most of the opinions here tend to align precisely with my own, which is gratifying, because it's evidence that my thinking makes sense. However, it's also irritating, because it means I learn less and I have little to contribute.

I noticed this as well while first reading the sequences. I flew through the blog posts, absorbing it all, since it all either matched my own thoughts or was so similar that it hardly took effort to comprehend. But I struggled to find anything original to say, which was part of why I initially didn't bother making an account - I didn't want to simply express agreement every time. (And now I notice that my second comment is precisely that.)

Biggest upside: Lesswrong has its own rationality/philosophy-specific jargon, which is really helpful for communicating complicated ideas using very few words.

That's one of the things I've frequently benefited from in my thinking. I have found that the concepts behind keywords like dissolving the question, mysterious answers, map and territory, and the teacher's password can be applied in so many areas, and that having the arsenal to use them makes it much easier to think clearly about otherwise elusive concepts.

comment by Camaragon · 2013-07-08T09:43:53.006Z · LW(p) · GW(p)

Hello, my name is Cam :]

My goals in life are:

  1. To build a self-sufficient farm with renewable alternative energy and everything.
  2. Acquire financial assets to support the building of my farm and the other hobbies and activities I pursue.
  3. To further my fitness and health and maintain it.
  4. Love and Romance.

That's pretty much it, hahaha. I want to learn the ways of a Rationalist to make the best decisions and solutions for problems I might encounter in pursuing these goals! People tend to say I have an immature or childlike air around me, which is why I am often looked down upon and not taken seriously. I think it's how I construct my sentences, maybe? My English is only of decent quality. Or maybe I just see things too simply and positively, and people see it as being naive? Well, anyway, I look forward to having you as one of my buddies! :D

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-18T12:40:15.064Z · LW(p) · GW(p)

To build a self-sufficient farm with renewable alternative energy and everything.

Have you already built something? Do you have specific plans?

comment by Zoe · 2013-07-06T04:39:32.362Z · LW(p) · GW(p)

Hello Less Wrong community members,

My name is Zoe. I'm a philosophy student, and I'm increasingly discombobulated by the inadequacy of my field of study to teach me how to Actually Do Things. I discovered Less Wrong 18 months ago, thanks to the story Harry Potter and the Methods of Rationality. I've read a number of articles and discussions since then, mostly whenever I felt like reading something both intelligent and relevant, but I have not systematically read through any sequence or topic.

I have recently formed the goal of developing the skills necessary to 'raise the waterline' of rationality in the meatspace discussions in which I take part, but without appearing to put anyone down.

Working towards this goal will make me interact more with a greater proportion of the people around me, which is something that I need to do. Right now, apart from a few friends whose minds I love, I usually flee most conversations at the earliest socially acceptable moment, due to sheer boredom or annoyance and a huge lack of confidence in my ability to steer the conversation somewhere interesting. I want to change this by improving myself (since Less Wrong has taught me well that it would be foolish to wait or hope for others to change or improve when I could be changing myself).

While so far my use of Less Wrong has been recreational, I'm creating an account now to be able to participate in discussions, not because I think I have anything really important to say, but because practicing rationality not just in my mind but while actually interacting is probably a good way to go about my newfound objective. I would really like to become able to introduce rationality into conversations with the average non-rationalist and do so tactfully, and I think Less Wrong can help me.

Do you agree with my assessment that the Less Wrong posts and discussion community have the potential to help me further my goal? If so, how do you think I should best use the resources here?

I'm very much looking forward to interacting with all of you!

Zoé

PS : My first language is French. I really do welcome any and all nitpicks and corrections about my English.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-07-11T06:37:06.473Z · LW(p) · GW(p)

Welcome!

I would really like to become able to introduce rationality into conversations with the average non-rationalist and do so tactfully, and I think Less Wrong can help me.

Let me know if you figure something out. So far I haven't been able to do it without coming across as weird.

comment by wubbles · 2013-05-30T00:12:54.699Z · LW(p) · GW(p)

Hello, my name is Watson. The username comes from my initials and a Left 4 Dead player attempting to pronounce them. I am a math student at UC Berkeley and a longtime lurker. I've got a post on rational investing, based on the conclusions of years of research by academic economists, but despite lurking I never realized there is a karma requirement for posting in Discussion. I'm interested in just about everything, a dangerous phenomenon.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-05-30T00:18:21.728Z · LW(p) · GW(p)

Hi Watson! I didn't know you were a Berkeley student. Starting this fall?

Replies from: wubbles
comment by wubbles · 2013-05-30T00:25:08.546Z · LW(p) · GW(p)

Yes.

comment by lesliecuthbert · 2013-05-20T05:54:00.730Z · LW(p) · GW(p)

Hello to the Less Wrong community. My name is Leslie Cuthbert and I'm a lawyer based in the United Kingdom. I look forward to reading the various sequences and posts here.

comment by hylleddin · 2013-05-13T21:55:18.406Z · LW(p) · GW(p)

Hi! I'm a 24 year old woman starting grad school this fall studying mathematics. Specifically I'm interested in mathematically modelling organizational decision making.

My parents raised me on Carl Sagan and Michael Shermer, so there was never really a point that I didn't identify as a rationalist. I discovered less wrong long enough ago that I don't actually remember how I found it. I've been lurking here for several years. I finally registered after doing the last survey, though I didn't make another post until the last few days.

Oh, and I have a talking coyote in my head. This post has more information. I'm going to be diving into the psychological literature to understand this phenomenon better, and I'm planning on making a post with anything I find out that would be useful for rationalists.

comment by jetm · 2013-04-09T02:29:40.122Z · LW(p) · GW(p)

I've been browsing the site for at least a year. Found it through HP:MoR, which is absolutely amazing. I've been coming to the LessWrong study hall for a couple weeks now and have found it highly effective.

For the most part, I haven't really applied this at all. I ended up making a final break with Christianity, but the only significant difference is that I now say "Yay humanism!" instead of "Yay God!" I've used a few tricks here and there, like the Sunk Cost Fallacy and the Planning Fallacy, but I still spend the majority of my time not thinking about things. Because thinking is hard.

Then I started trying again to figure out what I should do with my life. Now, the first time I tried this I spent less effort on the decision than I did on most papers I've written for class. Ended up signing a five-year contract with miserable results. Now I'm actually thinking. It is incredibly difficult, but I am convinced that it is worth it.

My current goals are to broaden my knowledge (I know a ton of information about classical music but almost nothing else) and sharpen my critical thinking skills.

Replies from: ModusPonies
comment by ModusPonies · 2013-04-12T14:20:51.598Z · LW(p) · GW(p)

Now I'm actually thinking. It is incredibly difficult, but I am convinced that it is worth it.

I strongly agree with both of those statements.

Do you know what you'd like to do with the knowledge and skills you're acquiring, or is that still an open question?

Replies from: jetm
comment by jetm · 2013-04-12T21:24:34.410Z · LW(p) · GW(p)

What I want to do is to figure out what I want to do. My basic (and vague) goal is to do the most amount of good with my future career. If I make that decision with my current tools, I will likely overlook something.

Replies from: ModusPonies
comment by ModusPonies · 2013-04-12T22:25:47.000Z · LW(p) · GW(p)

Have you looked at 80,000 hours? They have a lot of great resources for people in exactly your situation, including individual advice. I've found a lot of their posts extremely useful. (I'm currently earning to give, largely as a result of arguments I ran into here on LW. I'd be happy to talk in more depth, if you think it would be useful.)

comment by Ronak · 2013-04-08T16:25:50.499Z · LW(p) · GW(p)

Well, hello. I'm a first-year physics PhD student in India. Found this place through Yvain's blog, which I found when I was linked there from a feminist blog. It's great fun, and I'm happy I found a place where I can discuss stuff with people without anyone regularly playing with words (or, more accurately, where it's acceptable to stop and define your words properly). So, one of my favourite things about this place is the fact that it's based on the map to territory idea of truth and beliefs; I've been using it to insult people ever since I read it.

The post says I should say why I identify as a rationalist; I wouldn't, personally, 'cause I never feel like being better at rationality is the point, and whatever you or I say the word means it stands to be misunderstood in this way. But as for why I'm interested in this place at all: better calibration, and the possibility of better communication.

Anyway, still going through the sequences (personally, I would prefer reading something more mathematical, but I can understand why these posts aren't). I have a whole tab group in Firefox for LW right now, because things got out of hand.

As for special personal interests, I'm ridiculously scatterbrained, and so haven't garnered any non-trivial understanding of anything. One vaguely interesting thing I do enjoy doing is trying to charitably understand some mystical-looking stuff, like the Tao te Ching or Maya or art criticism (warning: my blog is a review blog, but you won't find much of this if you click through, as there I just use the conventions post-justification and modify them whenever). My methodology: ask what questions they were thinking about to posit the answers they did, and then think about the questions myself. Collect more information, and update. Maybe I'll even write about some of this once I have a better grasp of how to explicitly use the tools presented here.

Also, I have a question about Anki: is the web part defunct or something? I can't find anything there. Whatever I search for, I get a blank page. (I was going to post in that page, but this is more likely to be replied to.)

Replies from: Ronak
comment by Ronak · 2013-04-08T16:26:30.011Z · LW(p) · GW(p)

Oh, and can I latex in the comments?

Replies from: tgb
comment by tgb · 2013-04-08T16:39:45.189Z · LW(p) · GW(p)

Yup, but it's not super elegant! There's some info here.

Also, AnkiWeb.net works for me - but you need to use https:// for Anki 2 and http:// for Anki 1.

comment by Stefan_Schubert · 2014-01-07T22:57:51.687Z · LW(p) · GW(p)

Hi,

I'm a philosopher (postdoc) at the London School of Economics who recently discovered Less Wrong. I am now reading through lots of old posts, especially Yudkowsky's and lukeprog's philosophy-related material, which I find very interesting.

I think lukeprog is right when he points out that the general thrust of Yudkowsky's philosophy belongs to a naturalistic tradition often associated with Quine's name. In general, I think it would be useful to situate Yudkowsky's ideas vis-à-vis the philosophical tradition. I hope to be able to contribute something here at some point (though I should point out that I'm not an expert in the history of philosophy).

lukeprog argues for these ideas in two excellent articles:

http://lesswrong.com/lw/4vr/less_wrong_rationality_and_mainstream_philosophy/ http://lesswrong.com/lw/4zs/philosophy_a_diseased_discipline/

I agree with most of what is said there, and am myself very critical of mainstream analytical philosophy. It also seems to me that the overall program advocated here - to let psychological knowledge permeate all philosophical arguments in a very radical way - is very promising. Though there are philosophers who make use of psychology, do experiments, etc., few let it influence their thinking as radically as it is done here.

The site seems very interesting in other respects as well. I am presently reading up on cognitive science (I found this site after googling Stanovich's Rationality and the Reflective Mind, which I have now read) and am grateful for the info on this subject gathered on Less Wrong.

comment by LM7805 · 2013-09-18T01:38:32.252Z · LW(p) · GW(p)

Hi. I've been a distant LW lurker for a while now; I first encountered the Sequences sometime around 2009, and have been an avid HP:MOR fan since mid-2011.

I work in computer security with a fair bit of software verification as flavoring, so the AI confinement problem is of interest to me, particularly in light of recent stunts like arbitrary computation in zero CPU instructions via creative abuse of the MMU trap handler. I'm also interested in applying instrumental rationality to improve the quality and utility of my research in general. I flirt with some other topics as well, including capability security, societal iterated game theory, trust (e.g., PKI), and machine learning; a meta-goal is to figure out how to organize my time so that I can do more applied work in these areas.

Apart from that, lately I've become disillusioned with my usual social media circles, in part due to a perceived* uptick in terrible epistemology and in part due to facing the fact that I use them as procrastination tools. I struggle with akrasia, and am experiencing less of it since quitting my previous haunts cold turkey, but things could still be better and I hope to improve them by seeking out positive influences here.

*I haven't measured this. It's entirely possible I've become more sensitive to bad epistemology, or some other influence is lowering my tolerance to bad epistemology.

comment by [deleted] · 2013-07-25T11:27:06.139Z · LW(p) · GW(p)

Hello, I am a 46 yr old software developer from Australia with a keen interest in Artificial Intelligence.

I don’t have any formal qualifications, which is a shame as my ideal life would be to do full time research in AI - without a PhD I realise this won’t happen, so I am learning as much as I can through books, practice and various online courses.

I came across this site today from a link via MIRI and feel like I have struck gold - the articles, sequences and discussions here are very well written, interesting and thoughtful.

My current goals are to build a framework that would allow a machine to manage its information (goals, tasks, raw data, external biases, weightings, and eventually its “knowledge”). As I understand it the last bit hasn’t been solved yet as it implies the machine needs a consciousness, but I am having fun playing around with it.

comment by BraydenM · 2013-06-10T09:46:24.466Z · LW(p) · GW(p)

Hi, I'm Brayden, from Melbourne Australia. I attended the May 2013 CfAR workshop in Berkeley about 1 year after finding Less Wrong, and 2 years after finding HPMOR. My trip to The States was phenomenal, and I highly recommend the CfAR workshops.

My life is significantly better now than it was before, and I think I am on track with the planning process for eventually working on the highest impact causes that might help save the world.

comment by Scott Garrabrant · 2013-06-09T23:08:06.294Z · LW(p) · GW(p)

Hello Less Wrong! I am Scott Garrabrant, a 23-year-old math PhD student at UCLA, studying combinatorics. I discovered Less Wrong about 4 months ago. After reading MoR and a few sequences, I decided to go back and read every blog post. (I just finished all of Eliezer's OB posts.) I was going to wait and start posting after I got completely caught up, but then I started attending weekly meetups 2 months ago, and now I need to earn enough karma to make meetup announcements.

I have been interested in meta-thinking for a long time. I have spent a lot of time thinking about the nature of rationality, purely out of curiosity, and have independently reached many of the same conclusions I have found on this blog. I believe I realized, in high school about 6 years ago, that decision/probability theory was the correct language for talking about rationality. It has made me very happy to learn that there are so many like-minded people.

However, there has been one mistake I have been making for a long time. I have been giving other people too much respect in their rationality. I have been treating other people as almost rational agents with different utility functions and very different prior probabilities. This blog has taught me how wrong that view was, which is causing me to rethink some of my prior views.

One thing I would like some help in deciding right now is about Unitarian Universalism. I would love it if any rationalists who know anything about Unitarianism (or who don't) could help me out. I am agnostic (if you define the god hypothesis to include the simulation hypothesis; atheist otherwise). I believe that most of the bad parts of religion and theism come from the fact that they tend to encourage irrationality. So far, my picture of the average Unitarian is of above-average rationality, but not great. The main thing that attracts me to the group is that they (at least claim to) promote "a free and responsible search for truth and meaning." Their search algorithms could really use some work, but they both view truth as a goal and understand that they have not attained it completely.

In looking for a local community to provide "brownies and babysitters," it seems to be the best I have found. Also, although I do not have a "god shaped hole" that needs to be filled, I understand that many people do, and so I can see that it might be good to support an organization that helps those people fill that hole with something that does not encourage irrationality.

On the other hand, sometimes I feel like Unitarians care a lot more about the "free" part of "free and responsible search for truth and meaning" than the "responsible" part. I am worried that they like to discuss their individual beliefs as they would discuss their favorite colors, and never actually change. Maybe with our current messed-up society, the first step is for people to feel free to believe what they want, and then learn how to be critical.

In attending Unitarian churches, I have repeatedly enjoyed myself, thought about interesting philosophy (even though I often disagree with the sermon), had sufficiently strong emotional responses to the music (e.g. "Imagine"), and been encouraged by how much people were willing to help each other. I already know that I enjoy the experience. What I am trying to decide is whether, morally, I should be willing to support this organization. For the future, I am also trying to decide if I should be worried that being around this kind of thinking might be bad for my future kids.

comment by codingstrand · 2013-05-31T14:54:56.315Z · LW(p) · GW(p)

As a new member of this community, I am having a bit of difficulty with the numerous abbreviations that people use in their writing on this site. For example, I have come across a number of these that are not listed on the Jargon page (e.g. EY, PC, NPC, MWI...). I realize that as a new member, I will eventually understand many of these; however, it is very frustrating trying to read something and be continually distracted by having to look up some of these obscure terms. This is especially a problem on the Welcome Thread, where a potential new member could be put off by the argot-like discussions. Alternatively, if someone wants to use an abbreviation that is not common or listed on the Jargon page, then perhaps they could spell it out at first use and then resort to the abbreviation thereafter within the post. Setting up and using a text expander is another possible solution.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-07-11T06:58:53.745Z · LW(p) · GW(p)

I added the acronyms you mentioned to the Jargon page. Tell me if you come across any more. You can also edit the page to add them yourself as you learn them if you like.

comment by anon2280 · 2013-05-26T21:15:10.101Z · LW(p) · GW(p)

Hi, my name is Danon. I just joined Less Wrong after reading a wonderful post by Swimmer963: http://lesswrong.com/lw/9j1/how_i_ended_up_nonambitious/ on her reasoning for why she ended up without ambition (actually, I felt she had a lot of ambition). I got to her post while trying to figure out why I am lazy; I was wondering if it was because I had no (or little, if any) ambition. Her post got me asking the right questions, and I have finally been able to save a private draft on LW setting out my reasoning about my laziness. It really is refreshing to read the posts here at LW. Thank you for having me.

comment by Alrenous · 2013-04-09T02:49:24.624Z · LW(p) · GW(p)

Apparently I have just registered.

So, I have a question. What's an introduction do? What is it supposed to do? How would I be able to tell that I've introduced myself if I somehow accidentally willed myself to forget?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-11T22:22:30.173Z · LW(p) · GW(p)

Well, I didn't introduce myself, but I guess it lets people know stuff about you without having to piece it together from your comments?

Replies from: Alrenous
comment by Alrenous · 2013-04-14T19:09:26.389Z · LW(p) · GW(p)

Sounds like a good goal to me. However, then I have to guess what features of mine are useful to share, which I've proven to be less than 50% effective at in the past. (For example, that was a feature. Does anyone care?) It also relies on me having a more accurate self-impression than I've noticed anyone else having.

I guess, taken together, I just learned that I don't think introductions are in fact epistemically worthwhile. So I'll update my question: are introductions repairable, and if so, how?

An additional issue is that I'm skilled at being deliberately inflammatory or conciliatory. Good enough that I sometimes do it by accident. I can easily overcome my resistance to introduction by doing either, but I'd rather not. It's likely this makes doing an introduction cost-ineffective for me in particular. So my question here is, have I forgotten a reason to do an introduction, which would show it's still worthwhile? Either, despite being inflammatory, or despite having to work hard to prevent it being inflammatory?

Replies from: TheOtherDave, CCC, MugaSofer
comment by TheOtherDave · 2013-04-14T20:02:31.350Z · LW(p) · GW(p)

My $0.02: the most valuable piece of information I get from open-ended introductions is typically what people choose to talk about, which I interpret as a reflection of what they consider important. For example, I interpret the way you describe yourself here as reflecting a substantial interest in how other people judge you.

Replies from: Alrenous
comment by Alrenous · 2013-04-14T20:28:54.174Z · LW(p) · GW(p)

Found helpful. Your conclusion is true, but not something I'd think to mention.

Now I can construct an introduction template: "I'm Alrenous, and I find X important." It won't be complete, but at least it also won't be inaccurate.

comment by CCC · 2013-04-14T19:30:17.758Z · LW(p) · GW(p)

An additional issue is that I'm skilled at being deliberately inflammatory or conciliatory. Good enough that I sometimes do it by accident.

Deliberately... by accident? Accidentally inflammatory, or conciliatory makes sense, yes, but anyone can be that.

My language parsing module is returning a reasonable probability that I'm misunderstanding something in those sentences.

I guess, taken together, I just learned that I don't think introductions are in fact epistemically worthwhile. So I'll update my question: are introductions repairable, and if so, how?

To provide a starting point - a 'this is what I choose to say about myself' - which gives other people some information about your beliefs, personality, and other elements of identity. Often, parts of the introduction will be true and parts false (often due to exaggeration). It will certainly be incomplete, due to limitations of language. But, in the case of error, it would be repairable by demonstrating a correct identity; if (for example) someone erroneously concludes from your introduction that you can't stand the taste of peas, then that error is repairable by your happily eating a large plate of peas.

Without the starting point, people are forced to start out with a blank, generic depiction of you, and then add observed features of identity one by one.

That's what I think, at least.

Replies from: Alrenous
comment by Alrenous · 2013-04-14T20:28:41.814Z · LW(p) · GW(p)

Deliberately by accident: When I do it on purpose, it works. Sometimes, I have the impulse to, decide I shouldn't, and then I do it anyway.

For example, I think this conversation should be about introductions, not me, at least until I settle on how I think the introduction should go. I could easily make it about me, though - I almost did so, accidentally. Specifically, about how I hijack threads without meaning to.

you can't stand the taste of peas

I in fact can't stand the taste of peas. Except fresh ones, as in, I just picked them, which are great.

To provide a starting point - a 'this is what I choose to say about myself' - which gives other people some information about your beliefs, personality, and other elements of identity.

My problem is that I find introductions are mainly error. That said you've made me think of some things that I can do that should at least be worthwhile, even if not really introduction-y.

Edit: also revealed that one of my heuristics is being inconsistently applied.

Replies from: CCC
comment by CCC · 2013-04-15T09:17:24.741Z · LW(p) · GW(p)

Deliberately by accident: When I do it on purpose, it works. Sometimes, I have the impulse to, decide I shouldn't, and then I do it anyway.

Ah - so it's deliberately, including when you feel you shouldn't but want to in any case. Your definition of 'by accident' differs from mine (I define 'by accident' as undeliberate and almost always unexpected).

comment by MugaSofer · 2013-04-15T11:08:21.344Z · LW(p) · GW(p)

I have to guess what features of mine are useful to share, which I've proven to be less than 50% effective at in the past.

Let's see, what features have I seen come in handy ...

The philosophical positions you hold would be good; helps stop people assuming you hold opinions you don't.

Some people might like to know roughly where you are, in case they need to talk about something that differs between nations, or in case you live near them and they can rope you into attending meetups.

If you have any areas of expertise/qualifications, people who seek knowledge on those topics (for whatever purpose) would know they can ask you and get the mainstream position on things. For example, people with any training in physics will be treated as evidence in debates over the many-worlds interpretation of quantum mechanics. This could be a double-edged sword, though, in theory.

An additional issue is that I'm skilled at being deliberately inflammatory or conciliatory. Good enough that I sometimes do it by accident.

Hmm. Does this conciliatory ability extend to sacrificing your interests? Because if not, it sounds like a handy minor superpower.

comment by RationalAsh · 2013-04-07T08:32:03.251Z · LW(p) · GW(p)

Well... I'm an engineering student who intends to graduate in electronics. I became interested in AI when I started learning programming at the age of 12. I became fascinated with what I could make the computer do. And rather naively I tried for months and months to program something that was "intelligent" (and failed horribly of course). I set that project aside temporarily but never stopped thinking about it. Years later I discovered HPMoR and through it LessWrong and suddenly found a whole community of people interested in AI and similar things. That was also about the same time I became a full blown atheist. So while exploring this website, I felt like a kid in a candy store.

Once I graduate I really hope I can do some research in AI.

comment by franz_bonaparta · 2013-04-02T23:53:58.413Z · LW(p) · GW(p)

Hello everyone, I'm Franz. I don't actually remember how I happened upon this site, but I do know it was rotting in my unsorted bookmark folder for over a year before I actually decided to read any post. This I do regret.

Because of circumstances I am currently in Brazil, and due to a lack of internet infrastructure I have to read the downloadable versions of the sequences and won't be able to comment often. I do enjoy reading your insightful thoughts!

I was wondering if anyone has directly applied EY's methods to their own life? For what reason, and what were the results? I tend to be very unproductive with my time and incredibly guilty of procrastination, and was wondering what introspection tools and/or protocols others in similar positions have used to overcome these problems.

(I was also curious if a diagram of priors/reflection of a Bayesian-rationalist existed somewhere, as I am probably more of a visual learner)

Replies from: None, ModusPonies, Watercressed
comment by [deleted] · 2013-04-04T20:36:07.430Z · LW(p) · GW(p)

Welcome!

I was wondering if anyone has directly applied EY's methods to their own life?

I have. Specifically, the How to Actually Change Your Mind sequence was very helpful to me in real life.

However, in spite of how some people feel about this site, for me, it is not about [only] EY. Lots of things from Less Wrong have affected my life outside of Less Wrong, specifically (quoting from an older draft of this comment, now, so that is why the flow may be weird here):

One of the most helpful posts I came upon here was "The Power of Pomodoros", which introduced me to the Pomodoro technique. See this PDF from the official website for a more detailed guide.

Another helpful thing I discovered via Less Wrong is the Less Wrong Study Hall. See "Co-Working Collaboration to Combat Akrasia" and "Programming the LW Study Hall". This is the current study hall (on Tinychat), but I think it will eventually be moved to somewhere else.

Less Wrong taught me about existential risk and efficient charity. This has produced a tangible change in what I do with my money.

lukeprog's The Science of Winning at Life sequence was also very helpful to me.

I could write more, but I've already spent too much time on this comment. Enjoy Less Wrong!

comment by ModusPonies · 2013-04-04T13:15:54.820Z · LW(p) · GW(p)

I tend to be very unproductive with my time and incredibly guilty of procrastination, and was wondering what introspection tools and/or protocols others in similar positions have used to overcome these problems.

http://lesswrong.com/lw/3w3/how_to_beat_procrastination/ may be what you're looking for. The community is also fond of the Pomodoro technique (i.e., work for 25 minutes, then take a break for 5 minutes, then repeat; use an actual timer for both parts), which I can vouch for personally, and the Getting Things Done method, which I haven't yet tried. Beeminder is also great, but requires an internet connection, so it may not be what you need.
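
For anyone who wants to try the technique without hunting for an app, a minimal sketch of a command-line timer in Python might look like the following. The 25/5 split is just the convention described above; the function name, cycle count, and messages are arbitrary choices, so adjust to taste:

    import time

    def pomodoro(work_minutes=25, break_minutes=5, cycles=4):
        """Alternate work and break periods, printing a prompt at each switch."""
        for i in range(1, cycles + 1):
            print(f"Pomodoro {i}: work for {work_minutes} minutes.")
            time.sleep(work_minutes * 60)   # work period
            print(f"Pomodoro {i} done: take a {break_minutes}-minute break.")
            time.sleep(break_minutes * 60)  # break period
        print("All cycles finished; take a longer break.")

    if __name__ == "__main__":
        pomodoro()

Nothing fancy: the point is only to have something external force the switch for you instead of relying on willpower.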

Replies from: Paradrop
comment by Paradrop · 2013-04-04T14:10:54.350Z · LW(p) · GW(p)

I can attest to Beeminder. If you're able to read and send emails daily, you can use it.

comment by Watercressed · 2013-04-04T04:29:43.120Z · LW(p) · GW(p)

An Intuitive Explanation of Bayes' Theorem has prior/posterior diagrams.

comment by AABoyles · 2014-09-26T15:14:57.186Z · LW(p) · GW(p)

Hi Everyone! I'm AABoyles (that's true most places on the internet besides LW).

I first found LW when a colleague mentioned That Alien Message over lunch. I said something to the effect of "That sounds like an Arthur C. Clarke short story. Who is the author?" "Eliezer Yudkowsky," he said, and sent me the link. I read it, and promptly forgot about it. Fast forward a year, and another friend posted the link to HPMOR on Facebook. The author's name sounded very familiar. I read it voraciously. I subscribed to the Main RSS feed and lurked for a year.

I joined the community last month because I wanted to respond to a specific discussion, but I've been having a lot of fun since I got here. I'm interested in finding ways to achieve the greatest good (read: reducing the number of lost Disability Adjusted Life Years), including Effective Altruism and Global Catastrophic Risk Reduction.

comment by vollmer · 2013-08-02T04:37:33.130Z · LW(p) · GW(p)

I'm a Swiss medical student. I've read HPMoR and a large part of the core sequences. I've attended LW meetups in several US cities and met quite a few of you in the Bay Area and/or at the Effective Altruism Summit. I've interned for Leverage Research. I co-founded giordano-bruno-stiftung.ch (an outreach organisation with German translations of some LessWrong blog posts, and other posts about rationality). Looking forward to participating in the comment section more often.

Replies from: vollmer
comment by vollmer · 2013-08-03T17:41:25.096Z · LW(p) · GW(p)

this is a test

comment by kvd · 2013-06-18T22:10:57.104Z · LW(p) · GW(p)

Hi everyone,

I have been lurking LessWrong on and off for quite a while. I originally found this place through HPMoR; I thought the 'LessWrong' authorname was clever and it was nice to find out there was a whole community based around aiming to be less wrong! My tendency to overthink whatever I write has gotten in the way of actually taking part in the community so far though. Maybe now that I have gotten the introduction out of the way I'll be more likely to post.

A bit more about myself: I'm a student from the Netherlands, doing a masters in Artificial Intelligence. I'm currently planning a research internship in Albany, NY, that will start sometime this summer. I'd love to get in touch with people from there by the way, so if anyone is interested let me know!

comment by TemplarR · 2013-06-12T12:59:42.830Z · LW(p) · GW(p)

Hello, Less Wrong! I'm Michael Odintsov from Ukraine, so sorry for my not-nearly-perfect :) English. Just like many here I found this site from Yudkowsky's link while reading his "Harry Potter and the Methods of Rationality". I am a 27-year-old programmer, fond of science in general and mostly math of all kinds.

I have worked a bit in the fields of AI and machine learning and am looking forward to new opportunities. Well... that's almost all that I can tell about me right now - never been a great talker :) If anyone has questions or needs some help with CS-related topics, just ask, I am always ready to help.

comment by DSherron · 2013-05-01T16:02:09.945Z · LW(p) · GW(p)

Hi! I've been lurking here for maybe 6 months, and I wanted to finally step out and say hello, and thank you! This site has helped to shape huge parts of my worldview for the better and improved my life in general to boot. I just want to make a list of a few of the things I've learned since coming here which I never would have otherwise, as nearly as I can tell.

  • I've dropped the frankly silly beliefs I held as an evangelical Christian; I wasn't as bad as most in that category but in hindsight that was just due to luck and strong logical skills. (I knew better than to assert that everyone should [know that they should] believe, but nonetheless I chose to follow a harmful "morality")
  • I've learned how to argue effectively and identify real disagreements as opposed to simple definitional disputes, or asking the wrong question. I've used this to resolve a long-standing (think years) dispute with my cousin about the application of the word "literally" as it relates to hyperbole
  • I've realized that intelligence isn't just a fun party trick; I can use it directly to improve my life. Instrumental rationality was something that just never crossed my mind before coming here; intellect made me a good programmer but it sucked that I couldn't get girls. Now I've been actively dieting and getting exercise just because I suddenly realized that I can actually improve my life, if I try.
  • In a similar vein, akrasia is a thing I can fight, and a thing I can fight smart. If just jumping in and doing doesn't work, I have options.
  • Cryonics exists, like, right now. I can go out and buy immortality (or at least a decent chance of a really long life). That's a huge deal to me.
  • There are other people who think the same way I do. I always had trouble finding any combination of intelligence and epistemic rationality, plus the desire to talk about relevant topics using those skills. I knew, realistically, that I couldn't be that exceptional, but I had trouble finding evidence to disprove it (not that I looked that well, mind you).
  • Polyamory (sp?) is a real thing that real people do, not just a cool idea from a story. I haven't made any use of the observation yet, but it tends to mesh well with many of my intuitions about romance.
  • Did I really just almost forget the basic premise of this site? I've become, in general, less wrong (epistemic rationality). Quantum Mechanics are awesome!
  • Probably a bunch of stuff that isn't coming to mind at the moment.

Anyway, for all of that and more, thanks! This site has influenced me more than anything or anyone else ever has. It's really difficult to describe what it feels like to be less wrong and know exactly how and why, but I guess you guys probably know anyway.

And a few questions. First, I noticed that there's a meetup in Austin but not in the (much larger) Houston area. Is this just a lack of members in the area (this is the Bible Belt after all) or just because no one's tried to start one? Second, there may be a thread already devoted to this somewhere, but what are some good math or computer science books I should look for? I already know the basics of calculus and I can throw my own solutions together for most harder problems, but I'd like to get a stronger understanding of higher-level math and of computer algorithms that use it. And third, are there any other websites/blogs (besides OB) that have a similar tone/community to this one, though perhaps on different topics, which anyone would recommend?

Replies from: shminux
comment by Shmi (shminux) · 2013-05-01T17:06:07.696Z · LW(p) · GW(p)

And third, are there any other websites/blogs (besides OB) that have a similar tone/community to this one, though perhaps on different topics, which anyone would recommend?

Anything written by Yvain, including his old and new blogs, though someone ought to compile a list of his greatest hits.

comment by citizen9-100 · 2013-04-12T20:22:44.091Z · LW(p) · GW(p)

Hello LW users, I use the alias Citizen 9-100 (nine one-hundred), but you may call me Nozz. This account will be shared between my sister and me, but we will sign our comments with the name of whoever is speaking. I would write more, but what I had already written didn't post due to a laptop error, so all I'll say for now is: if there's anything you'd like to know, feel free to ask, just make sure you clarify who you're asking. BTW, for those interested, you may call my sister any of the following: Sam, Sammy, Samantha, or any version of that :)

Replies from: Alicorn, MugaSofer
comment by Alicorn · 2013-04-12T20:53:57.817Z · LW(p) · GW(p)

I don't recommend sharing an account. It will be confusing, and signatures are not customary here.

comment by MugaSofer · 2013-04-12T21:44:50.487Z · LW(p) · GW(p)

Regardless of how good an idea sharing accounts is (not very, I'm guessing, for the record) who on earth downvotes an introduction? Upvoted back to neutral.

comment by VCavallo · 2013-04-04T15:17:54.364Z · LW(p) · GW(p)

Hey! My name is Vinney, I'm 28 years old and live in New York City.

To be exceedingly brief: I've been working through the sequences (quite slowly and sporadically) for the past year and a half. I've loved everything I've seen on LW so far and I expect to continue. I hope to ramp up my study this year and finally get through the rest of the sequences.

I'd like to become more active in discussions but feel like I should finish the sequences first so I don't wind up making some silly error in reasoning and committing it to a comment. Perhaps that isn't an ideal approach to the community discussions, but I suspect it may be common...

Replies from: None
comment by [deleted] · 2013-04-04T15:32:51.788Z · LW(p) · GW(p)

Welcome!

I'd like to become more active in discussions but feel like I should finish the sequences first so I don't wind up making some silly error in reasoning and committing it to a comment.

Do finish the sequences, but you won't be done then; you'll still make stupid mistakes. Best to start making them now, I think.

Replies from: VCavallo
comment by VCavallo · 2013-04-04T15:37:08.892Z · LW(p) · GW(p)

Thanks, I'll get started making stupid mistakes as quickly as I can! I'm sorry I wasn't able to make any here.

comment by briancrisan · 2014-01-21T11:30:25.113Z · LW(p) · GW(p)

Greetings!

I'm Brian. I'm a full-time police dispatcher and part-time graduate student in the marriage and family therapy/counseling master's degree program at the University of Akron (in northeast Ohio). Before I began studies in my master's program, I earned a bachelor's degree in emergency management. I am an atheist and skeptic. I think I can trace my earliest interest in rationality back to my high school days, when I began critically examining theism (generally) and Catholicism (in particular) while taking an elective religion class called "Questions About God." It turned out the class raised more questions than answers, for me.

I found LessWrong by way of browsing CFAR's website and wishing that I had the money to attend one of their workshops. With that being said, I haven't been lurking around LW proper for very long. Thus, I anticipate it will take some time for me to become acquainted with norms of this platform. However, after briefly browsing around, I get the sense that this is a thoughtful community of people that value rationality. That's exciting to me! I hope to get more involved, as time permits, and to eventually become a valuable contributor.

comment by csvoss (Terdragon) · 2013-11-10T19:24:42.018Z · LW(p) · GW(p)

Hello LessWrong!

I found LessWrong, like so many others, through Methods of Rationality. I have lurked for at least two years now, since I discovered this website; I have read many of Eliezer's short stories and a few scattered posts of the Sequences. Eventually, I intend to get around to those and read them in a systematic fashion.... eventually.

I'm a computer science student, halfway through my life as an undergraduate at a certain institute of technology. I recently switched my main area of interest to theoretical computer science, after taking an excellent class on the subject that convinced me to relinquish the cached-thought that I would study computational biology. (I have an olympiad-level background in biology, so that particular cached-thought was difficult to overcome. But taking biology courses is just too useless, when I could be learning more challenging theoretical and mathematical material instead.) I find computability theory delicious.

I was raised in a devoutly Christian, creationist family, but by the end of middle school I managed to realize that I disagreed with them. The works of Richard Dawkins were my secret mainstay throughout high school, and arguments with an evangelical high school friend were my favorite way of honing my political philosophy and my skills as a rationalist.

I spent much of my time in high school trying to formulate ethical postulates so that I could develop my own objective system of ethics. I discovered that I agreed with the ideals of humanism and transhumanism only after reading Harry Potter and the Methods of Rationality. I still get shivers down my spine when I reread certain HPMoR snippets: Harry's thoughts when casting the Patronus, or his thoughts when looking at the stars, or the ending quote of Chapter 96. Humanism has been a wonderful perspective to acquire.

On areas of improvement: I want to have a better-trained inner pigeon. I seek to improve myself to overcome procrastination and similar problems, but I do not currently have the discipline to do so.

LessWrong has been a great resource for feeding my philosophical curiosity and my awe of science. It is always great to find a community of similar-minded people.

comment by rafiss · 2013-07-11T05:35:01.312Z · LW(p) · GW(p)

Hi everyone! I've been lurking around here for a few years, but now I want to be more active in the great discussions that often occur on this site. I discovered Less Wrong about 4 years ago, but the Methods of Rationality fanfic brought me here as a more attentive reader. I've read some of the sequences, and found them generally to use clear reasoning to make great points. If nothing else, reading them has definitely made me think very carefully about the way nature operates and how we perceive it.

In fact, this site was my first exposure to cognitive biases, and since then I've had the chance to study them further in college and read about them independently. This has been tremendously useful for me to understand why I and others I know behave the way we do.

I recently graduated college with a major in computer science and a decent exposure to math, having done some small independent research projects in machine learning. I'll soon begin a job as a software engineer at a late-stage startup that brings machine learning to the field of education.

I find that my greatest weakness with online communities is my tendency to return to lurking, even if I find the content very engaging. I hope to avoid that problem here, and at least continue participating in the comment threads.

comment by caffemacchiavelli · 2013-07-11T00:00:37.496Z · LW(p) · GW(p)

Hello, everyone. I stumbled upon LW after listening to Eliezer make some surprisingly lucid and dissonance-free comments on Skepticon's death panel that inspired me to look up more of his work.

I've been browsing this site for a few days now, and I don't think I've ever had so many "Hey, this has always irritated me, too!" moments in such short intervals, from the rant about "applause lights" to the discussions about efficient charity work. I like how this site provides some actual depth to the topics it discusses, rather than hand the reader a bullet list of trivialities and have them figure out the application.

I am working as a direct marketing consultant, in the process of getting my MBA (a decision I've started to regret; my faith in the scientific validity of academic management begins to resemble a Shepard tone), and with future ambitions in entrepreneurship, investing, scaling and other things that fit in the "things I've never done, yet smart people are supposed to be good at" box.

I'm a member of Mensa, casual Poker (winning) and Mahjong (losing) player, enjoy lifting weights, cooking (in an utterly unscientific way that would make Heston Blumenthal weep) and martial arts. I also have an imaginary -5yo son/daughter who keeps me motivated to put in more hours at work so we won't have financial worries once they get born.

There are a bunch of things I'd like to do with my life long-term, with varying amounts of megalomania, but I'm generally content with focusing on increasing my financial and (practical) intellectual power in the short- to mid-term and let the future decide just how far off my predictions and plans turn out to be. Estimates range from very to utterly.

Here's hoping LW will help me with that, and that I'll be helpful to others.

comment by wadavis · 2013-05-17T19:11:04.516Z · LW(p) · GW(p)

Greetings Less Wrong Community. I have been lurking on the site for a year, reading the articles and sequences, and now feel I've cut down the inferential distance enough to contribute meaningful comments.

My goal here is to have clear thought and effective communication in all aspects of my life, with special attention to application in the work environment.

Above most else I value the 12th virtue of rationality. Focus on the goal, value the goal; everything else is a tool to achieve the goal. Like chess: you only need two pieces to win, and the only purpose of the other 14 is to put the right two into position.

The sequences have been a great source of new ideas, and a great exploration of some thoughts I've held but never took the time to charge rent on.

Lastly, I was surprised by the amount of atheist/theist discussion. I only encounter situations where I consider having an atheist/theist talk roughly annually, and even then I often decide to avoid the discussion because it would not further my current goals. How often do other Less Wrong readers enter atheist/theist discussions with the intent of achieving a goal?

Regards, wadavis

Replies from: wadavis, TheOtherDave
comment by wadavis · 2013-06-01T06:45:52.061Z · LW(p) · GW(p)

A little late, but I found Less Wrong while trying to understand what this comic was talking about.

comment by TheOtherDave · 2013-05-17T19:43:22.634Z · LW(p) · GW(p)

Theists come along and generate a lot of discussion for a little while about every 10 months or so, I'd estimate off the top of my head. I can't speak for anyone else, but I will generally engage with them for as long as they agree to be bound by logic; when it becomes "playing tennis without a net" (which it usually does, sooner or later) I generally beg off.

My goal is mostly entertainment; partly education; I enjoy the process of clarifying ideas that other people have confidence in and their reasons for that confidence.

And welcome!

comment by seanwelsh77 · 2013-04-25T04:06:05.423Z · LW(p) · GW(p)

Hi Less Wrong,

My name is Sean Welsh. I am a graduate student at the University of Canterbury in Christchurch NZ. I was most recently a Solution Architect working on software development projects for telcos. I have decided to take a year off to do a Master's. My topic is Ethical Algorithms: Modelling Moral Decisions in Software. I am particularly interested in questions of machine ethics & robot ethics (obviously).

I would say at the outset that I think 'the hard problem of ethics' remains unsolved. Until it is solved, the prospects for any benign or friendly AI seem remote.

I can't honestly say that I identify as a rationalist. I think the Academy puts far too much faith in their technological marvel of 'Reason.' However, I have a healthy and robustly expressed disregard for all forms of bullshit - be they theist or atheist.

As Confucius said: Shall I teach you the meaning of knowledge? If you know a thing, to know that you know it. And if you do not know, to know that you do not know. THAT is the meaning of knowledge.

Apart from working in software development, I have also been an English teacher, a taxi driver, a tourism industry operator, online travel agent and a media adviser to a Federal politician (i.e. a spin doctor).

I don't mind a bit of biff - but generally regard it as unproductive.

Replies from: shminux, MugaSofer
comment by Shmi (shminux) · 2013-04-25T05:30:09.522Z · LW(p) · GW(p)

Welcome!

I can't honestly say that I identify as a rationalist. I think the Academy puts far too much faith in their technological marvel of 'Reason.'

Not sure why you link rationality with "Academy" (academia?). Consider scanning through the sequences to learn what is generally considered rationality on this forum and how Eliezer Yudkowsky treats metaethics. Whether you agree with him or not, you are likely to find a lot of insights into machine (and human) ethics, maybe even helpful in your research.

Replies from: seanwelsh77
comment by seanwelsh77 · 2013-04-30T21:25:23.215Z · LW(p) · GW(p)

Not sure why you link rationality with "Academy" (academia?).

Pirsig calls the Academy "the Church of Reason" in Zen and the Art of Motorcycle Maintenance. I think there is much evidence to suggest academia has been strongly biased to 'Reason' for most of its recorded history. It is only very recently that research is highlighting the role of Emotion in decision making.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-04-30T22:00:38.472Z · LW(p) · GW(p)

Let's not get started on the medical profession's bias towards health... maybe it's just their job to teach reason... have you ever met someone who couldn't do emotional/System 1 decision-making right out of the box?

Replies from: seanwelsh77
comment by seanwelsh77 · 2013-04-30T22:16:20.597Z · LW(p) · GW(p)

In my experience homo sapiens does not come 'out of a box.' Are you a MacBook Pro? :-)

But seriously, I have seen some interestingly flawed 'decision-making systems' in Psych Wards. And I think Reason (whatever it is taught to be) matters. Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together. I don't think Reason alone (however you construe it) is up to the job of friendly AI.

Of course, bringing Emotion in to ethics has issues. Who is to say whose Emotions are 'valid' or 'correct?'

Replies from: John_D, Juno_Watt, MugaSofer
comment by John_D · 2013-05-01T17:59:34.239Z · LW(p) · GW(p)

"Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together."

That statement is too strong. I can think of several instances where certain emotions, especially negative ones, can impair decision making. It is reasonable to assume that impaired decision making can extend into making ethical decisions.

The first page of the paper linked below provides a good summary of when emotions, and what emotions, can be helpful or harmful in making decisions. I do acknowledge that some emotions can be helpful in certain situations. Perhaps you should modify your statement.

http://www.cognitive-neuroscience.ro/pdf/11.%20Anxiety%20impairs%20decision-making.pdf

comment by Juno_Watt · 2013-05-01T11:49:40.889Z · LW(p) · GW(p)

A thousand sci-fi authors would agree with you that AIs are not going to have emotions. One prominent AI researcher will disagree.

comment by MugaSofer · 2013-05-01T17:28:16.349Z · LW(p) · GW(p)

Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together. I don't think Reason alone (however you construe it) is up to the job of friendly AI.

Certainly, our desires are emotional in nature; "reason" is merely how we achieve them. But wouldn't it be better to have a superintelligent AI deduce our emotions itself, rather than programming them in ourselves? Introspection is hard.

Of course, bringing Emotion in to ethics has issues. Who is to say whose Emotions are 'valid' or 'correct?'

Have you read the Metaethics Sequence? It's pretty good at this sort of question.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-01T17:49:21.956Z · LW(p) · GW(p)

But wouldn't it be better to have a superintelligent AI deduce our emotions itself, rather than programming them in ourselves?

Would it be easier?

Introspection is hard.

Especially about other people

Replies from: MugaSofer
comment by MugaSofer · 2013-05-01T19:07:12.778Z · LW(p) · GW(p)

Would it be easier?

Well, if you can build the damn thing, it should be better equipped than we are, being superintelligent and all.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-01T20:31:04.009Z · LW(p) · GW(p)

Having only the disadvantages of no emotions itself, and an outside view...

...but if we build an intelligence based on the only template we have, our own, it's likely to be emotional. That seems to be the easy way.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-01T22:21:10.261Z · LW(p) · GW(p)

That's why I specified superintelligent; a human-level mind would fail hilariously. On the other hand, we are human minds ourselves; if we want to program our emotional values into an AI, we'll need to understand them using our own rationality, which is sadly lacking, I fear.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-01T22:57:51.991Z · LW(p) · GW(p)

That seems to imply we understand our rationality...

Replies from: seanwelsh77, MugaSofer
comment by seanwelsh77 · 2013-05-09T10:42:39.464Z · LW(p) · GW(p)

More research...

Gerd Gigerenzer's views on heuristics in moral decision making are very interesting though.

comment by MugaSofer · 2013-05-12T22:01:12.663Z · LW(p) · GW(p)

Hah. Well, yes. I don't exactly have a working AI in my pocket, even an unFriendly one.

I do think getting an AI to do things we value is a good deal harder than just making it do things, though, even if they're both out of my grasp right now.

There's some good stuff on this floating around this site; try searching for "complexity of value" to start off. There's likely to be dependencies, though; you might want to read through the Sequences, daunting as they are.

comment by MugaSofer · 2013-04-25T12:21:02.972Z · LW(p) · GW(p)

I think the Academy puts far too much faith in their technological marvel of 'Reason.'

I don't think I'm parsing this correctly. Could you expand on it a bit?

I would say at the outset that I think 'the hard problem of ethics' remains unsolved. Until it is solved, the prospects for any benign or friendly AI seem remote.

Well, you'll find plenty of agreement here, for certain definitions of "unsolved".

Replies from: seanwelsh77
comment by seanwelsh77 · 2013-04-30T21:21:01.255Z · LW(p) · GW(p)

I don't think I'm parsing this correctly. Could you expand on it a bit?

You need the Sith parser :-)

I guess the point I am making is that Reason alone is not enough, and a lot of what we call Reason is technology derived from the effect on brains of being able to write. There is some interesting research on how cognition and reasoning differ between literate and preliterate people. I think Emotion plays a critical role in decision making. I am not going out to bat for Faith, except in the Taras Bulba sense: "I put my faith in my sword and my sword in the Pole!" (The Poles were the enemy of the Cossack Taras Bulba in the ancient Yul Brynner flick I am quoting from.)

Replies from: MugaSofer
comment by MugaSofer · 2013-05-01T14:06:54.157Z · LW(p) · GW(p)

Whee, references!

We create technologies to help us do stuff better - Taras' sword being only one example. Why not a technology to help us think better? Heck, there are plenty of "mental technologies" besides Rationality - a great example would be the Memory Palace visualization technique (it's featured in an episode of Sherlock, for bonus reference points, but it's not portrayed very well; Google it instead.)

comment by Bluehawk · 2013-04-20T05:32:00.610Z · LW(p) · GW(p)

Hi there, denizens of Less Wrong! I've actually been lurking around here for a while (browsing furtively since 2010), and only just discovered that I hadn't introduced myself properly.

So! I'm Bluehawk, and I'll tell you my real name if and when it becomes relevant. I'm mid-20's, male, Australian, with an educational history in Music, Cinema Studies and Philosophy, and I'm looking for any jobs and experience that I can get with the craft of writing. My current projects are a pair of feature-length screenplays; one's in the editing/second draft stages, the other's coming up to the end of the first draft. When I have the experience to pull it off (gimme another year or two), I'm hoping to develop a few projects that are more focussed on rationality. The backup plan for my future is to take on a Masters and beyond in screenwriting and/or film, either at RMIT or overseas (NY, LA, France?) depending on where my folio can get me.

That said, my scientific literacy is way lower than it "should" be, and I'm tempted to spend a few years working on that instead, but I'm not sure how much would be practical for my life; I normally find that I can ask (some of) the right questions about a list of stats, and I can generally understand human psychology when the concepts are put in front of me, and that seems to have been enough to get me by so far; I just feel really, really out of my league whenever I run into predicate logic, advanced mathematics, physics, chemistry, or programming languages.

I also aspire to aspire to become fluent in French and Japanese.

comment by TimS · 2013-04-05T12:57:09.843Z · LW(p) · GW(p)

[trap closes]

Don't do that. I think the rest of your post is fine, but this is not a debate-for-debate's-sake kind of place (and even if it were, that's not a winning move).

comment by lirene · 2014-08-10T13:48:53.870Z · LW(p) · GW(p)

Hello community.

I've been aware of LW for a while, reading individual posts linked in programmer/engineering hangouts now and then, and I independently came across HPMOR in search of good fanfiction. But the decision to un-lurk myself came after I attended a CFAR workshop (a major positive life change) and realized that I want to keep being engaged with the community.

I'm very interested in anti-aging research (both from the effective altruism point of view, and because I find the topic really exciting and fascinating) and want to learn about it in as much depth as time permits. So far I have come across science articles about single related discoveries in specialized fields (molecular biology, brain science, ...), but I haven't found a good resource (book, Coursera course, whatever) where I can learn the necessary medicine/biology background and how it all comes together in the current state of the art (I'm thinking of something similar to all the remarkable physics books we have on the market). Any pointers are appreciated.

comment by shelikavoid · 2014-07-25T17:57:44.750Z · LW(p) · GW(p)

Hi I'm N. Currently a systems engineer. Lurked for sometime and finally decided to create an account. I am interested in mathematics and computer science and typography. Fonts can give me happiness or drive me crazy.

I am currently in SoCal.

comment by MelbourneLW · 2014-07-23T03:59:10.284Z · LW(p) · GW(p)

This account is used by a VA to post events for the Melbourne Meetup group. Comment is to accrue 2 karma to allow posting.

comment by more_wrong · 2014-05-26T16:59:23.083Z · LW(p) · GW(p)

I chose more_wrong as a name because I'm in disagreement with a lot of the lesswrong posters about what constitutes a reasonable model of the world. Presumably my opinions are more wrong than opinions that are lesswrong, hence the name :)

My rationalist origin story would have a series of watershed events but as far as I can tell, I never had any core beliefs to discard to become rational, because I never had any core beliefs at all. Do not have a use for them, never picked them up.

As far as identifying myself as an aspiring rationalist, the main events that come to mind would be:

  1. Devouring as a child anything by Isaac Asimov that I could get my hands on. In case you are not familiar with the bulk of his work, most of it is scientific and historical exposition, not his more famous science fiction; see especially his essays for rationalist material.

  2. Working on questions in physics like "Why do we call two regions of spacetime close to each other?", that is, delving into foundational physics.

  3. Learning about epistemology and historiography from my parents, a mathematician and a historian.

  4. Thinking about the thinking process itself. Note: Being afflicted with neurological and psychological conditions that shut down various parts of my mentality, notably severe intermittent aphasia, has given me a different perspective on the thinking process.

  5. Making some effort to learn about historical perspectives on what constitutes reason or rationality, and not assuming that the latest perspectives are necessarily the best.

    I could go on but that might be enough for an intro.

    My hope is to both learn how to reason more effectively and, if fortunate, make a contribution to the discussion group that helps us to learn the same as a community. mw

comment by higurashimerlin · 2014-01-23T18:35:55.784Z · LW(p) · GW(p)

My name is Morgan. I was brought here by my brother and have been lurking for a while. I have read most of the sequences, which have cleared up some of my confused thinking. There were things that I didn't think about because I didn't have an answer for them. Free will and morality used to confuse me, and so I never thought much about them since I didn't have a guarantee that they were answerable.

Less Wrong has helped me get back into programming. It has helped me learn to think about things with precision, and to understand how a cognitive algorithm feels from the inside in order to dissolve questions.

I am going to join this community and improve my skills. Tsuyoku Naritai.

comment by radu_floricica · 2014-01-12T09:14:36.800Z · LW(p) · GW(p)

Hello,

I'm a 34-year-old programmer/entrepreneur in Romania, with a long-time interest in rationality - long before I called it by that name. I think the earliest name I had for it was "wisdom", along with a desire to find a consistent, repeatable way to obtain it. I must admit that at the time I didn't imagine it was going to be so complicated.

I spent some of my 20s believing I already knew everything, and then I made a decision that in retrospect was the best I ever made: never to look at the price when I buy a book, but only at the likelihood of finishing it. This is something I strongly recommend even (or especially) to cash-starved students. The first such book happened to be Nassim Taleb's The Black Swan, which was another huge stroke of luck. Not only did it expose me to some pretty revolutionary concepts and destroy my illusions of omniscience, but Taleb is a frequent name-dropper and provided a lot of leads for future reading material. And the rest, as they say, is history.

Introduction aside, I'm a long-time lurker and I actually came here with a request for comments. There is an often-mentioned thought experiment in the sequences that compares a lot of harm done to one person (like torture) with minimal harm done to a lot of people, like a mote in the eye of each of a billion people. I've always found it a bit disturbing, but couldn't escape the conclusion that harm is additive and comparable. Except I now think it's not.

I've recently read Antifragile and found the concept of "hormesis", i.e. small harm done to a complex system generates an over-compensatory response, resulting in overall improvement. Simple examples: cold showers or lifting weights. So it's possible for small harm done to a lot of people to have net positive effects overall.

Two holes I see in this argument: some harms, like going to the gym, create hormesis, while motes in the eye don't. Also, you could just up the harm: use a big enough mote that the overall effect is a net negative, like maybe causing some permanent damage. But both holes are plugged by the fact that complex systems will always find ways to compensate. Small cornea damage gets compensated for at the processing level, muscle damage turns into new muscle, neuron damage means rerouting, etc. There are tipping points and limits, but they're still counter-intuitive. Killing the n-th neuron will put somebody in a wheelchair, but their happiness level still bounces back. There is harm, but it's very non-linear with respect to the original damage. So I can't help but conclude that harm is simply non-additive and non-comparable, at least not easily.
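To make the contrast concrete, here is a minimal sketch in Python; it is only an illustration, with an arbitrary compensation threshold and an arbitrary sub-linear exponent standing in for whatever real physiology does:

```python
# Toy comparison of additive vs. "hormetic" harm aggregation.
# The threshold and exponent below are arbitrary assumptions for illustration.

def additive_total(harm_per_person, people):
    """Classic aggregation: total harm is a straight sum."""
    return harm_per_person * people

def compensated_harm(harm, threshold=1.0):
    """Harm below the threshold is over-compensated (a small net benefit);
    harm above it keeps growing, but sub-linearly."""
    if harm <= threshold:
        return -0.1 * harm
    return threshold + (harm - threshold) ** 0.5

def compensated_total(harm_per_person, people):
    return compensated_harm(harm_per_person) * people

# One mote (harm 0.001) in the eyes of a billion people vs. a huge harm (1e6) to one person:
print(additive_total(0.001, 10**9))     # 1000000.0 -- additively "comparable to torture"
print(compensated_total(0.001, 10**9))  # -100000.0 -- net positive once hormesis is assumed
print(compensated_harm(10**6))          # ~1001     -- still harm, but wildly non-linear
```

Whether anything like the compensation step is justified is exactly the point in dispute; the sketch only shows how much the conclusion changes once it is assumed.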

comment by [deleted] · 2013-12-03T12:05:33.162Z · LW(p) · GW(p)

Hello, LW,

One of my names is holist. I am 45. Self-employed family man, 6 kids, 2 dogs, 1 cat. Originally a philosopher (BA+MA from Sussex, UK), but I've been a translator for 19 years now... it is wearing thin. Music and art are also important parts of my life (have sold music, musically directed a small circus, have exhibited pictures), and recently, with dictatorship being established here in Hungary, politics seems increasingly urgent, too. I dabble in psychotherapy and call myself a Discordian. Recently, I started thinking about doing a PhD somewhere. My topic is very general: what has caused the increasingly hostile relationship between individual and culture, and what are the remedies available? My window of opportunity is a few years away: I am intermittently thinking about possible supervisors. I have a blog at holist.hu, some of it is in English and it has a lot of pictures. The discordians at PD kicked me out, I found the Secular Café to be indescribably boring, I am a keen MeFite, but I'd like something a little more discussioney. A friend pointed me at LW. I hope it works out.

For pointers, here's a slightly random list of some books that are very important to me: Fiction: Mason and Dixon by Thomas Pynchon, Ulysses by James Joyce, Karnevál by Béla Hamvas, Moby Dick by Herman Melville, The Diamond Age by Neal Stephenson. Non-fiction: The Continuum Concept by Jean Liedloff, The Facts of Life by R. D. Laing, Children of the Future by Wilhelm Reich, The Drama of the Gifted Child by Alice Miller, The Story of B by Daniel Quinn, Tools for Conviviality by Ivan Illich.

I guess I'll start by lurking, but you never know :)

comment by JoshElders · 2013-10-10T20:13:48.322Z · LW(p) · GW(p)

I am a celibate pedophile. That means I feel a sexual and romantic attraction to young girls (3-12) but have never acted on that attraction and never will. In some forums, this revelation causes strong negative reactions and a movement to have me banned. I hope that's not true here.

From a brief search, I see that someone raised the topic of non-celibate pedophilia, and it was accepted for discussion. http://lesswrong.com/lw/67h/the_phobia_or_the_trauma_the_probem_of_the_chcken/ Hopefully celibate pedophilia is less controversial.

I have developed views on the subject, though I like to think that I can be persuaded to change them, and one thing I hope to get here on LessWrong is reason-based challenges. Hopefully others will find the topics informative as well. In the absence of advice on a better way to proceed, I plan to make posts in Discussion now and then on various aspects of the topic.

I'm in my 50s and am impressed with the LessWrong approach in general and have done my best to follow some of its precepts for years. I have read most of the core sequences.

Replies from: wedrifid
comment by wedrifid · 2013-10-10T21:13:39.072Z · LW(p) · GW(p)

I have developed views on the subject, though I like to think that I can be persuaded to change them, and one thing I hope to get here on LessWrong is reason-based challenges. Hopefully others will find the topics informative as well. In the absence of advice on a better way to proceed, I plan to make posts in Discussion now and then on various aspects of the topic.

Reason-based challenges? There doesn't seem to be much to say. You have some attraction; you choose not to act on it for altruistic and/or pragmatic reasons. Nothing much to challenge.

Perhaps an aspect that could provoke discussion would be the decision of whether to self-modify to not have those desires, if the technological capability to do so easily were available. I believe there is a sci-fi short story out there that explores this premise. My take is that, given an inclusive preference not to have sex with children, I'd be perfectly content to self-modify to remove an urge I would never endorse acting on. I would consider this to be a practical choice that allowed me to experience more pleasure and less frustration without sacrificing my values. I would not consider it a moral obligation for people to do so.

A variant situation would be when that same technology is available, along with the technology to detect both preferences and decision-making traits in people. Consider a case where someone is detected who has a desire to do a forbidden thing, is committed to not doing that thing, and yet has identifiable deficits in willpower or decision making that make it likely they will act on the desire anyway. In that case it seems practical to enforce a choice of either self-modifying or submitting to restrictions of behaviour and access.

A further scenario would be one in which, for technological or evolutionary reasons, there are 12-year-old girls who would not be physically or psychologically harmed by sexual liaisons with adults and who have been confirmed (by brain scan and superintelligent extrapolation) to prefer outcomes where they engage in such practice over those in which they don't. That would tend to make any residual moral objection to pedophilia not about altruistic consideration of consequences and all about prudishness. (I of course declare vehemently that I would still oppose pedophilia regardless of consequences. Rah Blues!)

For some real controversy I suppose you could do an analysis of the research on just how much physical and psychological damage is done to children in loving-but-sexual relationships with adults versus how much damage is done by other adults who wish to signal their opposition to the crime by the way they treat the victims. Mind you, that is something that, if it were ever to be discussed, would perhaps best be discussed by people who are not celibate pedophiles. It would be offensive enough to many people to even see it considered a valid avenue of enquiry by an individual with zero sexual interest in children. Protection of victims from 'righteous' authorities is something best done by those with no interest in committing the crime.

comment by raydpratt · 2013-07-27T22:28:37.206Z · LW(p) · GW(p)

I am a maximum-security ex-con who studied and used logic for pro se, civil-rights lawsuits. (The importance of being a maximum-security ex-con is that I was a stubborn iconoclast who learned and used logic in all seriousness.) Logic helped me identify the weak links in my opponents' arguments and avoid weak links in my own arguments, and logic helped me organize my writing and evidence. I also studied and learned to use “The Option Process” for eliminating my negative emotions and understanding other people's negative emotions. The core truth of “The Option Process” is that we choose to have negative emotions for reasons, not randomly, and not even necessarily. So, our rationality is very much a part of our emotions, and, as such, good reasoning can utterly remove negative emotions at the core of their raison d'être. However, some of my emotional and intellectual challenges have resisted solutions via logic and “The Option Process.” For example, I could not figure out how to stay objective and to behave objectively while trying to gamble for profit (not for fun). So, I began reading widely about self-control, discipline, integrity, neuroeconomics, etc. And, in the process, I found this LessWrong website.

I have only recently identified what may be at the root of my problem with gambling and why it resists both logic and “The Option Process.” Freud called it “childhood megalomania.” In our early years, whenever we cried and sniveled, the universe of Mom and Dad and others rushed to meet our needs. That inner baby rarely grows up well in any of us, and we still whine, snivel, and howl at the universe when things don't go our way, and we can get downright obstinate about doing so until the universe listens! The universe, in turn, responds favorably often enough to keep our inner babies convinced of our magic, temper-tantrum powers over reality.

I figured out that when I get frustrated, afraid, and challenged by the difficulties of gambling, I would rather feel safe, powerful and warm, and so I often lapse into an obstinate insistence on continuing to gamble because I want to believe and feel that I can successfully gamble whenever I want, even during objectively bad, fear-inducing, and frustrating conditions.

The universe has not been kind in that regard, but with my recent insight, I at least hope that my inner baby has grown one year older. The rest of the problem, the frustration and fear, will easily fall prey to the power of logic and “The Option Process.”

comment by dirtfruit · 2013-07-24T21:25:17.067Z · LW(p) · GW(p)

Hey, I'm dirtfruit.

I've lurked here for quite a while now. LessWrong is one of the most interesting internet communities I've observed, and I'd like to begin involving myself more actively. I've been to one meetup, in NYC, a few months ago, which was nice. I've read most of the sequences (I think I've read all of them at least once, but I haven't looked hard enough to be super-confident saying that). HPMOR is cool; I enjoyed reading it and continue to check for updates. I've tried to read most of what Eliezer has written, but gave up early on anything extremely technical, as I don't have the background for it. EY seems like a righteous dude to me. I dig his cause, and would like to make myself available to help in what ways I can.

I'm currently 21 years old. I was born and raised on the west coast of the United States, and am now attending a college on the east coast studying fine art, with a concentration in drawing. I've always read a lot. When I was young, analog fiction, mostly. Now I most often find myself reading nonfiction online.

I'd like to find ways for artists (specifically me, but also, to a lesser degree, other interested artists) to be useful to the general cause of rationality; raising waterlines and whatnot. I believe there exists a general feeling among LessWrong users that artists can be fun, but are not very instrumentally useful to their particular cause. If this belief is misplaced, I'd be overjoyed to adjust it properly. I'm obviously biased, but I believe this feeling to be more than a few shades off from correct. Pictorial communication can be super intuitive. It can communicate very quickly relative to the written word, can be very memorable, and is capable of transcending many written/spoken language barriers. Its main downsides include time-expense (drawing a picture generally takes longer than describing something verbally, whether spoken or written) and scarcity of expertise - drawing and painting's difficulty curves seem roughly similar to that of writing, but they are practiced far less often than writing, and (nowadays, in the fine art world at least) held to very different standards. Experts in visual communication should be very instrumentally useful, for clarifying concepts not well suited to words, and also for attracting/aiding/communicating with those beyond the reach of literacy. I'm not claiming expertise (I'm still building my skills as a student), but at the very least I have some experience in crafting understandable, detailed pictures to something of a high standard. I'm also somewhat talented with words; integrating textual communication with visual communication (and vice versa) is something I'm sensitive to and interested in.

I also just really like the spirit and conventions of debate here, and would very much like to hear any and all thoughts about what I just wrote. :D thanks!

(Also, I think we need a new welcome thread? Either that, or I failed to find the proper one. This thread has far exceeded 500 posts...)

comment by Tsende · 2013-07-19T00:35:22.215Z · LW(p) · GW(p)

Hi, I'm a second-year engineering student at a University of California campus. I like engaging in rational discussions, and I find it important to know what's going on in the world and to gain more insight into controversial issues such as abortion, gay rights, sexuality, immigration, etc. Someone on Facebook directed me to this site, but I easily get bored, so I may or may not be much of a contributor.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-18T12:37:33.616Z · LW(p) · GW(p)

gain more insight on controversial issues

It is probably better to practice rationality skills on less controversial issues. When speaking about politics, people instinctively become less rational, because politics is usually not about being correct, but about belonging to the winning tribe.

comment by claus · 2013-06-20T10:18:04.498Z · LW(p) · GW(p)

Hi all, my name is Claus. I am unsure how exactly I got here, but I sure do know why I kept coming back. I'm so happy to have found such a large and confident group of like-minded people.

Currently I am trying to finish some essays on science and evidence-based politics. I'm sure I will enjoy my stay here!

Replies from: Larks, Kawoomba
comment by Larks · 2013-06-20T11:55:49.027Z · LW(p) · GW(p)

Welcome to Less Wrong!

In the interests of LW-modesty,

I mainly grew tired of what can be called 'continental philosophy' (specifically 'Hegelianism') because of it's lack of clarity.

still leaves open the possibility you might find analytic philosophy more congenital than LW-philosophy.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-06-20T12:04:09.700Z · LW(p) · GW(p)

Congenital should be congenial.

comment by Kawoomba · 2013-06-20T10:26:30.536Z · LW(p) · GW(p)

Is that belief-database private?

Replies from: claus
comment by claus · 2013-06-20T11:30:42.512Z · LW(p) · GW(p)

Replies from: Kawoomba
comment by Kawoomba · 2013-06-20T11:55:47.574Z · LW(p) · GW(p)

I'm interested.

comment by Articulator · 2013-06-14T23:47:48.519Z · LW(p) · GW(p)

Hi everyone, I’m The Articulator. (No ‘The’ in my username because I dislike using underscores in place of spaces)

I found LessWrong originally through RationalWiki, and more recently through Iceman’s excellent pony-fic about AI and transhumanism, Friendship is Optimal.

I’ve started reading the Sequences, and made some decent progress, though we’ll see how long I maintain my current rate.

I’ll be attending university this fall for Electrical Engineering, with a desire to focus on electronics.

Prior to LW, I have a year’s worth of Philosophy and Ethics classes, and a decent amount of derivation and introspection.

As a result, I’ve started forming a philosophical position, made up of a mishmash of formally learnt and self-derived concepts. I would be very grateful if anyone would take the time to analyze, and if possible, pick apart what I’ve come up with. After all, it’s only a belief worth holding if it stands up to rigorous debate.

(If this is the wrong place to do this, I apologize - it seemed slightly presumptuous to imply that my comment thread would be large enough to warrant a separate discussion article.)

I apologize in advance for a possible lack of precise terminology for already existing concepts. As I’ve said, I’m partially self-derived, and without knowing the name of an idea, it’s hard to check if it already exists. If you do spot such gaps in my knowledge, I would be grateful if you’d point them out. Though I understand correct terminology is nice, I'd appreciate it if you could judge my ideas regardless of how many fancy words I use to describe them.

My thought process so far:

P: Naturalism is the only standard by which we can understand the world

P: One cannot derive ethical statements or imperatives from Naturalism, as, like all good science, it is only descriptive in nature

IC: We cannot derive ethical statements

IC: There is no intrinsic value

C: Nihilism is correct

However, assuming nihilism is correct, why don’t I just kill myself now? That’s down to the evolutionary instincts that need me alive to reproduce. Well, why not overcome those and kill myself? But now, we’re in a difficult situation – why, if nothing matters, am I so desperate to kill myself?

Nihilism is the total negation of the intrinsic and definitive value in anything. It’s like sticking a coefficient of zero onto all of your utility calculations. However, that includes the bad as well as the good. Why bother doing bad things just as much as doing good things?

My eventual realization came as a result of analyzing the level or order of concepts. Firstly, we have the lowest order, instinct, which we are only partially conscious of. Then, we have a middle order of conscious thought, wherein we utilize our sapience to optimize our instinctual aims. Finally, we have the first of a series of high order thought processes devoted to analyzing our thoughts. It struck me that only this order and above is concerned with my newfound existential crisis. When I allow my rationality to slip a bit, a few minutes later, I stop caring, and start eating or taking out my testosterone on small defenseless computer images. Essentially, it is only the meta-order processes which directly suffer as a result of nihilism, as they are the ones that have to deal with the results and implications.

Nihilism is expected to make you give up attempting to change things or apply ethics, because those are seen as meaningful concepts - and nihilism denies meaning. Really, though, the way I see it, Nihilism is simply about the state of ‘going with the flow’, colloquially speaking. That’s intentionally vague. Consider: if your middle-order processes don’t care that you just realized nothing matters, what’ll happen? They’ll just keep doing what they’ve always done.

In other words, since humans compartmentalize, going with the flow is synonymous with turning off your meta-level thought processes as a goal-oriented drive, and operating purely on middle-level processes and below. That corresponds, for a Naturalist, with Utilitarianism.

Now, that’s not to say “turn off your meta-level cognition”, because otherwise, what am I doing here? What I’m doing right now is optimizing utility because I enjoy LessWrong and the types of discussions they have. I bother to optimize utility despite being a nihilist because it is easier, and less work, meta-level-wise, to give in to my middle-level desires than to fight them.

To define Nihilism, for me, now comes to the concept of passively maintaining the status quo, or more aptly, not attempting to change it. Why not wirehead? – because that state is no more desirable in a world with zero utility, but takes effort to reach. It’s going up a gradient which we can comfortably sit at the bottom of instead.

I fear I haven’t done the best job of explaining concisely, and I believe my original, purely mental, formulations were more elegant, so that’s a lesson on writing everything down learned. However, I hope some of you can see some flaws in this argument that I can’t, because at the moment, this explains just about everything I can think of in one way or another.

Thank you all in advance for any help given,

The Articulator (It’s kind of an ironic choice of name, present ineptitude considered.)

Replies from: Articulator, Vaniver
comment by Articulator · 2013-06-15T00:02:01.117Z · LW(p) · GW(p)

Okay, whoa, hey. I clearly and repeatedly explained my lack of total understanding of LW conventions. I'm not sure what about this provoked a downvote, but I would appreciate a bit more to go on. If this is about my noobishness, well, this is the Welcome Thread. Great job on the welcoming, by the way, anonymous downvoter. At the very least offer constructive criticism.

Edit: Troll? Really?

Edit,Edit: Thank you whoever deleted the negative karma!

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T00:30:39.740Z · LW(p) · GW(p)

I wouldn't take downvotes to heart, if I were you, unless like, a whole bunch of people all downvote you. A downvote's not terribly meaningful by itself.

Welcome to Less Wrong, by the way.

Now, I didn't downvote you, but here's some criticism, hopefully constructive. I didn't read most of your post, from where you start discussing your philosophy (maybe I will later, but right now it's a bit tl;dr). In general, though, taking what you've learned and attempting to construct a coherent philosophical position out of it is usually a poor idea. You're likely to end up with a bunch of nonsense supported by a tower of reasoning detached from anything concrete. Read more first. Anyway, having a single "this is my philosophy" is really not necessary... pretty much ever. Figure out what your questions are, what you're confused about, and why; approach those things one at a time and without an eye toward unifying everything or integrating everything into a coherent whole, and see what happens.

Also: read the Sequences, they are pretty much concentrated awesome and will help with like, 90% of all confusion.

Replies from: Articulator
comment by Articulator · 2013-06-15T01:04:16.418Z · LW(p) · GW(p)

Okay, noted. It's just that from what I've seen so far, a post with a net downvote is generally pretty horrible. I admit I took some offense from the implication. I'll try not to let it bother me unless N is high enough that it's clearly me, entirely, who's the problem.

Thanks. :)

Thank you for taking the time to give constructive criticism.

I will attempt to make it more coherent and summarized, assuming I keep any of it.

I appreciate I am likely too inexperienced to come up with anything that impressive, but I was hoping to use this as a method to understand which parts of my cognitive function were not behaving rationally, so as to improve.

I will absolutely continue to read, but with the utmost respect to Eliezer, I have yet to come across anything in the Sequences which did more than codify or verbalize beliefs I'd already held. By that point, two and a half sequences in, I felt it was unlikely that the enlightenment value would spike in such a way as to render my previously held views obsolete.

I'll bear your objections in mind, but I fear I won't let go of this theory unless somebody points out why it is wrong specifically, as opposed to methodically. Not that I'm putting any onus on you or anyone else to do so.

As I said, I am reading them, but have found them mostly about how to think as opposed to what to think so far, though I daresay that is intentional in the ordering.

Thanks again for your help and kindness. :)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-06-15T01:22:42.266Z · LW(p) · GW(p)

I appreciate I am likely too inexperienced to come up with anything that impressive,

It's not even that (ok, it's probably at least a little of that). Some of the most worthless and nonsensical philosophy has come from professional philosophers (guys with Famous Names, who get chapters in History of Philosophy textbooks) who've constructed massive edifices of blather without any connection to anything in the world. EDIT: See e.g. this quote.

with the utmost respect to Eliezer, I have yet to come across anything in the Sequences which did more than codify or verbalize beliefs I'd already held.

You've got it right. One of the points Eliezer sometimes makes is that true things, even novel true things, shouldn't sound surprising. Surprising and counterintuitive is what you get when you want to sound deep and wise. When you say true things, what you get is "Oh, well... yeah. Sure. I pretty much knew that." Also, the Sequences contain a lot of excellent distillation and coherent, accessible presentation of things that you would otherwise have to construct from a hundred philosophy books.

As for enlightenment that makes your previous views obsolete... in my case, at least, that happened slowly, as I digested things I read here and in other places, and spent time (over a long period) thinking about various things. Others may have different experiences.

As I said, I am reading them, but have found them mostly about how to think as opposed to what to think so far, though I daresay that is intentional in the ordering.

Yeah, one of the themes in Less Wrong material, I've found, is that how to think is more important than what to think (if for no other reason than that once you know how to think, thinking the right things follows naturally).

Replies from: Articulator
comment by Articulator · 2013-06-15T07:11:45.786Z · LW(p) · GW(p)

Some of the most worthless and nonsensical philosophy has come from professional philosophers

Oh, I know. I start crying inside every time I learn about Kant.

Well, I'll take what you've said on board. Thanks for the help!

comment by Vaniver · 2013-06-15T01:24:00.029Z · LW(p) · GW(p)

Welcome to LW!

There is a metaethics sequence, and this post from it asks what you would do if morality didn't exist. This may be a good place to start looking, but I wouldn't be too discouraged if you don't find it terribly useful (Eliezer and others see it as less communicative than he wanted it to be).

The point I would focus on is that there's a difference between an ethical system that would compel any possible mind to follow it, and an ethical system in harmony with you and those around you. Figure out what you can get from ethics, and then seek to discover the results of the ethics you try. Worry more about developing a system that reliably makes small, positive changes than about developing a system that is perfectly correct. As it is said, a complex system that works is invariably found to have evolved from a simple system that worked.

Replies from: Articulator
comment by Articulator · 2013-06-15T07:06:26.371Z · LW(p) · GW(p)

Thanks!

Thanks for that link. I probably should have read that sequence, I'll admit, but what is interesting is that, despite me not having read it previously, the majority of comments reflect what I stated above, albeit that my formulation explains it slightly more cognitively than 'because I want to'. (Though that is an essential premise in my argument.)

Though this is probably unfortunately irrational on my part, seeing my predictions confirmed by a decently sized sample only suggests to me that I'm on to something, at least so far as articulating something I have not seen previously formalized.

It seems like my largest problem here is that I absolutely failed to be concise, and added in non-necessary intermediate conclusions.

I think of this as less an ethical system in itself, rather a justification and rationalization of my position on Nihilism and its compatibility with Utilitarianism, which, coincidentally, seems to be the same as most people on LW.

I know that this'll be probably just as failed as the last attempt, but I've summarized my core argument into a much shorter series of premises and conclusions. Would you mind looking through them and telling me what you feel is invalid or is likely to be improved upon by prolonged exposure to LW?

P: Naturalism is the only standard by which we can understand the world

P: One cannot derive ethical statements or imperatives from Naturalism, as, like all good science, it is only descriptive in nature

IC: We cannot derive ethical statements

IC: There is no intrinsic value

C: Nihilism is correct

P: Ethical statements are by definition prescriptive

P: Nihilism offers a total lack of ethical statements

IC: Nihilism offers no prescriptive statements

P: Prescriptive statements are like forces, in that they modify behavior (Consider Newton’s First Law)

IC: No prescriptive statements means no modification of behavior

C: Nihilism does not modify behavior, ethically speaking

P: Humans naturally or instinctively act according to a system very close to Utilitarianism

P: Deviation from this system takes effort

IC: Without further input or behavioral modification, most intellectual individuals will follow a Utilitarian system

IC: To act contrary to Utilitarianism requires effort

P: Nihilism does not modify behavior or encourage ethical effort

C: Nihilism implies Utilitarianism (or a general ethical system akin to it that is the default of the person in question)

I apologize if trying again like this is too much to ask for.

Replies from: Vaniver
comment by Vaniver · 2013-06-15T08:35:32.848Z · LW(p) · GW(p)

P: Humans naturally or instinctively act according to a system very close to Utilitarianism

Were this true, the utilitarian answers to common moral thought experiments would be seen as intuitive. Instead, we find that a minority of people endorse the utilitarian answers, and they are more likely to endorse those answers the more they rely on abstract thought rather than intuition. It seems that most people are intuitive deontologists.

I think of this as less an ethical system in itself, rather a justification and rationalization of my position on Nihilism and its compatibility with Utilitarianism, which, coincidentally, seems to be the same as most people on LW.

I don't think "nihilist" is an interesting term, because it smuggles in implications that I do not think are useful (like "why don't you just kill yourself, then?"). I think "moral anti-realist" is better, but not by much. The practical advice I would give: do not seek to use ethics as a foundation, because there is nothing to anchor it on. The parts of your mind are connected to each other, and it makes sense to develop them as a collection. If there is no intrinsic value, then let us look for extrinsic value.

Replies from: Articulator, BerryPick6
comment by Articulator · 2013-06-15T11:03:15.955Z · LW(p) · GW(p)

Firstly, thank you for replying and spending the time to discuss this with me.

P: Humans naturally or instinctively act according to a system very close to Utilitarianism

Were this true, the utilitarian answers to common moral thought experiments would be seen as intuitive. Instead, we find that a minority of people endorse the utilitarian answers, and they are more likely to endorse those answers the more they rely on abstract thought rather than intuition. It seems that most people are intuitive deontologists.

I admit I made a bit of a leap here, which may not be justified. I was careful to specify 'very close', as I realize it is obviously not an exact copy. I would argue that most people do attempt to follow Bentham's original formulation of seeking pleasure and avoiding pain instinctively, as that is where he derived his theory from. I would argue that though people may implement a deontological system for assigning moral responsibility, they are ultimately using Utilitarian principles as the model for their instinctive morality that describes whether an action is good or bad, much the same as Rule Utilitarianism does. I don't think I can overstate the importance of the fact that Bentham derived the idea of Utilitarianism from a human perspective.

I don't think "nihilist" is an interesting term, because it smuggles in implications that I do not think are useful (like "why don't you just kill yourself, then?").

In the longer formulation, I tackled this exact question, pointing out that it is more effort to overcome your survival instincts than it is to follow them, and that doing so would thus be an illogical attempt to change things which don't matter.

I like 'nihilist' as a term as it is immediately recognizable, short, punchy, and someone with a basic grasp of Latin or maybe even English should be able to derive a rough meaning. It also sounds better. :P

The practical advice I would give: do not seek to use ethics as a foundation, because there is nothing to anchor it on.

Well, as it currently stands, I'm happy with the logical progression necessary to reach my current understanding, and more importantly, it has given me a tremendous sense of inner peace. I don't think that it as such limits my mental progression, since I arrived at these conclusions through rational means, and would give them up if confronted with sufficient logic contrary to my understanding.

If there is no intrinsic value, then let us look for extrinsic value.

Would you mind elaborating on looking for extrinsic value? Is that like the Existentialist viewpoint?

comment by BerryPick6 · 2013-06-15T09:55:57.629Z · LW(p) · GW(p)

I think "moral anti-realist" is better, but not by much.

Specifically, they seem to be talking about something similar to Error Theory.

Replies from: Articulator
comment by Articulator · 2013-06-15T11:05:28.575Z · LW(p) · GW(p)

Well, I just looked it up, and I'd agree with it, though I do use it more as an intermediate conclusion than an actual end point.

Replies from: BerryPick6
comment by BerryPick6 · 2013-06-15T11:14:46.602Z · LW(p) · GW(p)

I don't know what you mean by that, but I resolved my weird ethical quasi-nihilism through a combination of studying Metaethics and reading Luke's metaethical sequence, so you might want to do that as well, if only for the terminology.

Replies from: Articulator
comment by Articulator · 2013-06-15T11:41:40.616Z · LW(p) · GW(p)

Sorry, what I meant was that while I am using something similar to Error Theory, I was also going beyond that and using it as a premise in other arguments. All I meant was that it wasn't the entirety of my argument.

I certainly plan on reading those, but thanks for the advice. Hopefully I'll be up to date with terminology by the end of the summer.

comment by Lumifer · 2013-05-16T17:51:00.172Z · LW(p) · GW(p)

Hello, smart weird people.

I've been lurking on and off for a while but now it seems to be a good time to try playing in the LW fields. We'll see how it goes.

I'm interested in "correct" ways of thinking, obviously, but I'm also interested in their limits. The edges, as usual, are the most interesting places to watch. And maybe to be, if you can survive it.

No particular hot-burning questions at the moment or any specific goals to achieve. Just exploring.

Replies from: DSimon
comment by DSimon · 2013-05-16T18:06:05.062Z · LW(p) · GW(p)

Hello, Lumifer! Welcome to smart-weird land. We have snacks.

So you say you have no burning questions, but here's one for you: as a new commenter, what are your expectations about how you'll be interacting with others on the site? It might be interesting to note those now, so you can compare later.

Replies from: Lumifer
comment by Lumifer · 2013-05-16T18:25:32.378Z · LW(p) · GW(p)

Hm, an interesting question.

In the net space I generally look like an irreverent smartass (in the meatspace too, but much attenuated by real relationships with real people). So on forums where I hang out, maybe about 10% of the regulars like me, about a quarter hate me, and the rest don't care. One of the things I'm curious about is whether LW will be different.

Or maybe I will be different -- I can argue that my smartassiness is just allergy to stupidity. Whether that's true or not depends on the value of "true", of course...

comment by Curiousguy · 2013-04-13T19:42:31.527Z · LW(p) · GW(p)

Student of economics. Not going to write any more than that about myself at this point.

"To post to the Discussion area you must have at least 2 points." - I'd like to post something I've written, but I need two karma to do so.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T20:42:26.225Z · LW(p) · GW(p)

People generally get more than 2 karma for giving a full introduction, so you could try that. Alternately, you could look around and reply to something - doesn't matter if it's old, people'll probably see it in the recent comments.

comment by JonMcGuire · 2013-04-13T14:58:41.809Z · LW(p) · GW(p)

New to LW... my wife re-ignited my long-dormant interest in AI via Yudkowsky's Friendly AI stuff.

Is there a link somewhere to "General Intelligence and Seed AI"? It seems that older content at intelligence.org has gone missing. It actually went missing while my wife was in the middle of reading it online... very frustrating. Friendly AI makes a lot of references to it. Seems important to read it.

I'd prefer a PDF, if somebody knows where to find one.

Thanks!

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T15:18:22.034Z · LW(p) · GW(p)

Huh. You're right, it looks like it went missing - perhaps during the change to MIRI?

I'm afraid I've been unable to find a copy; looks like everyone was linking to that one vanished copy.

EDIT: which is still available at WaybackMachine.

Replies from: JonMcGuire
comment by JonMcGuire · 2013-04-13T15:53:11.626Z · LW(p) · GW(p)

Awesome, thanks. Didn't even think to look there... time to wget the whole thing!

comment by Shmi (shminux) · 2013-04-04T20:02:29.539Z · LW(p) · GW(p)

Just wondering if you realize that you simply guessed the two-letter teacher's password ("SE") which acted perfectly as a curiosity stopper for you.

comment by kotrfa · 2014-01-09T20:54:28.479Z · LW(p) · GW(p)

Hello,

I'd like to get some opinions about my future goals.

I'm 21 and I'm a second-year student of engineering in Prague, Czech Republic, focusing mainly on math and then physics.

My background is not stunning - I was born in '93, attended a sports-focused primary school and then a general high school. Until my second year of high school, I behaved like an idiot, with below-average results in almost everything, paradoxically except for extraordinary "general study presupposes" (whatever it means). My not-so-bad IQ - according to an IQ test I took when I was 15 - is about 130 points. When I was 17, I realized that there was something about the world that needed to be dealt with. I started to study, mainly math and physics. I was horrible at it - I had a very big disadvantage because I had missed the basics and wasn't able to recognize it. Anyway, I tried (but, unfortunately, not as much as I needed to) and reached a so-so level, and I got into a technical university. Here I tried really hard, achieved relatively good results, and got into the best maths-focused student group. I'm below average in this group (about 30 students) and my results are satisfactory. I'm quite popular thanks to collaborating on some non-study events for my schoolmates. I also created a presentation about engineering for high school students and distributed it among faculty staff and students involved in promotion.

About 10 years ago I obtained the ECDL, and it started my curiosity about informatics. But nothing special - I was an autodidact in HTML and "computer administration" for regular usage. I was also very interested in economics, as my father works in this area. I actively did cross-country skiing and play piano and trombone.

I have high charisma, authority, and the ability to organize people and some bigger events, which I was usually asked to prepare (the graduation prom, matriculation, etc.). I have good reasoning skills and the ability to negotiate even under heavy pressure and stress. People usually enjoy time with me and appreciate me for my honesty, empathy, and "cold-think" reasoning solutions, which most of the time turn out to have been the best possible. I've been in a healthy relationship for two years. My family is a good background for my activities and supports me. They also support me financially. My expenses per month are not more than 300 USD, including accommodation in a shared apartment (with other university students, two of them from my university and field), food, and social activities.

Currently, apart from my school activities, I'm also attending a kind of philosophy group every week, where we usually discuss some topic about epistemology, relationships, culture, religions, etc.; we read some philosophical works (Plato), deal with art (classical music or paintings), or write voluntary essays. I'm really interested in discussions about these topics and I try to develop my reasoning skills as often as I can. For example, I recently contacted a priest from a local temple with whom I want to discuss some religion-based questions. I teach myself psychology (the last book I read was Kahneman's Thinking, Fast and Slow), rationality (I've started to read the LW sequences), and programming. I enjoy using open source software on my Arch Linux laptop, and I have now dived into Python as a scripting language. I'm also developing a website for my mother using Django, and I signed up for a statistical research task about data mining in Python (pandas, numpy, scikit-learn...) or R. In school I also have C++ courses. I'm not the most talented or generally best mathematician or programmer, but I have quite good learning (and also teaching) skills.

I've chosen my "path" - I'd like to do what's right and true, and to seek the truth whenever possible. I feel that I'm not getting everything I need (e.g. from my school) to change the world into a better place. I could do more. I can't decide where to focus and how to divide my attention and possibilities. Should I aggressively self-study the Sequences? Should I focus on maths and algorithms, or on biases? Should I try to develop my social skills?

And the second question is simple: "Are there any Czechs who are interested in meetups in PRAGUE?"

Thank you

comment by ericgarr · 2013-12-09T00:47:24.408Z · LW(p) · GW(p)

Hi everybody,

My name is Eric, and I'm currently finishing up my last semester of undergraduate study and applying to Ph.D. programs in cognitive psychology/cognitive neuroscience. I recently became interested in the predictive power offered by formal rational models of behavior after working in Paul Glimcher's lab this past summer at NYU, where I conducted research on matching behavior in rhesus monkeys. I stumbled upon Less Wrong while browsing the internet for behavioral economics blogs. After reading a couple of posts, I decided to join.

Some sample topics that I like reading about and discussing include intertemporal choice, risk preferences, strategic behavior in the context of games, reinforcement learning, and the evolution of cooperation. I look forward to chatting with some of you!

comment by aquaticko · 2013-11-29T17:07:02.729Z · LW(p) · GW(p)

Hello, my name is Luke. I'm an urban planning graduate student at Cleveland State University, having completed an undergrad in philosophy at the University of New Hampshire a year ago. It was the coursework I did at that school which led me to be interested in the nebulous and translucent topic of rationality, and I'm happy to see so many people involved and interested in the same conversations I'd spend hours having with classmates. Heck, the very question I was asking myself in something of an ontological sense--am I missing the trees for the forest--is what led me here, specifically to Eliezer's article on the fallacies of compression, which was somewhat helpful. Suffice it to say, I tend to think I'm not missing the trees for the forest, and that in fact the original form of the idiom remains true for most other people, though thankfully, not many here.

I'm deeply interested in epistemology, metaphysics, aesthetics, and metaethics, all of which I attempt to approach in systemic ways. As for what led me to consider myself a rationalist in these endeavors...I'm not sure I do. In fact, I'm not sure anyone can or should think of themselves as a rationalist, considering that basic beliefs, other than solipsism, are inductive and inferential, and thus fallible. We could argue in circles forever (as others have) about what constitutes knowledge, but any definition seems, in my view, to be arbitrary and thus non-universal and therefore, again, fallible--even mathematical knowledge and formal logic.

Granted, I don't sit in a corner rocking back and forth sucking my thumb, driven mad by the uncertainty of it all, but I also operate with the knowledge that whatever I deem rational behavior and thought processes only seem rational because I've pre-decided what constitutes rational behavior (i.e., circularity, or coherentism at best...feeling like I'm writing a duplicate of a different post). Of course, all that seems like too easy an exit from a number of hard problems, so I keep reading to make sure that, in fact, I oughtn't be rocking back and forth in a corner sucking my thumb for the utility of it, turning into a kind of utility monster. An absurdist I remain, but one with a pretty strong intuitive consequentialist metaethical framework which allows me to find great joy in the topics covered on LW.

comment by alexg · 2013-11-13T12:33:03.074Z · LW(p) · GW(p)

G'day

As you can probably guess, I'm Alex. I'm a high school student from Australia and have been disappointed with the education system here for quite some time.

I came to LW via HPMoR which was linked to me by a fellow member of the Aus IMO team. (I seriously doubt I'm the only (ex-)Olympian around here - seems just the sort of place that would attract them). I've spent the past few weeks reading the sequences by EY, as well as miscellaneous other stuff. Made a few (inconsequential) posts too.

I have very little in the way of controversial opinions to offer (relative to the demographics of this site), since just about all the unusual positions it takes are ones I already agreed with (e.g. atheism) or found pretty obvious after some thought (e.g. transhumanism). Maybe it's just hindsight bias.

I'm slightly disappointed with the ban on political discussion. I do agree that it should not be mentioned when not relevant, but it seems a shame to waste this much rationality in one place by forbidding people to use it where it's most needed. A possible compromise would be to create a politics discussion page to discuss the pros and cons of particular ideologies. (If one already exists, point me to it.) A reason cited is that there are other sites to discuss politics - if any do so rationally, I'd like to see them.

It is a relief to be somewhere where I don't have to constantly take into account inferential distance, and I shall try to make the most of this. I resolve to write only that which has not been written.

Replies from: Vaniver
comment by Vaniver · 2013-11-13T17:02:43.634Z · LW(p) · GW(p)

Welcome!

There have been previous political threads, like here, here, or here. If you search "politics," you'll find quite a bit. Here was my response to the proposal that we have political discussion threads; basically, I think politics is a suboptimal way to spend your time. It might feel useful, but that doesn't mean it is useful. Here's Raemon's comment on the norm against discussing politics. Explicitly political discussion can be found on MoreRight, founded by posters active on LessWrong, as well as on other blogs. (MoreRight is part of 'neoreaction', which Yvain has recently criticized here, for example.)

I don't see what you mean by the 'pros and cons' of holding a particular ideology. Ideologies are, generally, value systems- they define what is a pro and what is a con.

Replies from: Lumifer
comment by Lumifer · 2013-11-13T17:20:36.045Z · LW(p) · GW(p)

I must add that not all political discussion is a mud-flinging match between the Cyans and the Magentas.

For example, Public Choice theory is a bona fide intellectual topic, but it's also clearly political.

I would also argue that knowing things like the scope of NSA surveillance is actually useful.

Replies from: Vaniver
comment by Vaniver · 2013-11-13T17:29:16.203Z · LW(p) · GW(p)

Cyans and the Magentas

I'm curious why you'd divert from the historically compelling example of the Blues and the Greens.

For example, the Public Choice theory is a bona fide intellectual topic, but it's also clearly political.

It's about politics, but the methodology is not political. The part of politics that's generally fun for people is putting forth an impassioned defense of some idea or policy. That's generally not useful on LessWrong unless it's about a site policy- and even then, the passion probably doesn't help.

I would also argue that knowing things like the scope of NSA surveillance is actually useful.

Sure.

Replies from: Lumifer
comment by Lumifer · 2013-11-13T17:47:22.120Z · LW(p) · GW(p)

I'm curious why you'd divert from the historically compelling example of the Blues and the Greens.

I strongly associate the Greens with, well, the Greens -- a set of political parties in Europe and the whole environmentalist movement.

Blue is a politically-associated color in the US as well.

The part of politics that's generally fun for people is putting forth an impassioned defense of some idea or policy.

True, but LW is a VERY unrepresentative sample :-) and maybe we could do a bit better. You're right in that discussing the "pros and cons" of ideological positions is not a good idea, but putting "Warning: mindkill" signs around a huge area of reality and saying "we just don't go there" doesn't look appealing either.

comment by afterburger · 2013-07-09T02:38:34.550Z · LW(p) · GW(p)

Hello! I'm here because...well, I've read all of HPMOR, and I'm looking for people who can help me find the truth and become more powerful. I work as an engineer and read textbooks for fun, so hopefully I can offer some small insights in return.

I'm not comfortable with death. I've signed up for cryonics, but still perceive that option as risky. As a rough estimate, it appears that current medical research is about 3% of GDP and extends lifespans by about 2 years per decade. I guess that if medical research spending were increased to 30% of current GDP, then most of us would live forever while feeling increasingly healthy. Unfortunately, raising taxes to achieve this is not realistic -- doubling taxes for an uncertain return is a hard sell, and I have been unable to find research quantifying the link between public research spending and healthcare technology improvements. Another approach is inventing a technology to increase the overall economy size by 10x, by creating a practical self-replicating robot. This is possible in principle (as demonstrated by Hod Lipson in 2006 and by FANUC robot arm factories daily) but I am currently not a good enough programmer to design and build a fully automated RepRap assembly system in a reasonable amount of time. Also, there are many smart and innovative people at Willow Garage, FANUC and other similar organizations, and it seems unlikely I could exceed the slow and incremental progress of those groups. A third option, trying to create super-level AI to make self-replicating robots for me, is even more difficult and unlikely. A fourth option, not taking heroic responsibility, would make me uncomfortable because I'm not that optimistic about the future. As it is, since dropping out of a PhD program I'm not confident in my ability to complete such a large project. Any practical help would be appreciated, as I would prefer not to rely on the untestable promises of quantum immortality, or on the faith that life is a computer game.
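As a quick sanity check of that extrapolation, here is a minimal sketch that takes the 3%-of-GDP and 2-years-per-decade figures above as given and assumes (very strongly) that lifespan gains scale linearly with research spending:

```python
# Back-of-envelope extrapolation of the figures quoted above.
# Assumption: lifespan gain per decade scales linearly with research spending.

def lifespan_gain_per_decade(research_share_of_gdp,
                             baseline_share=0.03,
                             baseline_gain_years=2.0):
    """Naive linear extrapolation of lifespan gained per decade of research."""
    return baseline_gain_years * (research_share_of_gdp / baseline_share)

for share in (0.03, 0.10, 0.30):
    gain = lifespan_gain_per_decade(share)
    outruns_aging = "yes" if gain > 10 else "no"  # >10 years gained per 10 years elapsed
    print(f"spend {share:.0%} of GDP -> ~{gain:.0f} years gained/decade; "
          f"outruns aging? {outruns_aging}")

# Under these assumptions, 30% of GDP yields ~20 years of lifespan per decade,
# i.e. life expectancy grows faster than time passes ("escape velocity"),
# which is the sense in which "most of us would live forever" above.
```

The linear-scaling assumption is doing all the work here; diminishing returns would push the required spending far higher.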

Replies from: idea21
comment by idea21 · 2013-07-25T08:25:15.100Z · LW(p) · GW(p)

Hi, afterburger

I find it correct that you are not comfortable with death; the opposite would be unnatural.

I don't know whether you have ever heard of this person

https://en.wikipedia.org/wiki/Nikolai_Fyodorovich_Fyodorov

"Fedorov argued that the struggle against death can become the most natural cause uniting all people of Earth, regardless of their nationality, race, citizenship or wealth (he called this the Common Cause)."

Fedorov's speculations about a future resurrection of all, although seen today as a joke, are at least able to beat Pascal's wager; and, if we keep in mind the possibilities of new particle physics, it is rational to hope that an extremely altruistic future humanity could decide to resurrect all of us, using technology that today we cannot imagine (the same way that current technology could never have been imagined by Plato or Aristotle).

Although science and technology may well have limits, the most important issue here has to do with motivation: why would a future humanity be interested in acting this way?

The only thing we could do today to help with that would be to start building the moral and cultural foundation of a fully altruistic and rational society (which would inevitably be extremely economically efficient). And that has not been done yet.

comment by polutropon · 2013-07-06T20:11:53.959Z · LW(p) · GW(p)

Hello again, Less Wrong! I'm not entirely new — I've been lurking since at least 2010 and I had an account for a while, but since I've let that one lie fallow for almost two years now, I thought I'd start afresh.

I'm a college senior, studying cognitive psychology with a focus on irrationality / heuristics and biases. In a couple of months I'll be starting my year-long senior thesis, which I'm currently looking for a specific topic for. I'm also a novice Python programmer and a dabbler in nootropics.

I'll be trying to avoid spending too much unproductive time on LW ("insight porn" really is a great description, and I've learned to be wary of being excessively cerebral), but here I am again.

comment by [deleted] · 2013-07-04T13:20:58.571Z · LW(p) · GW(p)

Hi, I'm Alex, a high school student. I came here from HPMoR and have been lurking for about 5 months now.

I use my "rationalnoodles" nickname almost everywhere, however still can't decide if it's appropriate on LW. Would like to read what others think.

Thanks.

Replies from: atorm, Cthulhoo
comment by atorm · 2013-07-04T13:34:40.111Z · LW(p) · GW(p)

It's not INappropriate.

comment by Cthulhoo · 2013-07-04T13:57:55.681Z · LW(p) · GW(p)

I use my "rationalnoodles" nickname almost everywhere, however still can't decide if it's appropriate on LW. Would like to read what others think.

Considering that infanticide is a generally accepted discussion topic here, I don't think people will question a nickname ;)

Welcome!

comment by [deleted] · 2013-06-15T21:54:42.534Z · LW(p) · GW(p)

Hi there. I'm thrilled to find a community so dedicated to the seeking of rational truth. I hope to participate in that.

comment by smercjd · 2013-06-03T21:15:16.763Z · LW(p) · GW(p)

Hi...I'm Will -- I learned about Less Wrong through a very intelligent childhood friend. I am quite nearly his opposite - so maybe I shouldn't say anything...ever...and just stick to reading and learning. But the site recommended leaving an introduction post, and I also like this as a method of learning. I skimmed a few of the articles on the About page and enjoyed them...they provided a good deal of information that I believe I am much better at processing and understanding than at creating. Therefore, I'm excited to see what I get out of this. I'm also curious to attend a Less Wrong meetup. I haven't looked for one yet, but I will.

Part of the problem I have is that I prefer doing things that provide tiny amounts of fun with little to no self-growth. For example, I would prefer playing a game of League of Legends over reading...anything. This is an annoying habit for obvious reasons. So I guess I'd welcome suggestions of where I should start, anything that might help shift me subconsciously (and eventually consciously) in favor of more meaningful activities. Not to abolish random bouts of pointless fun, but rather to refine my efficiency with the time devoted to ALL of my daily activities.

Replies from: Will_Newsome
comment by Will_Newsome · 2013-06-03T21:40:32.607Z · LW(p) · GW(p)

How funny, I'm Will too! Just a quick & probably useless suggestion: be sure to be extremely honest with yourself about what it is all parts of you want, including the parts that want to play League of Legends. If you understand those parts and how they're a non-trivial part of you, not just an adversarial thing set up to subvert your prefrontal cortex's 'real' ambitions, that will allow you to find ways in which those parts can be satisfied that are more in line with your whole self's ambitions. E.g. the appeal of League of Legends is largely that you have understandable, objective goals that you can make measurable cumulative progress on, which is intrinsically rewarding—the parts of you that are tracking that intrinsic reward might be just as well rewarded by a sufficiently well-taskified approach to learning, say, piano, Japanese, programming, and other skills that are more likely to provide long-term esteem-worthy capital. Finding a way to taskify things in general might be tricky, and it won't itself be the sort of thing that you're likely to make unambiguous cumulative progress on, but it's meta and thus is a very good way to bootstrap to a position where further bootstrapping is easier and where you can hold on to momentum.

Replies from: smercjd
comment by smercjd · 2013-06-06T18:26:06.770Z · LW(p) · GW(p)

The suggestion was definitely not useless. Again, I'm not speaking from rationalized information I've gathered from material sources; I'm speaking from what my mind has processed through experiencing life. My current views might be somewhat depressing. Everybody has a different life, no matter how much you want to macro them into generalized functions. What do I mean by all of that? [I have a tendency to not say precisely what I mean.] I mean that I can't be honest with 'all' parts of what I want, because I don't know them. Is there potential for damage to my 'real' ambitions? Maybe they are no longer normal or good.

I completely understand what you mean about what to do once those first steps are established. I don't think that it would have necessarily been my course of action once getting that far. So thanks for the advice!

Again, thank you for posting so quickly. I had time to read it much earlier, but this is my first real chance [League prevailed] to reply. <-- embarrassing, but true.

Replies from: smercjd
comment by smercjd · 2013-06-06T18:30:36.264Z · LW(p) · GW(p)

Oh! And yes taskifying (if/once I get there) will be very difficult for me.

Replies from: smercjd
comment by smercjd · 2013-06-06T18:32:50.072Z · LW(p) · GW(p)

What about you? Why did you start on this site? What brought you to your way of thinking? What are the most interesting aspects? The ones you think newbies like me should start with?

comment by bouilhet · 2013-05-19T20:54:15.854Z · LW(p) · GW(p)

Hello everyone.

I go by bouilhet. I don't typically spend much time on the Internet, much less in the interactive blogosphere, and I don't know how joining LessWrong will fit into the schedule of my life, but here goes. I'm interested from a philosophical perspective in many of the problems discussed on LW - AI/futurism, rationalism, epistemology, probability, bias - and after reading through a fair share of the material here I thought it was time to engage. I don't exactly consider myself a rationalist (though perhaps I am one), but I spend a great deal of my thought-energy trying to see clearly - in my personal life as well as in my work life (art) - and reason plays a significant role in that. On the other hand, I'm fairly committed to the belief (at least partly based on observation) that a given (non-mathematical) truth claim cannot quite be separated from a person's desire for said claim to be true. I'd like to have this belief challenged, naturally, but mostly I'm looking forward to further investigations of the gray areas. Broadly, I'm very attracted to what seems to be the unspoken premise of this community: that being definitively right may be off the table, but that one might, with a little effort, be less wrong.

Replies from: None
comment by [deleted] · 2013-05-19T21:12:04.000Z · LW(p) · GW(p)

I'm fairly committed to the belief (at least partly based on observation) that a given (non-mathematical) truth claim cannot quite be separated from a person's desire for said claim to be true.

So, at the moment I believe that the car I can see out the window to the left of me is cream colored. I don't think this belief is one I desire to be true (I would not be disappointed with a red car, for example). I have (depending on how you count) an infinity of such beliefs about my immediate environment. What do you make of these beliefs, given your above claim?

Replies from: bouilhet, Kawoomba
comment by bouilhet · 2013-05-19T22:25:15.926Z · LW(p) · GW(p)

Thanks for your reply, hen.

I guess I don't think you're making a truth claim when you say that the car you see is cream-colored. You're just reporting an empirical observation. If, however, someone sitting next to you objected that the same car was red, then there would be a problem to sort out, i.e. there would be some doubt as to what was being observed, whether one of you were color blind, etc. And in that case I think you would desire your perception to be the accurate one, not because cream-colored is better than red, but because humans, I think, generally need to believe that their direct experience of the world is reliable.

For practical purposes, intuition is of course indispensable. I prefer to distinguish between "beliefs" and "perceptions" when it comes to one's immediate environment (I wouldn't say I believe I'm sitting in front of my computer right now; I'd simply say that I am sitting in front of my computer), but there are also limits to what can be perceived immediately (e.g. by the naked eye) which can destabilize perceptions one would otherwise be happy to take for granted.

So: for most intents and purposes, I have no interest in challenging your report of what was seen out the window. But it seems to me that in making your report you already have some interest in its accuracy.

Replies from: None
comment by [deleted] · 2013-05-19T22:28:31.426Z · LW(p) · GW(p)

Thanks for clarifying.

comment by Kawoomba · 2013-05-19T21:27:34.152Z · LW(p) · GW(p)

I have (depending on how you count) an infinity of such beliefs about my immediate environment.

No you don't.

Replies from: None
comment by [deleted] · 2013-05-19T22:18:44.395Z · LW(p) · GW(p)

Yes I do. I believe that the car outside is cream colored. I believe that the car outside is not a cat. I believe that the car outside is heavier than 2.1312 kilograms, I believe...etc. I have an uncountably infinite number of beliefs just about the weight of that car!

You might not want to call these 'beliefs' for one reason or another, but that's irrelevant to the grandparent: the great-grandparent is just discussing truth-claims and my attitude towards them. And I can clearly make an infinite number of truth-claims about my immediate environment, given infinite claim-making resources, of course (I assume, perhaps wrongly, that the question of my claim-making resources isn't relevant to the point about belief and desire).

Replies from: Kawoomba
comment by Kawoomba · 2013-05-20T06:23:55.707Z · LW(p) · GW(p)

When you say "I have an infinity of such beliefs", or even just "I can make an infinite number of truth-claims", I assume that the "I" refers to "hen", not some hypothetical entity with an infinite memory capacity (for the former), or an infinite lifespan (for the latter).

Unless you aren't talking about yourself (and that car), both claims (have an infinity of beliefs, can make an infinite number of truth-claims) are obviously false on resource grounds alone. Even the number of truth-claims you could make in the remainder of your lifetime is limited. (In a hypothetical with infinite resources, it still would be a stretch to construct an infinite number of distinct claims about a finite object.)

Edit: You edited the "given infinite claim-making resources" in later, which contradicts your "I" and the whole scenario I responded to. "I have an infinite number of beliefs" - "No you don't" - "Yes I do ... with infinite resources" - "You don't have infinite resources" - ????

Replies from: None
comment by [deleted] · 2013-05-20T13:54:47.601Z · LW(p) · GW(p)

Yeah, but the point about resources isn't relevant to my question. Though, in fact, neither is the idea that I have an infinity of beliefs. So tapping out.

Edit: though, you know, this is an interesting question and I feel unsure of my answer, so I'd like to hear your objection. My thought is that if I believe A, and if A implies B, and if I'm aware that A implies B, then I believe B.

So in this case, I believe the car (now gone, sadly) weighs more than 100 kg. I'm aware that this implies that it weighs more than 99 kg. I'm also aware that this implies that it weighs more than all the real numbers of kilograms between 99 and 100. This is an infinity, and therefore I have an infinity of beliefs. Is that wrong?
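(A compact way to state the cardinality claim being made here - my formalization, not the commenter's:

$$B = \{\ \text{"the car weighs more than } x \text{ kg"} : x \in (99, 100)\ \}, \qquad |B| = |(99,100)| = 2^{\aleph_0},$$

so if every such implied claim counts as a belief, the set is not merely infinite but uncountably so. The disagreement below is over whether unexamined implications count as beliefs, not over this cardinality.)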

Replies from: TheOtherDave, Kawoomba
comment by TheOtherDave · 2013-05-20T16:38:58.414Z · LW(p) · GW(p)

I'm not Kawoomba, but I would say that yes, that's wrong: the logical implications of my beliefs are not necessarily beliefs that I have, they are merely beliefs that I am capable of generating. (And in some cases, they aren't even that, but that's beside the point here.)

More specifically: do I believe that my car weighs more than 17.12311231 kilograms? Well, now that I've asked the question, yes I do. Did I believe that before I asked the question? No, I wouldn't say so... though in this case, the derivation is so trivial it would not ordinarily occur to me to highlight the distinction.

The distinction becomes more salient when the derivation is more difficult; I can easily imagine myself responding to a Socratic question with some form of "Huh. I didn't believe X a second ago, but X clearly follows from things I do believe, and which on reflection I continue to endorse, so I now believe X."

Replies from: None
comment by [deleted] · 2013-05-20T17:00:24.357Z · LW(p) · GW(p)

Did I believe that before I asked the question? No, I wouldn't say so...

Why not? Perhaps you could spell out the Socratic case a little more? I'm not stuck on saying that this or that must be what constitutes belief, but I do have the sense that I believe vastly more than what I do (or even am able to) call up in a given moment. This is why I'm reluctant to call explicit awareness* a criterion of belief. On the other hand, I'm not logically omniscient, so I can't be said to believe everything that follows from what I'm explicitly aware that I believe. My guess as to a solution is that I believe (at least) everything that follows from what I explicitly believe, where those implications are cases of implications I am explicitly aware of.

So for example, I am explicitly aware that the car weighs more than 100kg, and I'm explicitly aware that it follows from this that the car weighs more than 99kg, and more than everything between 99 and 100kg, and that it follows from this that it weighs more than 99.1234...kg. Hence, infinite beliefs.

*Edit: explicit awareness should be glossed: I mean by this the relation I stand to a claim after you've asked me a question and I've given you that claim as an answer. I'm not sure what this involves, but 'explicit awareness' seems to describe it pretty well.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-20T17:23:58.704Z · LW(p) · GW(p)

I'm not sure I have anything more to say; this feels more like a question of semantic preferences than anything deep. That is, I don't think we disagree about what my brain is doing, merely what words to assign to what my brain is doing.

I certainly agree that I have many more things-I-would-label-beliefs than I am consciously aware of at any given moment. But I still wouldn't call "my car weighs more than 12.141341 kg" one of those beliefs. Nor would I say that I was explicitly aware that it followed from "car > 100kg" that "car > 12.141341 kg" prior to explicitly thinking about it.

Replies from: None
comment by [deleted] · 2013-05-20T17:33:55.287Z · LW(p) · GW(p)

That is, I don't think we disagree about what my brain is doing, merely what words to assign to what my brain is doing.

We agree on what our brains are doing. I think we disagree on whether or not our beliefs are limited to what our brains are or were doing: I suppose I'm saying that I should be said to believe right now what my brain would predictably do (belief/inference wise) on the basis of what it's doing and has already done (excluding any new information).

Suppose we divide my beliefs (on my view of 'belief') into my occurrent beliefs (stuff my brain has done or is doing) and my extrapolated beliefs (stuff it would predictably do, excluding new information). If you grant that my extrapolated beliefs have some special status that differentiates them from, say, the beliefs I'll have about the episodes of The Americans I haven't watched yet, then we're debating semantics. If you don't think my extrapolated beliefs are importantly different from any old beliefs I'll have later on, then I think we're arguing about something substantial.

Nor would I say that I was explicitly aware that it followed from "car > 100kg" that "car > 12.141341 kg" prior to explicitly thinking about it.

I mean that supposing you're explicitly aware of a more general claim, say 'the car weighs more than any specific real number of kilograms less than 100kg', then you believe the (infinite) set of implied beliefs about the relation of the car's weight to every real number of kg below 100, even though your brain hasn't, and couldn't, run through all of those beliefs explicitly.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-20T18:29:04.075Z · LW(p) · GW(p)

Yes, I grant that beliefs which I can have in the future based on analysis of data I already have are importantly different from beliefs I can have in the future only if I'm given new inputs.
Yes, I agree that the infinite set of implied beliefs about the car's weight is in the former category, assuming I'm aware that the car weighs more than 100 kg and that numbers work the way they work.
I think we're just debating semantics.

Replies from: None
comment by [deleted] · 2013-05-20T18:30:34.037Z · LW(p) · GW(p)

Okay, well, thanks for giving me the opportunity to think this through a bit more.

comment by Kawoomba · 2013-05-20T17:47:56.081Z · LW(p) · GW(p)

(Let me just add to what TheOtherKawoomba already said)

This is an infinity, and therefore I have an infinity of beliefs. Is that wrong?

If that were so, then "I believe the sky is blue" would mean "I have an infinity of beliefs about the sky, namely that it is blue, so it is also 'not blue + 1/nth of the distance to the next color'" (then vary the n).

A student writing down "x>2" would have stated an infinity of beliefs about the answer. Does that seem like a sensible definition of belief? Say I picked one out of your infinite beliefs about the car's weight. Where is it located in your brain? Which synapses encode it? It would have to be the same ones also encoding an infinity of other beliefs about the car's weight. Does that make sense? I plead the Chewbacca defense.

There's another problem if you treat all the implications of your beliefs as if they were themselves your beliefs, even when you've never explicitly followed the implication. Propositions in math simply follow from axioms, i.e. they are implications of some basic beliefs. Yet for some of those, the truth value is famously not yet known. If you held every belief logically implied by your stated beliefs to be your belief just the same, you'd face a conundrum - you'd be uncertain about such famous-but-unresolved propositions, yet that uncertainty isn't in the territory: either the proposition is implied by the axioms or it isn't. And you couldn't actually construct the set of "beliefs implied by this belief". So would you count only "trivial" implications, such as in your example? You'd still need to evaluate them, and it is that simple fact of having to evaluate whether an implication actually is one - or even whether 99 is actually smaller than 100, however trivial it seems - that is the basis for the new (derived) belief, and the reason you cannot automatically follow an infinity of implications simultaneously. Since you cannot evaluate an infinity of numbers, you cannot hold an infinity of beliefs.

Replies from: None
comment by [deleted] · 2013-05-20T18:11:29.547Z · LW(p) · GW(p)

If that were so...

Agreed. Edit: I don't think the one claim means the other, but I do agree that the one (in this case) implies the other. Do you believe that the sky's being blue excludes its being (at the same time and in the same respect) red?

A student writing down "x>2" would have stated an infinity of beliefs about the answer.

Well, the student could be said to believe an infinity of things about the answer, not that the student has stated such an infinity. We agree that to state (or explicitly think about) an infinity of beliefs would be impossible.

Where is it located in your brain?

In response to Dave (the other one), I distinguished beliefs on my view into occurrent beliefs (those beliefs that do or have corresponded to some neural process) and extrapolated beliefs (those beliefs, barring any new information, my brain could predictably arrive at from occurrent beliefs). I am saying that I should be said to believe right now both all of my occurrent beliefs and all my extrapolated beliefs, and that my extrapolated beliefs are infinite. My extrapolated beliefs have no place in my brain, but they're safely in the bounds of logic+physics.

I plead the Chewbacca defense.

I...haven't heard that one.

There's another problem if you consider all the implications as if they were your beliefs, even if you've not explicitly followed the implication.

I don't think this, I agree that this would lead to absurd results.

comment by Free_NRG · 2013-04-21T09:39:50.388Z · LW(p) · GW(p)

Hi! I'm Free_NRG. I've just started a physical chemistry PhD. I found this site through a link from Leah Libresco early last year (I can't remember exactly how I found her blog). I read through the sequences as one of the distractions from too much 4th year chemistry, and particularly liked the probability theory and evolutionary theory sequences. This year, I'm trying to apply some of the productivity porn I've been reading to my life. I'm thinking of blogging about it.

comment by EliasHasle · 2014-02-07T11:04:41.665Z · LW(p) · GW(p)

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-02-07T11:57:40.831Z · LW(p) · GW(p)

Hi Elias, nice to see that you've found your way here. What are your academic interests? Philosophy, it seems, but what kind? And what else are you interested in?

Replies from: EliasHasle
comment by zoltanistvan · 2014-02-07T02:03:10.229Z · LW(p) · GW(p)

Hi, My name is Zoltan Istvan. I'm a transhumanist, futurist, journalist, and the author of the philosophical novel "The Transhumanist Wager." I've been checking out this site for some time, but decided to create an account today to become closer to the community. I thought I'd start by posting an essay I recently wrote, which sums up some of my ideas. Feel free to share it if you like, and I hope you find it moving. Cheers.

"When Does Hindering Life Extension Science Become a Crime—or even Genocide?"

Every human being has both a minimum and a maximum amount of life hours left to live. If you add together the possible maximum life hours of every living person on the planet, you arrive at a special number: the optimum amount of time for our species to evolve, find happiness, and become the most that it can be. Many reasonable people feel we should attempt to achieve this maximum number of life hours for humankind. After all, very few people actually wish to prematurely die or wish for their fellow humans' premature deaths.

In a free and functioning democratic society, it's the duty of our leaders and government to implement laws and social strategies to maximize these life hours that we want to safeguard. Regardless of ideological, political, religious, or cultural beliefs, we expect our leaders and government to protect our lives and ensure the maximum length of our lifespans. Any other behavior cuts short the time human beings have left to live. Anything else becomes a crime of prematurely ending human lives. Anything else fits the common legal term we have for that type of reprehensible behavior: criminal manslaughter.

In 2001, former President George W. Bush restricted federal funding for stem cell research, one of the most promising fields of medicine in the 21st Century. Stem cells can be used to help fight disease and, therefore, can lengthen lives. Bush restricted the funding because his conservative religious beliefs—some stem cells came from aborted fetuses—conflicted with his fiduciary duty of helping millions of ailing, disease-stricken human beings. Much medical research in the United States relies heavily on government funding and the legal right to do the research. Ultimately, when a disapproving President limits public resources for a specific field of science, the research in that field slows down dramatically—even if that research would obviously lengthen and improve the lives of millions.

It's not just politicians that are prematurely ending our lives with what can be called "pro-death" policies and ideologies. In 2009, on a trip to Africa, Pope Benedict XVI told journalists that the epidemic of AIDS would be worsened by encouraging people to use condoms. More than 25 million people have died from AIDS since the first cases began being reported in the news in the early 1980s. In numerous studies, condoms have been shown to help stop the spread of HIV, the virus that causes AIDS. This makes condoms one of the simplest and most affordable life extension tools on the planet. Unfathomably, the billion-person strong Catholic Church actively supports the idea that condom usage is sinful, despite the fact that such a malicious policy has helped sicken and kill a staggering number of innocent people.

Regrettably, in 2014, America continues to be permeated with an anti-life extension culture. Genetic engineering experiments in humans often have to pass numerous red-tape-laden government regulatory bodies in order to conduct any tests at all, especially at publicly funded universities and research centers. Additionally, many states still ban human reproductive cloning, which could one day play a critical part in extending human life. The current US administration is also culpable. The White House is simply not doing enough to extend American lifespans. The US Government spends just 2% of the national budget on science and medical research, while its defense budget is over 20%, according to a 2011 US Office of Management and Budget chart. Does President Obama not care about this fact, or is he unaware that not actively funding and supporting life extension research indeed shortens lives?

In my philosophical novel The Transhumanist Wager, there is a scene which takes place outside of a California courthouse where transhumanist activists are holding up a banner. The words inscribed on the banner sum up some eye-opening data: "By not actively funding life extension research, the amount of life hours the United States Government is stealing from its citizens is thousands of times more than all the American life hours lost in the Twin Towers tragedy, the AIDS epidemic, and the Vietnam War combined. Demand that your government federally fund transhuman research, nullify anti-science laws, and promote a life extension culture. The average human body can be made to live healthily and productively beyond age 150."

Some longevity experts think that with a small amount of funding—$50 billion—targeted specifically towards life extension research and ending human mortality, average human lifespans could be increased by 25-50 years in about a decade's time. The world's net worth is over $200 trillion, so the species can easily spare a fraction of its wealth to gain some of the most valuable commodities humans have: health and time.

Unfortunately, our species has already lost a massive amount of life hours; billions of lives have been unnecessarily cut short in the last 50 years because of widespread anti-science attitudes and policies. Even in the modern 21st Century, our evolutionary development continues to be significantly hampered by world leaders and governments who believe in non-empirical, faith-driven religious doctrines—most of which require the worship of deities whose teachings totally negate the need for radical life extension science. Virtually every major leader on the planet believes their "God" will give them an afterlife in a heavenly paradise, so living longer on planet Earth is just not that important.

Back in the real world, 150,000 people died yesterday. Another 150,000 will cease to exist today, and the same amount will disappear tomorrow. A good way to reverse this widespread deathist attitude should start with investigative government and non-government commissions examining whether public fiduciary duty requires acting in the best interest of people's health and longevity. Furthermore, investigative commissions should be set up to examine whether former and current top politicians and religious leaders are guilty of shortening people's lives for their own selfish beliefs and ideologies. Organizations and other global leaders that have done the same should be scrutinized and investigated too. And if fault or crimes against humanity are found, justice should be administered. After all, it's possible that the Catholic Church's stance on condoms will be responsible for more deaths in Africa than the Holocaust was responsible for in Europe. Over one million AIDS victims died in Africa last year alone. Catholicism is growing quickly in Africa, and there will soon be nearly 200 million Catholics on the continent. Obviously, the definition of genocide needs to be reconsidered by the public.

As a civilization of advanced beings who desire to live longer, better, and more successfully, it is our responsibility to put government, religious institutions, big business, and other entities that endorse pro-death policies on notice. Society should stand ready to prosecute anyone that deliberately promotes agendas and actions that prematurely end people's useful lives. Stifling or hindering life extension science, education, and practices needs to be recognized as a legitimate crime.

Replies from: James_Miller, Richard_Kennaway
comment by James_Miller · 2014-02-07T04:48:46.132Z · LW(p) · GW(p)

it is our responsibility to put government, religious institutions, big business, and other entities that endorse pro-death policies on notice. Society should stand ready to prosecute anyone that deliberately promotes agendas and actions that prematurely end people's useful lives.

A tiny minority group such as transhumanists should not make threats against the powers that be.

Replies from: thirdfloornorth, zoltanistvan
comment by thirdfloornorth · 2014-02-07T06:37:45.202Z · LW(p) · GW(p)

He's making it himself, not as a spokesperson for the movement. However, as a transhumanist myself, I can't say I disagree with him. Morally speaking, when does not only actively hindering, but choosing not to vehemently pursue, life extension research constitute a threat to our lives?

Maybe it is time (or if not, it will be very soon) for transhumanism and transhumanists to enter the public sphere, to become more visible and vocal.

We have the capacity, for the first time in human history, to potentially end death, and not for our progeny but for ourselves, now. Yet we are disorganized, spread thin, essentially invisible in terms of public consciousness. People are having freakouts about something as mundane as Google Glass: We are talking about the cyberization or gross genetic manipulation of our bodies, increasing life spans to quickly approach "indefinite", etc., and not in some distant future, but in the next twenty or thirty years.

We are being held back by lack of funding, poor cohesion, and a general failure of imagination, and that is largely our own fault for being content to be quiet, to remain a fringe element, optimistically debating and self-congratulating in nooks and niches of various online communities, bothering and being bothered by few if any.

I believe it is our moral imperative to, now that it is possible, pursue life extension with every cent and scrap of resources we have available to us. To do otherwise is reprehensible.

http://www.nickbostrom.com/fable/dragon.html

Let Mr. Istvan make his threats, as long as it gets people talking about us.

Replies from: James_Miller
comment by James_Miller · 2014-02-07T16:10:27.234Z · LW(p) · GW(p)

I believe it is our moral imperative to, now that it is possible, pursue life extension with every cent and scrap of resources we have available to us. To do otherwise is reprehensible.

This means taking a consequentialist public relations strategy. Imagine that group X advocates Y, and you know little about X and based on superficial analysis Y seems somewhat silly. How would your opinion of group X change if you find members of this group want to "prosecute anyone" who stands in the way of Y?

comment by zoltanistvan · 2014-02-07T06:17:12.449Z · LW(p) · GW(p)

Hi, Thanks for the response. I should be clear; transhumanists are not making the threat. I'm making it myself. And I'm doing it as publicly and openly as possible so there can be no misunderstanding:

http://www.psychologytoday.com/blog/the-transhumanist-philosopher/201401/when-does-hindering-life-extension-science-become-crime

http://ieet.org/index.php/IEET/more/istvan20140131

The problem is that lives are on the line. So I feel someone needs to openly state what seems to be quite obvious. Thanks for considering my thoughts.

comment by Richard_Kennaway · 2014-02-07T12:16:46.180Z · LW(p) · GW(p)

Society should stand ready to prosecute anyone that deliberately promotes agendas and actions that prematurely end people's useful lives.

Do you apply this stirring declaration to the beginning of a life as well as to the end of one?

Replies from: zoltanistvan
comment by zoltanistvan · 2014-02-07T21:19:50.225Z · LW(p) · GW(p)

First, let me just say that the essay is designed to provoke and challenge, while also aiming to move the idea forward in hopes that life extension can be taken more seriously. I realize the incredible difficulties and violations of freedom that the ideas in the essay would entail. But to answer your question, I tend to concentrate on "useful" lives, so the declaration would not apply to the beginning of life, but rather to those lives that are already well under way.

comment by [deleted] · 2013-11-10T19:38:12.045Z · LW(p) · GW(p)

Hi, I arrived here through HPMoR at least a year ago, but I was pretty intimidated by the size of the Sequences - I am trying to catch up now. I'm a medical student from Hungary and I've never learnt maths beyond the high school requirements (I do intend to resolve this, since it seems like a requirement here?).

I'm here to learn how to effectively change my mind and have intelligent discussion. I probably won't be active until later, as I don't think I would be able to present my reasonings in a sufficiently convincing way, and I already see a few points where I will politely disagree with the community average.

My ultimate goal is to understand how to help other people update on evidence. Disease prevention is quite close to my heart, and while it is a common argument that you shouldn't question people's decisions that are not harming other people (at least directly), I'm not convinced that you can call it an informed decision to e.g. smoke when one is full of rationalisations and such. I'm interested in learning/developing a method that is time- and cost-efficient for helping people change their own minds - this is not only a rationality problem but also a communication one, as I have to present myself in just the right way, not being overly arrogant or meek, make it look like the person got to the point of updating on their own, etc. I'm not hoping to find this method soon, though.

comment by aarongertler · 2013-10-10T20:38:04.722Z · LW(p) · GW(p)

Salutations!

My name is Aaron. I'm a college junior on the tail end of the cycle of Bar Mitzvah to New Atheist to info-omnivorous psychology geek to attempted systems thinker. Prospective Psychology/Cognitive Science major at Yale, very interested in meeting other rationalists in the New Haven area. I'm on the board of the Yale Humanist Community, I'm a research assistant in a neuroscience lab, and I do a lot of writing.

Big problems I've been thinking a lot about: Why are most people wildly irrational in the amount of time they're willing to devote to information search (that is, reducing uncertainty around uncertain decisions)? How can humanists and rationalists build a compelling community that serves adults of all ages as well as children? What sorts of media tend to encourage the "shift" from bad thinking to good thinking, and/or passive to active thinking (NPC vs. hero mindset, sort of--this one is complicated), and how can we get that media in the hands of more people?

I read HPMoR without really noticing Less Wrong, but have been linked to a few posts over the years. Last spring, I found "Privileging the Question", which rang so true that I went on to read the Sequences and much of the rest. I was never very certain in my philosophy before finding the site, but now I'm pretty sure I at least know how to think about philosophy, which is nice.

The next few years hopefully involve me getting a job out of college that will allow me to build savings while donating plenty, while aligning me to take a position in some high-upside sector of tech or in the rationalist arena, but a lot of people say that, and I'm very unsure about what will actually happen if I flunk my case interviews. Still, the future will be better than the past regardless, and that thought keeps me going (as does knowing how many people are out there working to avoid future-is-worse-than-past scenarios).

comment by nasrin · 2013-10-02T23:31:31.096Z · LW(p) · GW(p)

Hi! Everyone below is superbly impressive! I'm a physicist, in my second year of teaching English, and that's as much rationality as I can provide at the moment. Looking to relocate to China in an effort to be superhuman. Would really appreciate a few pointers on teaching institutions to avoid/embrace.

Excellent reading here, thanks! Nas

comment by pashakun · 2013-07-26T05:13:31.975Z · LW(p) · GW(p)

I'm Pasha, a financial journalist based in Tokyo.

I recently found out about this blog from this post on The View From Hell: http://goo.gl/DCNX4U

A few years in a school specialized in math and physics in the former Soviet Union have convinced me to seek my fortunes in liberal arts. (It's those kids in my class who would yell out an answer to a physics problem even before the teacher has finished reading the question.)

Covering the semiconductor industry here in Japan has sparked a renewed appreciation of the scientific method and revived my interest in rationality, math and computation. ... One thing leads to another and here I am ~

comment by Heraclitus · 2013-07-21T04:29:25.104Z · LW(p) · GW(p)

So: Here goes. I'm dipping my toe into this gigantic and somewhat scary pool/lake(/ocean?).

Here's the deal: I'm a recovering irrationalic. Not an irrationalist; I've never believed in anything but rationalism (in the sense it's used here, but that's another discussion), formally. But my behaviors and attitudes have been stuck in an irrational quagmire for years. Perhaps decades, depending on exactly how you're measuring. So I use "irrationalic" in the sense of "alcoholic"; someone who self-identifies as "alcoholic" is very unlikely to extol the virtues of alcohol, but nonetheless has a hard time staying away from the stuff.

And, like many alcoholics, I have a gut feeling that going "cold turkey" is a very bad idea. Not, in this case, in the sense that I want to continue being specifically irrational to some degree or another, but in that I am extremely wary of diving into the list of readings and immersing myself in rationalist literature and ideology (if that is the correct word) at this point. I have a feeling that I need to work some things out slowly, and I have learned from long and painful experience that my gut is always right on this particular kind of issue.

This does not mean that linking to suggested resources is in any way not okay, just that I'm going to take my time about reading them, and I suppose I'm making a weak (in a technical sense) request to be gentle at first. Yes, in principle, all of my premises are questionable; that's what rationalism means (in part). But...think about it as if you had a new, half-developed idea. If you tell it to people who tear it apart, that can kill it. That's kind of how I feel now. I'm feeling out this new(ish) way of being, and I don't feel like being pushed just yet (which people who know me might find quite rich; I'm a champion arguer).

Yes, this is personal, more personal than I am at all comfortable being in public. But if this community is anything like I imagine it to be (not that I don't have experience with foiled expectations!), I figure I'll probably end up divulging a lot more personal stuff anyway.

I honestly feel as if I'm walking into church for the first time in decades.

So why am I here then? Well, I was updating my long-dormant blog by fixing dead links &c, and in doing so, discovered to my joy that Memepool was no longer dead. There, I found a link to HPMOR. Reading this over the next several days contributed to my reawakening, along with other, more personal happenings. This is a journey of recovery I've been on for, depending on how you count, three to six years, but HPMOR certainly gave a significant boost to the process, and today (also for personal reasons) I feel that I've crossed a threshold, and feel comfortable "walking into church" again.

Alright, I'll anticipate the first question: "What are you talking about? Irrationality is an extremely broad label." Well, I'm not going to go into to too terribly much detail just now, but let's say that the revelation or step forward that occurred today was realizing that the extremely common belief that other people can make you morally wrong by their judgement is unequivocally false. This (that this premise is false) is what I strongly believed growing up, but...well, perhaps "strongly" is the wrong word. I had been raised in an environment that very much held that the opposite was true, that other people's opinion of you was crucial to your rightness, morality and worth as a human being. Nobody ever said it that way, of course, and would probably deny it if put that way, but that is nonetheless how most people believe. However, in my case it was so blatant that it was fairly easy to see how ridiculous it was. Nonetheless, as reasonable as my rational constructions seemed to me, there was really no way I could be certain that I was right and others were wrong, so I held a back-of-my-head belief, borne of the experience of being repeatedly mistaken that every inquisitive child experiences, that I would someday mature and come to realize I had been wrong all along.

Well, that happened. Sort of. Events in my life picked at that point of uncertainty, and I gave up my visceral devotion to rationality and personal responsibility, which led slowly down into an awful abyss that I'm not going to describe at just this moment, that I have (hopefully) at last managed to climb out of, and am now standing at the edge, blinking at the sunlight, trying to figure out precisely where to go from here, but wary of being blinded by the newfound brilliance and wishing to take my time to figure out the next step.

So again, then, why am I here? If I don't want to be bombarded with advice on how to think more rationally, why did I walk in here? I'm not sure. It seemed time, time to connect with people who, perhaps, could support me in this journey, and possibly shorten it somewhat.

I also notice that this thread has gone waaay beyond 500 comments; perhaps someone with more Karma than I can make a new Welcome thread?

Replies from: Heraclitus
comment by Heraclitus · 2013-07-21T04:48:32.618Z · LW(p) · GW(p)

So since I wrote this five minutes ago, I've gotten some insights (through looking at one of the links on the welcome page above) into why I'm so wary of being bombarded with arguments explaining how to be rational. Hopefully commenting on my own comment won't discourage others from doing so.

I'm not wary because I'm afraid my newfound insight is going to be damaged somehow; quite the contrary. I'm wary because I strongly fear that all these rationalist arguments will be very seductive. However, I've tried very hard my whole life (with varying degrees of success) to make sure my thoughts and ideas were my own, and, having so recently stepped back into the light, I fear I might be very susceptible to rationalist arguments. "But that's a good thing," you might say, "because it's rationalism!" (or rather, some more complicated and convincing formulation). Well, sure, but that doesn't make any specific rationalist argument certain to be right, and I'm not sure I feel competent to evaluate the truth of claims that sound very good and I really want to believe right now.

Replies from: satt
comment by satt · 2013-07-21T11:35:48.597Z · LW(p) · GW(p)

So since I wrote this five minutes ago, I've gotten some insights (through looking at one of the links on the welcome page above) into why I'm so wary of being bombarded with arguments explaining how to be rational.

You might like a couple of pieces that take a similarly positive-but-tempered view of LW-style rationality (both written by the person — Yvain — who wrote the piece at your link, as it happens): "Extreme Rationality: It's Not That Great" and "Epistemic learned helplessness". You might also like Yvain's other LW posts, most of which work as standalone pieces and are worth reading.

Replies from: Heraclitus, Heraclitus
comment by Heraclitus · 2013-07-24T00:36:16.813Z · LW(p) · GW(p)

Wow. Thank you. I just finished "Epistemic Learned Helplessness," and I feel much better now. Those two articles have successfully inoculated me against being sucked in too easily into the "x-rationalist" view.

I actually disagree with what he says in "Epistemic Learned Helplessness"; or rather, I don't believe that that helplessness is actually necessary, that I can--or if I can't, it is possible to with sufficient training--tell when a case has been reasonably proven and when I should suspend judgement. Or maybe he's more right than I like to admit; I have to concede that I was taken in by much of Graham Hancock's work until I tried to write a short story based on one of his ideas and it completely fell apart after some research and analysis. But regardless of whether the dilemma he poses is avoidable or not, he makes some excellent, indeed critical, points, and I can now proceed with a healthy dose of skepticism of rationalism, a phrase I would likely have been ashamed to utter before reading that article.

comment by Heraclitus · 2013-07-23T22:32:15.992Z · LW(p) · GW(p)

Okay, I've read the first article you linked, and I'm discovering that I was naive about what this site was about (this should not be surprising after all the times similar things have happened to me, but it apparently still is). I've read HPMOR, of course, but I didn't catch on that this site would be specifically geared to using specific, formal, scientifically-derived techniques to improve thinking. The article mentioned Scientology; this kind of sounds a little like Scientology (well, Dianetics) to me, though I'm sure it makes much more formal sense. This makes me still more wary than before; I like my own "organic" rationalist methods, and am skittish of adopting some formal "system" of thought. This is more grousing than complaint; I do not have enough information to intelligently critique at this point, although the thing that bothered me most about Harry was his overuse of formal techniques instead of just trying to grok the whole situation in a more organic fashion; that just seems like a good way to miss something. This does not mean that reading about common errors in thinking couldn't be useful.

I'm disappointed that my post didn't receive more response (poor me! I want attention! Well, alright, I was hoping for something analogous to a support group), but I appreciate yours. I'll definitely keep reading.

Replies from: satt
comment by satt · 2013-07-29T01:43:04.459Z · LW(p) · GW(p)

the thing that bothered me most about Harry was his overuse of formal techniques instead of just trying to grok the whole situation in a more organic fashion; that just seems like a good way to miss something.

I can't speak to how well this works out for Harry (I haven't read HPMoR) but I think I can guess why this bites people in real life.

The methods that work for someone tend to be the ones they're already familiar with. Why? At least two reasons. The boring one is that people are less likely to stick with methods that obviously don't work, so obviously bad methods get forgotten about and become unfamiliar again. The more interesting reason is that using a method makes it "better": practice allows you to apply it more quickly when it's relevant, you learn to recognize more quickly the situations where the method's relevant, and you get better at integrating what you learn from that method with your other thoughts.

This is why it can be safer to organically accrete a system of thinking piece by piece than to install a fully-fledged system in one go; you only have to keep one piece in your head at a time, and you can focus on that one piece for a while until you're used to it and can apply it without much conscious effort. By contrast, trying to take on a complete system in one go means you're constantly having to think hard about which parts of it are relevant to each problem you confront. It's the difference between seeing a loose screw sticking out of something and knowing you need a screwdriver to tighten it, and seeing a loose screw sticking out of something and emptying your toolbox on the floor so you can try each tool one-by-one.

The important distinction isn't so much between formal methods and organic methods, but between methods you've fully internalized and methods you haven't. A formal method that's permanently imprinted into your mind through practice is likely to be quicker to use, easier to use, and more effective than an informal method you've only just heard about. Eventually, if you practice a technique enough, formal or not, there's a good chance your brain will automatically reach out and apply it in the normal course of grokking a whole situation organically. (For example, if I need to predict or reason about some recurrent event in my life, I often automatically apply reference class forecasting without much thought, and I readily integrate that information with any other information I can glean about the event.)
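For readers who haven't met the technique satt mentions in passing, here is a minimal sketch of reference class forecasting; the scenario, numbers, and function name are hypothetical and purely illustrative:

```python
def reference_class_forecast(past_outcomes, quantiles=(0.25, 0.5, 0.75)):
    """Forecast a recurrent event from the empirical distribution of similar past cases.

    This is the "outside view": instead of reasoning about why this time will be
    different, look at how comparable cases actually turned out.
    """
    ordered = sorted(past_outcomes)
    n = len(ordered)
    # Crude empirical quantiles; a real analysis might weight cases by similarity.
    return {q: ordered[min(n - 1, int(q * n))] for q in quantiles}

# Hypothetical data: how many days my last eight "one-week" side projects really took.
past_durations = [9, 12, 7, 20, 11, 15, 8, 13]
print(reference_class_forecast(past_durations))
# {0.25: 9, 0.5: 12, 0.75: 15} -> plan for ~12 days, and don't be shocked by 15.
```

The point is simply to predict a recurrent event from how similar past cases turned out, rather than from an inside story about this particular case.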

So I think it makes sense to take this stuff at whatever pace feels comfortable. Certainly, when I first landed on LW, I didn't shoot off and read all of the sequences of core posts in one go. I just clicked around, read recent discussions, and when people referred to individual posts in the sequences while discussing other things, I'd click through and read the post they linked to. (And then if I felt like reading more, I'd look at the other posts linked by that post!)

I'll definitely keep reading.

Enjoy the site!

comment by tofu257 · 2013-06-16T15:19:51.314Z · LW(p) · GW(p)

Hello

I've been reading LW for a long time. At the moment I'd like to learn about decision making more rigorously as well as finding out how to make better decisions myself - and then actually doing that in real life.

I'm also very interested in algorithmic reasoning about and creation of computer programs but I know far too little about this.

comment by Salguod · 2013-05-29T05:26:53.047Z · LW(p) · GW(p)

Hi folks --

In high school I became obsessed with Gödel, Escher, Bach; in college in the 80s I studied philosophy of language, linguistics and AI; then tracked along with that stuff on the side through various career incarnations through the 90s (newspaper production guy, systems programmer, Internet entrepreneur, etc.). I'm now a transactional attorney who helps people buy and sell services and technology and work together to make stuff -- sort of a meta-anti-Lloyd Dobler.

I'm de-lurking because I finished HP:MoR a month ago and I'm chewing through the sequences at a rapid clip; it's all resonating nicely with my decades-long marinade in a lot of the same source materials referenced in the sequences. It's also helping me to systematize a lot of ad-hoc observations I've made over the years about the role that imperfect cognition plays in my life and my corner of the legal world.

Looking forward to hanging out here with you folks!

comment by bartimaeus · 2013-05-15T01:09:09.295Z · LW(p) · GW(p)

I've been lurking for almost a year; I'm a 25 year old mechanical engineer living in Montreal.

Like several people I've seen on the welcome thread, I already had figured out the general outline of reductionism before I found LW. A friend had been telling me about it for a while, but I only really started paying attention when I found it independently while reading up on transhumanism (I was also a transhumanist before finding it here). Reading the sequences did a few things for me:

  • It filled in the gaps in my world-model (and fleshed out my transhumanist ideas much more thoroughly, among many other things)
  • It showed me that my way of seeing the world is actually the "correct" way (it yields the best results for achieving your goals).

Since then, I've helped a friend of mine organize the Montreal LessWrong meetups (which are on temporary hiatus due to several members being gone for the summer, but will start again in the fall) and have begun actively trying to improve myself in a variety of ways along with the group.

I can't think of anything else in particular to say about myself...I like what I've seen of the community here and think I can learn a lot from everyone here and maybe contribute something worthwhile every now and again.

There's a lot of great information on Less Wrong, but some of it is hard to find. Are there any efforts for organizing the information here in progress? If so, can anyone let me know where?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-15T03:38:19.074Z · LW(p) · GW(p)

Are you aware of the LessWrong wiki?

Replies from: bartimaeus
comment by bartimaeus · 2013-05-15T14:01:27.163Z · LW(p) · GW(p)

I was, but hadn't delved too deeply into it until just now. There actually is a pretty good structure there that I'll look at more closely.

comment by dellbarnes · 2013-04-24T07:38:49.898Z · LW(p) · GW(p)

I have an interest in gaming management and practical probabilities. I have a great interest in economics as well. I stumbled onto this site and the "Drawing 2 aces" post. I struggled with it for about a week, and then wrote a few things. The thread is old, but I look forward to any helpful responses.

comment by EHeller · 2013-04-04T18:26:20.110Z · LW(p) · GW(p)

According to the SE.

That fails to answer the question - the Schroedinger equation isn't Lorentz invariant (it's not even fully Galilean invariant), so it can't tell you much about spacetime.

You can't just replace Schroedinger with Dirac or Klein-Gordon without leading inevitably to a field theory, which opens up new cans of worms.
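A standard way to see the invariance point (textbook material, spelled out here for readers following along): the free Schroedinger equation is first order in time but second order in space,

$$i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi,$$

while Lorentz boosts mix $t$ and $\mathbf{x}$, so an equation that treats them to different orders cannot keep its form under them. The relativistic replacements treat time and space symmetrically, e.g. the Klein-Gordon equation

$$\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2}\right)\phi = 0,$$

which, as noted above, pushes you toward a field theory.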

comment by Shmi (shminux) · 2013-04-04T18:20:12.933Z · LW(p) · GW(p)

Most formulations of MWI only require a "for all practical purposes" splitting. Like thermodynamic irreversibility.

A mental picture of thermodynamic irreversibility as a directed tree is indeed an appealing one. It becomes less appealing once your tree does not have any well-defined vertices or edges due to the issues I have outlined.

According to the SE.

The SE is non-relativistic, so it has absolutely nothing to say about propagation in spacetime. It does not even describe emission or absorption, an essential part of decoherence. You have to go full monty QFT to talk about signal propagation, but no one talks about MWI in the context of QFT, as far as I know.

Merg[e]able states are not split worlds.

In MWI [eigen]states correspond to worlds, so I don't know what it means. I also don't know what you mean by mergeable states.

Do different worlds share the same spacetime and for how long?

Presumably.

This implies gravitational interaction between non-interacting worlds, so do they interact or don't they?

See Penrose on MW.

Feel free to quote... Just not his quantum consciousness speculations.

comment by Algernoq · 2014-05-23T03:55:56.786Z · LW(p) · GW(p)

Hello there! I really enjoyed HPMOR, because it expanded on some of my thoughts and made me feel less alone. I joined now to post a realization about Harry's (and my) personality. See my 1st post.

comment by zoltanistvan · 2014-02-08T20:13:49.639Z · LW(p) · GW(p)

Hi, I'm reposting my introduction here from 2 days ago, as it was moved for some reason, perhaps accidentally. Anyway, hello, my name is Zoltan Istvan. I'm a transhumanist, futurist, journalist, and the author of the philosophical novel "The Transhumanist Wager." I've been checking out this site for some time, but decided to create an account a few days ago to become closer to the community. I thought I'd start by posting an essay I recently wrote, which sums up some of my ideas. Feel free to share it if you like, and I look forward to interacting here. Cheers.

"When Does Hindering Life Extension Science Become a Crime—or even Genocide?"

Every human being has both a minimum and a maximum amount of life hours left to live. If you add together the possible maximum life hours of every living person on the planet, you arrive at a special number: the optimum amount of time for our species to evolve, find happiness, and become the most that it can be. Many reasonable people feel we should attempt to achieve this maximum number of life hours for humankind. After all, very few people actually wish to prematurely die or wish for their fellow humans' premature deaths.

In a free and functioning democratic society, it's the duty of our leaders and government to implement laws and social strategies to maximize these life hours that we want to safeguard. Regardless of ideological, political, religious, or cultural beliefs, we expect our leaders and government to protect our lives and ensure the maximum length of our lifespans. Any other behavior cuts short the time human beings have left to live. Anything else becomes a crime of prematurely ending human lives. Anything else fits the common legal term we have for that type of reprehensible behavior: criminal manslaughter.

In 2001, former President George W. Bush restricted federal funding for stem cell research, one of the most promising fields of medicine in the 21st Century. Stem cells can be used to help fight disease and, therefore, can lengthen lives. Bush restricted the funding because his conservative religious beliefs—some stem cells came from aborted fetuses—conflicted with his fiduciary duty of helping millions of ailing, disease-stricken human beings. Much medical research in the United States relies heavily on government funding and the legal right to do the research. Ultimately, when a disapproving President limits public resources for a specific field of science, the research in that field slows down dramatically—even if that research would obviously lengthen and improve the lives of millions.

It's not just politicians who are prematurely ending our lives with what can be called "pro-death" policies and ideologies. In 2009, on a trip to Africa, Pope Benedict XVI told journalists that the epidemic of AIDS would be worsened by encouraging people to use condoms. More than 25 million people have died from AIDS since the first cases began being reported in the news in the early 1980s. In numerous studies, condoms have been shown to help stop the spread of HIV, the virus that causes AIDS. This makes condoms one of the simplest and most affordable life extension tools on the planet. Unfathomably, the billion-person strong Catholic Church actively supports the idea that condom usage is sinful, despite the fact that such a malicious policy has helped sicken and kill a staggering number of innocent people.

Regrettably, in 2014, America continues to be permeated with an anti-life extension culture. Genetic engineering experiments in humans often have to pass numerous red-tape-laden government regulatory bodies in order to conduct any tests at all, especially at publicly funded universities and research centers. Additionally, many states still ban human reproductive cloning, which could one day play a critical part in extending human life. The current US administration is also culpable. The White House is simply not doing enough to extend American lifespans. The US Government spends just 2% of the national budget on science and medical research, while its defense budget is over 20%, according to a 2011 US Office of Management and Budget chart. Does President Obama not care about this fact, or is he unaware that not actively funding and supporting life extension research indeed shortens lives?

In my philosophical novel The Transhumanist Wager, there is a scene which takes place outside of a California courthouse where transhumanist activists are holding up a banner. The words inscribed on the banner sum up some eye-opening data: "By not actively funding life extension research, the amount of life hours the United States Government is stealing from its citizens is thousands of times more than all the American life hours lost in the Twin Towers tragedy, the AIDS epidemic, and the Vietnam War combined. Demand that your government federally fund transhuman research, nullify anti-science laws, and promote a life extension culture. The average human body can be made to live healthily and productively beyond age 150."

Some longevity experts think that with a small amount of funding—$50 billion—targeted specifically towards life extension research and ending human mortality, average human lifespans could be increased by 25-50 years in about a decade's time. The world's net worth is over $200 trillion, so the species can easily spare a fraction of its wealth to gain some of the most valuable commodities humans have: health and time.

Unfortunately, our species has already lost a massive amount of life hours; billions of lives have been unnecessarily cut short in the last 50 years because of widespread anti-science attitudes and policies. Even in the modern 21st Century, our evolutionary development continues to be significantly hampered by world leaders and governments who believe in non-empirical, faith-driven religious doctrines—most of which require the worship of deities whose teachings totally negate the need for radical life extension science. Virtually every major leader on the planet believes their "God" will give them an afterlife in a heavenly paradise, so living longer on planet Earth is just not that important.

Back in the real world, 150,000 people died yesterday. Another 150,000 will cease to exist today, and the same number will disappear tomorrow. A good way to begin reversing this widespread deathist attitude would be for investigative government and non-government commissions to examine whether public fiduciary duty requires acting in the best interest of people's health and longevity. Furthermore, investigative commissions should be set up to examine whether former and current top politicians and religious leaders are guilty of shortening people's lives for their own selfish beliefs and ideologies. Organizations and other global leaders that have done the same should be scrutinized and investigated too. And if fault or crimes against humanity are found, justice should be administered. After all, it's possible that the Catholic Church's stance on condoms will be responsible for more deaths in Africa than the Holocaust was responsible for in Europe. Over one million AIDS victims died in Africa last year alone. Catholicism is growing quickly in Africa, and there will soon be nearly 200 million Catholics on the continent. Obviously, the definition of genocide needs to be reconsidered by the public.

As a civilization of advanced beings who desire to live longer, better, and more successfully, it is our responsibility to put government, religious institutions, big business, and other entities that endorse pro-death policies on notice. Society should stand ready to prosecute anyone that deliberately promotes agendas and actions that prematurely end people's useful lives. Stifling or hindering life extension science, education, and practices needs to be recognized as a legitimate crime.

Replies from: hg00, Kawoomba
comment by hg00 · 2014-02-18T01:57:21.384Z · LW(p) · GW(p)

I think I'm in favor of life extension research, but I wonder if your position is a little extreme for people who are not already transhumanists. See this blog post. Edit: though, moralizing sometimes works.

comment by Kawoomba · 2014-02-08T20:23:31.260Z · LW(p) · GW(p)

Welcome "onboard", good to have you! You've led an interesting life, though it did take you long enough to sail to our shores.

Replies from: zoltanistvan
comment by zoltanistvan · 2014-02-09T23:20:16.032Z · LW(p) · GW(p)

Thanks Kawoomba! I appreciate that. Cheers.

comment by David_Chapman · 2013-11-23T22:57:56.933Z · LW(p) · GW(p)

Hi, I have a site tech question. (Sorry if this is the wrong place to post it; I couldn't find any other.)

I can't find a way to get email notifications of comment replies (i.e. when my inbox icon goes red). If there is one, how do I turn it on?

If there isn't one, is that a deliberate design feature, or a limitation of the software, or...?

Thanks (and thanks especially to whoever does the system maintenance here—it must be a big job.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-23T23:42:00.123Z · LW(p) · GW(p)

There's no way I know of to get email notifications, and I've looked enough that I'm pretty confident one doesn't exist.
No idea if it's a deliberate choice or a software limitation.

comment by Andrew_93 · 2013-10-26T18:25:13.528Z · LW(p) · GW(p)

Hi there, I am Andrew, living in Hungary and studying (in a thoroughly lazy way) to become an IT physicist some day. I've just recently discovered this site, and so far I can't really believe what I'm seeing. I had already been thinking about a website whose main purpose would basically be making its users wiser and/or more rational; my main question below relates to that, and it would be great if you could answer it. Also, excuse my English; it's not my native language.

I believe rationality can be expressed as the set of "right" algorithms in a given context. Whether an algorithm is "right" in this sense depends on the goal which the context defines.

My question is: are the majority of people here generally conscious of the significance of finding the (insert a fancy word for "global, important, and unique") goal, or as I shall put it, "pure wisdom"? Or do they, or you, just feel the need to lay down the healthy plain soil for building the "tower of wisdom", and care less about the actual adventure of building it?

My main point is that even though our best tool for gaining wisdom is rational thinking (and a little extra), how we react and how far we go down this road as rational beings is a function of each of our unique perspectives on life. Are there individuals here whose (main) point in life is to possess the right perspective on life?

For example, if we accepted all the sciences and set aside our big hopes, like the hope for an afterlife, we might come to the conclusion that any kind of need other than the primal ones that evolved in our ancestors over millions of years is flawed or delusional. Not taking into account the alternative of nihilism (its reflection onto humans), or physicalism as its sort of parallel philosophy, is neither irrational nor rational; but I believe it is unwise, even though the philosophy itself doesn't bring or promise much material profit.

Here, my goals in the first place would be (I mean: will be) to observe, learn, and then adjust my knowledge and beliefs, since I'm inevitably going to bump into other people's belief systems which are built on rational (in the broadest sense) and healthy ground. Looking forward to it. I'm also looking forward to getting to know other people's struggles in achieving similar goals, and their personalities. Have a nice day :)

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2013-10-29T09:14:09.434Z · LW(p) · GW(p)

Are there individuals here whose (main) point in life is to possess the right perspective of life?

There are individuals here whose main point in life is to ensure that the first superhuman artificial intelligence possesses the right perspective of life. Is that close enough? :-)

One thing that distinguishes LW rationalism from other historic rationalist movements is a strong interest in transhumanism and the singularity. Historically, wisdom has usually been about accepting the limited and disappointing nature of life, whether your attitude is stoic, epicurean, or bodhisattvic. But the cultural DNA of LW includes nanotechnology, space travel, physical immortality, mind uploading, and computer-brains the size of whole solar systems. There is a strong tendency to think that rationality consists of remaking nature in the image of your goals, rather than vice versa, and that the struggle is to determine which values will shape the universe. This is a level of Promethean ambition more common in apocalyptic movements than in rationalist movements.

This aspect of LW comes and goes in prominence.

comment by ShiraKarasu · 2013-08-19T18:58:53.391Z · LW(p) · GW(p)

Hello then.

I am a political science and international development undergrad student, residing mainly in Vienna, Austria. The story of how I came here is probably a rather common one: it started on TvTropes, where I am an on-off forum contributor and editor, and where I first heard of Harry Potter and the Methods of Rationality. After reading it, I decided to look further into the rationalist community, partly because of my interest in philosophy, ethics, politics, and debating, but also hoping to find novel, intelligent, and helpful approaches to several key questions I have been struggling with for a while.

I hope to be able to contribute soon in an efficient and constructive way - even though I have a lot of catching up to do. There's probably going to be a bit of an archive panic moment, what with having the sequences to finish. Lots to learn, and eager to do so. See you around!

Replies from: Kawoomba
comment by Kawoomba · 2013-08-19T19:07:44.412Z · LW(p) · GW(p)

several key questions I have been struggling with for a while

Which are those?

comment by idea21 · 2013-07-17T08:11:51.019Z · LW(p) · GW(p)

Hi, Less Wrong.

I am idea21, I am from Spain, and I apologize for my defective English.

I learned of this forum thanks to the kindness of Mr. Peter Singer: after I asked him whether he knew of anything similar, he recommended that I present my own idea about altruistic cultural development here. Apparently there is nothing similar being discussed anywhere, which was very disappointing to me. But I still feel that "it" makes sense, at least from a logical point of view.

I will post here some excerpts from the message I wrote to Peter Singer; I hope any suggestions or comments of yours will be enlightening.

"Cultural changes about ethics have happened very slowly across history. According to some people they are motivated by economic issues (land´s property, trading, industrial development…) or political ones, also connected to economy. But although I read what Norbert Elias wrote about, it disturbs me the idea that the real change happens first in the people´s minds, influencing then to economics and politics, and not the other way around.

Primitive humans started to create art in the Paleolithic, before starting agriculture in the Neolithic. They decided to create art probably for the same reasons they decided to try to settle down: social needs, and sharing emotional and intellectual activity in bigger groups. Agriculture was the economic answer to the practical problem of how to afford a sedentary way of life.

Norbert Elias (and later Steven Pinker) explains that economic and political necessity urged authorities in the Middle Ages to try to promote values of cooperation and less violent human relationships: the idea of "civilité", gentlemanliness, new rules of behavior advancing toward modern humanism. But it seems to me that Elias and Pinker forget that rules to control individual aggression were created before the date they give (the 13th century, when the European royal courts promoted the new gentle habits). The real origin lies in monasticism: Saint Benedict's Rule dates from the 6th century, and monasticism did not start with the fall of the Roman Empire either. In fact, it did not even start with Christianity; Buddhism started it.

All this reminded me of what Karen Armstrong wrote about "compassionate religions" and the "Axial Age". So the thing could work this way: first, intellectual changes happened (art, communal life, ethics), and then new economic phenomena came, developing social, cultural, and ideological forms; second, as social life multiplied human relationships, a new adaptation of individual behavior was demanded in order to control aggression.

It seems that monasticism was the answer to the need to develop new ways of controlling human behavior for the benefit of the outside society, in the same way that animals are tamed to be used by humans. Monasticism is, basically, a "high-performance center" for behavior, producing "new men" better able to control violent behavior and teaching these new discoveries to the people outside.

According to some current psychologists, like Simon Baron-Cohen, there are many people bearing traits of "super-empathy", the opposite counterpart to psychopaths. But unlike psychopaths, who can enjoy sub-cultural environments fitted to them (the underworld of criminality), there is today no particular sub-cultural environment fitted for people who are especially able to develop self-control of aggression and anti-aggressive, affectionate, and altruistic behavior. In monasticism, however, these "super-empathic" people were especially well placed to develop patterns of aggression self-control: that personal psychological trait proved to be adaptive.

My idea (I hope not only mine…) is that monasticism should be re-invented.

A new monasticism of the 21st century could be attractive to many young people, providing them with emotional, intellectual, and affectionate experiences that they could probably find nowhere else. It must be kept in mind that the old monasticism existed because, to some extent, it fulfilled these kinds of social needs for many people, particularly the young. Nobody commits to the hard search for a better future world unless they expect to get, in the process, some kind of psychological reward in the present.

A monasticism of the 21st century would, of course, be very different from that of the 17th century. It should be rational, atheist, and non-authoritarian, emphasizing affectionate and cooperative behavior while secluded from mainstream society. That could do far more against poverty than all the current NGOs and every current humanitarian trend.

Human behavior is the "raw material" of humanitarianism. Not only could such communities influence mainstream society by demonstrating that a fully anti-aggressive way of life is possible and emotionally rewarding, but the economic activity of culturally organized "super-empathic" people would also be totally focused on altruistic work. Using modern technology, extremely cooperative organization, and a concentration of work resources (as in a "war economy"), the results should be very good.

Remember that in 17th-century Spain, some 2% of the population was secluded from "civil life" as monks, nuns, or priests (and remember also the very committed communist activists of the first half of the 20th century). Can you imagine what 2% of this planet's population could do with our technology, if they rationally chose to dedicate their lives solely to easing human suffering, only in exchange for emotional, affectionate, and intellectual rewards? Psychopaths are between 2 and 4% of the population: how many "super-empathic" people might exist? It would be worth trying to get them organized, culturally evolving for the whole world's benefit.

Don't underestimate young people's idealism. The problem today is that they have no alternative that would let them start creating a better world outside the limitations of our cultural, social, and political mainstream."

This idea could be developed much further, but I hope you will understand that it deals with the creation of a last religion, rational and of course atheist, in order to allow a further enhancement of the human capacity for mutual cooperation.

As I mean "religion", I mean the necessity of developing an own system of cultural symbols and understandable patterns of social behaviour, which could not be the same as those of the current mainstream society (which is competitive, non-idealist and still irrational). As highly cooperative society could be based only on extreme trust and mutual altruism, it could be a bit similar to some traditions of the old compassionate religions, but now detached from any irrationality, any tradition and based on rational knowledge about human behaviour.

Thank you very much for your attention.

comment by [deleted] · 2013-05-13T18:56:42.687Z · LW(p) · GW(p)

Hello, Less Wrong world. (Hi, ibidem.)

I'm pretty new here. I heard about this site a few months ago and now I've read a few sequences, many posts, and all of HP:MoR.

About a week ago I created an account and introduced myself on the Open Thread along with a difficult question. Some people answered my question helpfully and honestly, but most of them just wanted to argue. The discussion, which now includes over two hundred comments, was very interesting, but at the end it appeared we just disagreed about a lot of things.

It began to be clear that I don't fully accept some important tenets of the thinking on this site—I warned I might fundamentally disagree—but a few community members became upset and decided to make me feel unwelcome on the site. My Karma dropped from 6 (+13, -7) to -25 in just a couple hours, and someone actually came out and told me I'd better leave the site for good. (Don't let this person's status influence your opinion of the appropriateness of such a comment, in either direction.)

Don't worry, I'm not offended. I knew there might be a bit of backlash (though one can always hope not, because there doesn't have to be) and I'm certainly not going to be scared away by one openly hostile user.

Now, before everyone reads the comments and takes sides because of the nature of the issue, I'd like to think about how and why this all happened. I have several different ways of thinking about it ("hypotheses"):

  1. The easy justification for those opposing me is to blame my discourse: my opinions are not a problem as long as I present them reasonably. However, I have consistently been "incoherent" etc. and that's why I got downvoted. Never mind that I managed to keep up hundreds of comments' worth of intelligent discussion in the meantime.

  2. The "contrarian" hypothesis: I am a troll. I never had anything helpful or constructive to say, and in fact everyone who participated in my discussion (e.g. shminux, TheOtherDave, Qiaochu_Yuan) ought to be downvoted for engaging with me.

  3. The "enforcer" hypothesis: I came in here as a newbie, unaware that actually substantive disagreement is highly discouraged. The experienced community members were just trying to tell me that, and decided that being militant and aggressive would be the best way to do so.

  4. The "militant atheist" hypothesis: my opinions are mostly fine, but I managed to really touch a nerve with a few people, who started unnecessarily attacking me (calling me irrational) and making the entire LW community look unreasonable and intolerant.

  5. The "martyr" hypothesis: The LW community as a whole is not open to alternate ways of thinking, and can't even say so honestly. They should have been nicer to me.

What do you think? Which of these are most accurate? Other explanations?

Here is a link to my original comment.

These are the most honest and helpful responses I received,

and this is the most hostile one.

My general impression has been—trying not to offend anyone—that the thinking here is sometimes pretty rigid.

I have found that there is a general consensus here that belief in God (and even a possibility that there could be a God) is fundamentally incompatible with fully rational thinking. (Though people have been reluctant to admit it; personally, I think that's unhealthy and reflects poorly on the site.)

But in any case, I've enjoyed the discussion and I'd guess that some other people have too. I'm definitely not going to leave as some have tried to coerce me to do; I like the way of thinking on this site, and it's the best place I know of to find smart people who are willing to talk about things like this. I'll keep reading at the very least.

I'm still undecided as to what I think generally of the people here.

Yours truly,

ibid.

(Oh, and I'm a Mormon. And intend to remain that way in the near future.)

Replies from: Jack, Desrtopa, Estarlio, Nisan, Bugmaster, Vladimir_Nesov, CCC, Risto_Saarelma, John_Maxwell_IV
comment by Jack · 2013-05-13T21:41:36.269Z · LW(p) · GW(p)

I think probably none of those hypotheses are correct. I think you mean well and I think your comments have been stylistically fine. I also obviously don't think people here are opposed to substantive disagreement, close-minded, or intolerant (or else I wouldn't have stuck around this long). What you've encountered is a galaxy-sized chasm of inferential distance. I'm sure you've had a conversation before with someone who seemed to think you knew much less about the subject than you actually did. You disagree with him and try to demonstrate your familiarity with the issue, but he is so far behind he doesn't even realize that you know more than he does.

I realize it is impossible for this not to sound smug and arrogant to you: but that is how you come off to us. Really, your model of us, that we have not heard good, non-strawman arguments for the existence of God is very far off. There may be users who wouldn't be familiar with your best argument but the people here most familiar with the existence of God debate absolutely would. And they could almost certainly fix whatever argument you provided and rebut that (which is approximately what I did in my previous reply to you).

To the extent that theism is ever taken under consideration here it is only in the context of the rationalist and materialist paradigm that is dominant here. E.g. we might talk about the possibility of our universe being a simulation created by an evolved superintelligence and the extent to which that possibility mirrors theism in its implications. Or (as I take it shminux believes) about how atheism is, like religion, just a special case of privileging the hypothesis. But you don't appear to have spent enough time here to have added these concepts to your toolbox, and outside that framework the theism debate is old hat to nearly all of us. It's not that we're close-minded: it's that we think the question is about as settled as it can be.

Moreover, while this is a place that discusses many things, we don't enjoy retreading the basics constantly. So while a number of us politely responded to answer your question, an extended conversation about theism or our ability to consider theism is not really welcome. This isn't because we are unwilling to consider it: it's because we have considered it and now want to discuss newer ideas.

You don't have to agree with this perspective. Maybe you feel like you have evidence and concepts that we're totally unfamiliar with. But bracket those issues for now. It is nothing that will be resolvable until you've gotten to know us better and figured out how you might translate those concepts to us. So if you want to stick around here you're welcome to. Learn more about our perspective, become familiar with the concepts we spend time on, and feel free to discuss narrower topics that come up. But people here aren't generally interested in extended debates about God with newcomers. That's why you've been downvoted. Not because we're against dissent, just because we're not here to do that. There are lots of places on the internet dedicated to debating theism.

Don't mind wedrifid's tone. That's the way he is with everyone. But take his actual point seriously. Don't preach your way of thinking until you've become a lot more familiar with our way of thinking. And a new handle at some point wouldn't be a terrible idea.

Replies from: shminux, None
comment by Shmi (shminux) · 2013-05-13T21:59:19.127Z · LW(p) · GW(p)

Well put. I agree with all of this, except maybe for the need for a new nick, as people who appear to learn from their experience ("update on evidence", in the awkward local parlance) are likely to be upvoted more generously.

Replies from: Intrism
comment by Intrism · 2013-05-13T22:20:11.812Z · LW(p) · GW(p)

I'm sure Ibidem could get more upvotes, perhaps even a great number of them, but negative one-hundred and twenty-eight is an awfully steep karma hill to climb.

Replies from: Desrtopa
comment by Desrtopa · 2013-05-13T22:30:59.065Z · LW(p) · GW(p)

Chaosmosis has a few hundred karma now after dropping at least that deep, being accused of being a troll, and facing a number of suggestions that he leave. It's certainly not un-doable.

comment by [deleted] · 2013-05-14T19:36:39.242Z · LW(p) · GW(p)

Good, thank you.

However, it's important to note that I did not come in here expressly arguing my religion. I recognize how bad an idea that would be, and you've explained it well. So of course, anyone aiming to convert this lot of atheists is certainly going to fail. But that was never my goal, and in fact I never argued in favor of my particular God.

Look at my very first comment—it was not "this is why you are wrong," it was "do you guys have any ideas how you could be wrong?" and the response was "no, we're definitely not wrong." My first comment presented a question, albeit a difficult one.

I mentioned up front that I was religious, though, as I don't think trying to hide it would have helped anything. The community was therefore eager to argue with me, and I was happy to argue for some time. At the end, though, it was clear we simply disagreed and I said several times I wasn't interested in a full-blown debate about religion.

To summarize, you just gave a very good explanation of why I was mistaken to come on here arguing for religion. But I didn't come on here arguing for religion.

Really, your model of us, that we have not heard good, non-strawman arguments for the existence of God is very far off.

I'll tell you what made me think that: I asked the community if they had any good, non-strawman arguments for God, and the overwhelming response was "Nah, there aren't any."

Replies from: Intrism, Bugmaster
comment by Intrism · 2013-05-14T22:41:56.596Z · LW(p) · GW(p)

I'll tell you what made me think that: I asked the community if they had any good, non-strawman arguments for God, and the overwhelming response was "Nah, there aren't any."

I'm not sure if anyone's brought this up yet, but one of the site's best-known contributors once ran a site dedicated to these sorts of things, though it does of course have a very atheist POV. That said, even there the arguments aren't amazingly convincing (which you can guess by the fact that lukeprog hasn't reconverted yet) though it does acknowledge that the other side has some very good debaters.

I'm not sure why you think it's indicative of a problem with us that we haven't found good arguments for the existence of God. It's not a law that there be good arguments in favor of false propositions. I suppose you could make the naïve argument that if the position were as indefensible as it seems no one would believe in it, but unfortunately not many people judge arguments very rationally.

comment by Bugmaster · 2013-05-14T19:46:06.835Z · LW(p) · GW(p)

I asked the community if they had any good, non-strawman arguments for God, and the overwhelming response was "Nah, there aren't any."

Well, if there were any that we knew of, then no one here would remain an atheist for very long. We'd all convert to whichever religion made the most sense, given the strength of its arguments. IMO you should have anticipated such a response, given that atheists do, in fact, still exist on this site.

So far, we have heard many terrible arguments for religion (we're talking logical fallacies galore), and few if any good ones. Thus, we are predisposed to thinking that the next argument for religion is going to be terrible, as well, based on past experience.

Replies from: None
comment by [deleted] · 2013-05-14T20:17:47.092Z · LW(p) · GW(p)

Well, if there were any that we knew of, then no one here would remain an atheist for very long.

That's not true. The optimal situation is that both sides have strong arguments, but atheism's arguments are stronger. A rationalist ought to have heard arguments and evidence that challenged his (dis)beliefs, and have come out stronger because of it.

IMO you should have anticipated such a response, given that atheists do, in fact, still exist on this site.

Yes, but what I expected was...um...atheists who were better than most, who had arrived at atheism through two-sided discourse.

Replies from: SaidAchmiz, hairyfigment, SaidAchmiz, SaidAchmiz, Bugmaster
comment by Said Achmiz (SaidAchmiz) · 2013-05-14T20:53:32.912Z · LW(p) · GW(p)

A rationalist ought to have heard arguments and evidence that challenged his (dis)beliefs, and have come out stronger because of it.

A rationalist

You keep using that word...

In Avoiding Your Belief's Real Weak Points, Eliezer says:

There is a tradition of inquiry. But you only attack targets for purposes of defending them. You only attack targets you know you can defend.

In Modern Orthodox Judaism I have not heard much emphasis of the virtues of blind faith. You're allowed to doubt. You're just not allowed to successfully doubt.

The point being that this is exactly not how rationality is supposed to work. If you hear a convincing argument, you should update your belief in the direction of the belief the argument argues for. If you update in the other direction ("come out stronger"), then either it's not a convincing argument (by definition), or you're doing it wrong.

Replies from: None
comment by [deleted] · 2013-05-14T21:20:36.808Z · LW(p) · GW(p)

If you hear a convincing argument, you should update your belief in the direction of the belief the argument argues for. If you update in the other direction ("come out stronger"), then either it's not a convincing argument (by definition), or you're doing it wrong.

I didn't mean that your initial beliefs should come out stronger. I meant that, having updated on good arguments and incorporated them, your beliefs will be more complete, better thought out, and more sustainable for the future.

Replies from: SaidAchmiz, Richard_Kennaway
comment by Said Achmiz (SaidAchmiz) · 2013-05-14T21:32:03.984Z · LW(p) · GW(p)

Well, one example of such a thing might be the Simulation Argument, which I believe has been mentioned to you. It's an argument for the possible existence of something which might be called a "god" or "gods" (though that's usually inadvisable due to semantic baggage). Our view of what exists and what could exist certainly incorporates an understanding of the possibility that we're living in a simulation.

Theistic arguments per se, however, are generally bad.

Replies from: drnickbone, Bugmaster, None
comment by drnickbone · 2013-05-15T10:18:18.202Z · LW(p) · GW(p)

The Simulation Argument is certainly quite an interesting one, since it was invented by an atheist (Nick Bostrom), and as far as I can tell is only taken remotely seriously by other atheists. Many of them (including me) think it is a rather better argument for some sort of "god" or "gods" than anything theists themselves ever came up with.

For other interesting quasi-theistic arguments invented by atheists, you might want to consider Tegmark's Level 4 multiverse. Since any "god" which is logically possible can be represented by some sort of mathematical structure, it exists somewhere within the Level 4 multiverse. David Lewis' modal realism has a similar feature.

All these arguments tend to produce massively polytheistic rather than monotheistic conclusions (and also they imply that Santa, the Tooth Fairy, Harry Potter and Captain Kirk exist somewhere or in some simulation or other).

If you want a fun monotheistic argument invented by atheists, try this one, which was published by Robert Meyer and attributed to Hilary Putnam. It's a clever use of the Axiom of Choice and Zorn's Lemma.

Replies from: CCC, SaidAchmiz
comment by CCC · 2013-05-15T10:48:33.199Z · LW(p) · GW(p)

If you want a fun monotheistic argument invented by atheists, try this one, which was published by Robert Meyer and attributed to Hilary Putnam. It's a clever use of the Axiom of Choice and Zorn's Lemma.

Isn't that just the First Cause argument, wrapped up in set-theory language?

Replies from: drnickbone
comment by drnickbone · 2013-05-15T11:12:36.572Z · LW(p) · GW(p)

Well yes, but it "addresses" one of the really basic responses to the First Cause argument, that there might - for all we know - be an infinite chain of causes of causes, extending infinitely far into the past. One of the premises of Meyer's argument is that any such chain itself has a cause (i.e. something supporting the whole chain). That cause might in turn have a cause and so on. However, by an application of Zorn's Lemma you can show that there must be an uncaused cause somewhere in the system.

If you don't assume the Axiom of Choice you don't have Zorn's Lemma, so the argument doesn't work. Conversely, if God exists, then - being omnipotent - he can pick one element from every non-empty set in any collection of sets, which is the Axiom of Choice. So God is logically equivalent to the Axiom of Choice.

All totally tongue-in-cheek and rather fun.
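To make the structure explicit, here is a rough skeleton of the argument as I read it (my paraphrase, with the premises simplified; not Meyer's own wording):

```latex
% Skeleton of the uncaused-cause argument (paraphrased and simplified).
\begin{itemize}
  \item Let $E$ be the set of all entities, with $x \le y$ meaning
        ``$x$ is a (possibly indirect) cause of $y$''; assume $\le$ is a
        partial order on $E$.
  \item Premise: every chain $\cdots \le x_2 \le x_1 \le x_0$ in $E$, even an
        infinite descending one, has a lower bound in $E$ (something that is a
        cause of every member of the chain).
  \item Zorn's Lemma (an equivalent of the Axiom of Choice) then yields a
        minimal element of $E$: an entity with no cause other than itself,
        i.e.\ an uncaused cause.
\end{itemize}
```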

Replies from: CCC, Eugine_Nier
comment by CCC · 2013-05-16T09:22:38.418Z · LW(p) · GW(p)

He also defines away the causal-loop, or time travel, response, leaving only the uncaused cause; and then arbitrarily defines any uncaused cause as God. It looks like a good argument on the surface, but when I look at it carefully it's not so great; it's basically defining away any possible disagreement.

I should also mention that it's not really a monotheistic argument. It only argues for the existence of at least one God. It doesn't argue for the non-existence of fifty million more.

It's reasonably fun as a tongue-in-cheek argument, but I wouldn't want to use it seriously.

Replies from: drnickbone
comment by drnickbone · 2013-05-16T11:15:32.353Z · LW(p) · GW(p)

He also defines away the causal-loop, or time travel, response, leaving only the uncaused cause

Well I think premise 2 just assumes there aren't any causal loops, since if there were, the constructed relation <= would not be a partial order (let alone an inductive order).

There are probably ways of patching that if you want to explicitly consider loops. Consider that if A causes B, B causes C, and C causes A, then there is some infinite sequence whereby every entry in the sequence is caused by the next entry in the sequence. So this looks a bit like an infinite descending chain.

The arguer could then tweak premise 2 so it states that any such generalised infinite chain (one allowing repeated elements) still has a lower bound (some strict cause outside the whole chain) and apply an adapted version of Zorn's Lemma to still get an uncaused cause in the whole system.

The intuition being used there is still that any infinite sequence of causes of causes must have some explanation for why the whole sequence exists at all. For instance, if there is an infinite sequence of horses, each of which arises from parent horses, we still want an explanation for why there are any horses at all (and not unicorns, say). Even if a pregnant horse is sent back in time to become the ancestor of all horses, then again we still want an explanation for why there are any horses at all.

The weakness of the intuition is that the "explanation" in such a weird case might well not be a causal one, so maybe there is no further cause outside the chain, or loop. (But even then, there is a patch: the arguer could claim that the whole chain or loop should count as a combined "entity" with no cause, i.e. there is still some sort of uncaused cause in the system).

I agree with you that the really weak part is just defining the uncaused cause to be "God". Apart from confusing people, why do that?

And thanks for spotting the non-uniqueness by the way... the argument as it stands does allow for multiple uncaused causes. To patch that, the arguer could perhaps define a super-entity which contains all these uncaused causes as its "parts". Or else add an additional "common cause" premise, whereby for any two entities a, b, either a is a cause of b, or b is a cause of a, or there is some c which is a cause of both of them.

Replies from: CCC
comment by CCC · 2013-05-17T08:50:12.619Z · LW(p) · GW(p)

The arguer could then tweak premise 2 so it states that any such generalised infinite chain (one allowing repeated elements) still has a lower bound (some strict cause outside the whole chain) and apply an adapted version of Zorn's Lemma to still get an uncaused cause in the whole system.

That's just assuming the result you want. I don't think it makes a strong argument.

(But even then, there is a patch: the arguer could claim that the whole chain or loop should count as a combined "entity" with no cause, ie there is still some sort of uncaused cause in the system).

Counting a loop as a combined entity, on the other hand, could be very useful. The combined-entity loop would be caused by everything that causes any element in the loop, and would cause anything that is caused by any element in the loop. Do this to all loops, and the end result will be to eliminate loops (at the cost of having a few extremely complex entities).

This seems fine as long as there are only a few, causally independent loops. However, if there are multiple loops that affect each other (e.g. something in loop A causes something in loop B, and something in loop B causes something in loop A) then this simply results in a different set of loops. These loops, of course, can also be combined into a single entity; but if the causality graph is sufficiently well connected, and if there is a large enough loop, the end result of this process might be that all entities end up folding into one giant super-entity, containing and consisting of everything that ever happens.

I have heard the theory before that the universe is a part of God, backed by a different argument.


I agree with you that the really weak part is just defining the uncaused cause to be "God". Apart from confusing people, why do that?

It honestly looks like a case of writing down the conclusion at the bottom of the page and then back-filling the reasoning. He can't justify that part, so he defines it quickly and hopes no-one pays too much attention to that line.

And thanks for spotting the non-uniqueness by the way... the argument as it stands does allow for multiple uncaused causes. To patch that

Why do you want to patch that? A quick patch looks like (again) writing the conclusion first and then filling in the reasoning afterwards.

Replies from: drnickbone
comment by drnickbone · 2013-05-17T13:13:59.205Z · LW(p) · GW(p)

OK, I think we both agree this is not at all a strong argument, that the bottom line is being written first, and then the premises are being chosen to get to that bottom line and so on. However, I still think it is fun to examine and play with the argument structure.

Basically, what we have here is a recipe:

  1. Take some intuitions.

  2. Encode them in some formal premises.

  3. Stir with some fancy set theory.

  4. Extract the desired conclusion: namely, that there is an "uncaused cause".

It's certainly interesting to see how weak you can make the ingredients (in step 1) before the recipe fails. Also, the process of then translating them into premises (step 2) looks interesting, as at least it helps decide whether the intuitions were even coherent in the first place. Finally, if the desired conclusion wasn't quite strong enough for the arguer's taste (hmm, missing that true monotheistic kick), it's fun to work out what extra ingredient should be inserted into the mix (let's put in a bit of paprika).

That's basically where I'm coming from in all this...

Replies from: CCC
comment by CCC · 2013-05-17T13:37:56.160Z · LW(p) · GW(p)

Ah... I think I get it. You want to play with intuitions, and see which premises would have to be proved in order to end up with monotheism via set theory.

I don't think it would be possible to get around the point of defining God in terms of set theory. Once you have a definition, you can see if it turns up; if God is not defined, then you don't know what you're looking for. Looked at from that point of view, the definition of God as a first cause is probably one of the better options.

Loops can still be a problem...

The arguer could then tweak premise 2 so it states that any such generalised infinite chain (one allowing repeated elements) still has a lower bound (some strict cause outside the whole chain) and apply an adapted version of Zorn's Lemma to still get an uncaused cause in the whole system.

This can still fail in the case where two loops have their external causes in each other. (I think. Or would that simply translate into an alternate set of loops? ...I think I could figure out a set of looped entities, such that each loop has at least one cause outside that loop, that has no first cause).

To patch that, the arguer could perhaps define a super-entity which contains all these uncaused causes as its "parts". Or else add an additional "common cause" premise, whereby for any two entities a, b, either a is a cause of b, or b is a cause of a, or there is some c which is a cause of both of them.

Either of those would be sufficient; though the first seems to fit more possible sets.

Replies from: drnickbone, drnickbone
comment by drnickbone · 2013-05-17T15:36:04.192Z · LW(p) · GW(p)

This can still fail in the case where two loops have their external causes in each other. (I think. Or would that simply translate into an alternate set of loops? ...I think I could figure out a set of looped entities, such that each loop has at least one cause outside that loop, that has no first cause)

I think if two loops were caused by each other, then there would be a super-loop which included all the elements from both of them, and then you could look for the cause of the super-loop. The Axiom of Choice would still be needed to show that this process stops somewhere.

Finally, I rather liked your thought that causality may be so loopy that everything is a cause of everything else. The only way to get a first cause out of that mess is to treat the entire "super-duper-loop" of all things as a single uncaused entity, and if you insist on calling that "God", you're a pantheist.

Replies from: CCC
comment by CCC · 2013-05-18T08:21:31.972Z · LW(p) · GW(p)

Let's consider loops A->B->C->A->B->C and D->E->F->D->E->F.

Let's say, further, that B is a cause of E and D is a cause of A. Then each loop has an external cause.

Then there are also a few other loops possible:

A->B->E->F->D->A->B->E->F->D (external cause: C)

A->B->E->F->D->A->B->C->A->B->E->F->D... huh. That includes all of them, in a sort of double-loop with no external cause. I guess that would be the super-loop.

Finally, I rather liked your thought that causality may be so loopy that everything is a cause of everything else. The only way to get a first cause out of that mess is to treat the entire "super-duper-loop" of all things as a single uncaused entity, and if you insist on calling that "God", you're a pantheist.

Better yet: no matter what causality looks like, you can still always combine everything into a single giant, uncaused entity. You don't need to assume away loops or infinite chains without external causes if you do that.
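One toy way to see the "merge each loop into one combined entity" move is to treat the causal structure as a directed graph and group together nodes that are mutually reachable. The graph below is just the hypothetical A..F example from this thread, and the Python sketch is purely illustrative, not anything from the original argument:

```python
# Sketch: condense causal loops into combined entities by merging nodes that
# cause each other (directly or indirectly). The example graph is the
# hypothetical A..F setup above: two three-element loops plus B->E and D->A.

causes = {
    "A": {"B"}, "B": {"C", "E"}, "C": {"A"},
    "D": {"E", "A"}, "E": {"F"}, "F": {"D"},
}

def reachable(graph, start):
    """Every node reachable from `start` by following cause -> effect edges."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

reach = {x: reachable(causes, x) for x in causes}

# x and y belong to the same combined entity iff each (indirectly) causes the other.
combined = {
    x: frozenset(y for y in causes if y in reach[x] and x in reach[y]) | {x}
    for x in causes
}

for x, entity in sorted(combined.items()):
    print(x, "is part of", sorted(entity))
# With B->E and D->A present, every node ends up in one six-element super-loop.
```

This is just the strongly-connected-components condensation of the graph: after the merge, the causal relation between the combined entities has no loops left, which is what makes the "treat the whole loop as one entity" patch work.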

Replies from: drnickbone
comment by drnickbone · 2013-05-20T16:50:32.416Z · LW(p) · GW(p)

I've been doing a bit more "stir in fancy set theory" over the weekend, and believe I have an improved recipe! This builds on the idea of treating chains and loops as a single "entity" and looking for a cause of that entity. It is a lot subtler than just throwing every entity together into one super-duper-entity.

Here are a bunch of premises that I think will do the trick:

A1. The collection of all entities is a set E, with two relations C and P on E, such that: x C y if and only if x is a cause of y; x P y if and only if x is a part of y.

A2. The set E can be well-ordered

Note: This ensures we can apply Zorn's Lemma when considering chains in E, but is not as strong as the full Axiom of Choice. If the set E is finite or countable, for instance, then A2 applies automatically.

A3. If x C y and x P z then z C y.

Informally, "anything caused by a part is caused by the whole".

Definitions: We define <= such that x <= y if and only if x = y or there are finitely many entities x1, ..., xn such that x1 = x, xn = y and xi is a cause of xi+1 for i=1.. n-1. Say that a set S is a "chain" in E iff for any x, y in S we have x <= y or y <= x. Say that such an S is an "endless chain" iff for any x in S there is some y not equal to x in S with y <= x. Say that an entity y is "uncaused" if and only if there is no z distinct from y with z <= y. Also say that x is a "proper part" of y iff x is not equal to y but x P y.

Note: These definitions ensure that <= is a pre-order on E. Note that an endless chain may be an infinite chain of distinct elements, or a causal loop.

A4. Let S be any endless chain in E. Then there is some z in E such that every x in S is a proper part of z.

Lemma 1: For any chain S in E, there is an element x of E with x <= y for every y in S.

Proof: Suppose S has an end (not endless). Then there is some x in S such that for no other y in S is y <= x. By the chain property we must have x <= y for every member y of S. Alternatively, suppose that S is endless, then by A4, there is some z in E such that every x in S is a part of z. Now consider any y in S. There is some x not equal to y in S with x <= y, so there are x = x1... xn = y with each xi C xi+1 for i=1..n-1. Further, by A3, as x C x2, we have z C x2 and hence z <= y.

Lemma 2: For any x in E, there is some y in E such that: y <= x, and for any z <= y, y <= z.

Proof: This is the version of Zorn's Lemma applied to pre-orders.

Theorem 3: For any x in E, there is some uncaused y in E such that y <= x.

Proof: Take a y as given by Lemma 2 and consider the set S = {s: s <= y}. By Lemma 2, y <= s for every member of S, and if S has more than one element, then S is an endless chain. So by A4 there is some z of which every s in S is a proper part, which implies that z is not in S. But by the proof of Lemma 1, z <= y, which implies z is in S: a contradiction. So it follows that S = {y}, which completes the proof.

I've also got some premises for aggregating multiple uncaused entities into a single entity. This gives another approach to "uniqueness". More on my next comment, if you're interested.
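As a sanity check of the trivial special case (finite E, no loops), where none of the set-theoretic machinery is actually needed, here is a toy illustration; the entities and edges below are hypothetical, chosen purely so the backwards walk terminates:

```python
# Toy finite-model check of the loop-free special case of Theorem 3: in a
# finite causal graph without loops, every entity has an uncaused cause
# "below" it, found simply by walking the causal relation backwards.
# All entities and edges here are hypothetical, for illustration only.

causes_of = {                      # entity -> set of its direct causes
    "foal":            {"mare"},
    "mare":            {"primordial_soup"},
    "accident":        {"my_car", "your_car"},
    "my_car":          {"factory"},
    "your_car":        {"factory"},
    "factory":         {"primordial_soup"},
    "primordial_soup": set(),      # uncaused
}

def an_uncaused_cause(entity):
    """Walk backwards along 'is caused by' until an uncaused entity is reached."""
    while causes_of[entity]:
        entity = min(causes_of[entity])   # pick any direct cause (min: determinism)
    return entity

for x in causes_of:
    assert an_uncaused_cause(x) == "primordial_soup"
print("every entity traces back to an uncaused cause")
```

The interesting content of A2, A4 and the Zorn's Lemma step is precisely that this backwards walk still "terminates", in the appropriate sense, when E is infinite or contains endless chains.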

Replies from: drnickbone
comment by drnickbone · 2013-05-20T20:14:42.454Z · LW(p) · GW(p)

For uniqueness, we build on the idea of all uncaused causes being part of a whole. The following premises look interesting here:

B1. If x P y and y P z then x P z; x = y if and only if x P y and y P x.

This states that P is a partial order, which is reasonable for the "part of" relation.

B2. If S is any chain of parts, such that for any x, y in S we have x P y or y P x, then there is some z in E of which all members of S are parts.

This states that E is inductively ordered by the "part of" relation.

B3. If x C z and y P z then x C y.

Informally, "a cause of the whole is a cause of any part".

B4. Suppose that y <= x and z <= x and both y, z are uncaused. Then y P z or z P y, or there is some w of which both y and z are proper parts.

Informally, two uncaused y and z can't independently conspire to cause x unless they are parts of a common entity.

Definition: Say that entities x and y are causally-connected if and only if x = y, or there are entities x=x1,..,xn=y with either xi C xi+1 or xi+1 C xi for each i=1..n-1.

B5. Any two entities in E are causally-connected.

Informally, E doesn't "come apart" into completely disconnected components, such as a bunch of isolated universes.

Theorem 4: For any x in E, there is a unique entity f(x) in E such that: f(x) is uncaused, f(x) <= x, and any other uncaused y with y <= x satisfies y P f(x).

Proof: For any x, define a subset E' = {y in E: y <= x, y is uncaused}. Consider any chain of parts S in E' with at least two elements. By B2 there is some z in E of which all members of S are parts. By B3, z must be uncaused (or else some w C z would also be a cause of all the members of S, which would require them all to be equal to w, so S would be a singleton), and by A3, z <= x. So z is also a member of E'. By application of Zorn's Lemma to E', there is a P-maximal element f in E' such that there is no other y in E' with f P y. But then, by B4, for any y in E' we must have y P f; this makes f unique.

Theorem 5: For any x, y in E, f(x) = f(y) if and only if x and y are causally-connected.

Proof: It is clear that if f(x) = f(y) then x is causally-connected to y (just build a path backwards from x to f(x) and then forward again to y). Conversely, suppose that x C y, then f(x) is uncaused and satisfies f(x) <= y so we have f(x) P f(y). This implies f(x) = f(y). By a simple induction on n we have that if x is causally-connected to y, then f(x) = f(y).

Corollary 6: There is a single entity g in E such that f(x) = g for every entity x in E.

Proof: This follows from Theorem 5 and B5.

Done!

Replies from: CCC
comment by CCC · 2013-05-21T08:47:03.990Z · LW(p) · GW(p)

(Huh. One of the ancestors to this comment - several levels up - has been downvoted enough to require a karma penalty. I wonder if there should be some statute of limitations on that; whether, say, ten levels of positive-karma posts can protect against a higher-level negative-karma post?)

A4. Let S be any endless chain in E. Then there is some z in E such that every x in S is a proper part of z.

An interesting assumption. Necessary for theorem 3, but I suspect that it'll mean that the original cause described in theorem 3 will then very probably be an entity z that is the earliest cause.

I also note that, while z consists of all the parts in the endless chain, there is no guarantee that any of the elements in the chain, even those that cause other elements in the chain, is in any way a cause of z. In fact, the way that z is defined, z may well be causeless (or, then again, z may have a cause). While I can't actually find anything technically invalid in theorem 3, or in assumption 4, I get the general feeling of wool being pulled over my eyes in some way.

When I consider B3, it becomes even more important to note that z as a whole is not necessarily caused by any element that is a proper part of z. The cause of a part may or may not be the cause of the whole.

Hmmm... B4 appears to be pretty much just shoehorning monotheism in. It seems a questionable assumption; if I decide to get into my car and drive, and you decide to get into your car and drive, and we drive into each other, then we are both causes of the resultant accident but we are not the same. (We are not causeless, either, so it's not quite a counterexample, just an explanation of why I don't think B4 is justified.) B5 is unsupported, but I can prove that all entities that I will ever observe evidence of are causally connected (i.e. they are connected to the effects on my actions of having observed them), so it will look true whether it is or not.


Though I can raise questions about your assumptions, I can't find anything wrong with your logic from then on. So congratulations; you have a very convincing argument! ...as long as you can persuade the other person to accept your assumptions, of course.

comment by drnickbone · 2013-05-17T15:10:32.879Z · LW(p) · GW(p)

Ah... I think I get it. You want to play with intuitions, and see which premises would have to be proved in order to end up with monotheism via set theory.

I don't think it would be possible to get around the point of defining God in terms of set theory.

Well now, here's a devious approach, which would probably appeal to me if I ever needed to make a career as a philosopher of religion.

Let's suppose a theist wants to "prove" that God - by his favourite definition - exists. For instance he could define a type G, whereby an entity g is of type G if and only if g is omnipotent, omniscient, perfectly good and so on, and has all those characteristics essentially and necessarily. Something like that. Then the theist finds a set of premises P, with some intuitive support, such that P => There is an uncaused cause.

And then he adds one other premise "Every entity that is not of type G has a cause" into the recipe to form a new set P'. He cranks the handle, and then P' => There is an entity of type G. Job done!

Just in case someone accuses him of "begging the question" or "assuming what he set out to prove" he then pulls out the modal trick. He just claims that it is possible that P' is true. This leads to the conclusion that "It is possible that there is an entity of type G". And then, remembering he's defined G so it includes necessary existence (if such a being is possible at all, it must exist), he can still conclude "There is a being of type G". Job done even better!
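
Schematically, the recipe looks like this (a sketch only; P, P' and the type G are as described above, and the diamond reads "possibly"):

```latex
% Schematic form of the recipe above; P, P' and G are as described in the comment.
\begin{align*}
& P \;\Rightarrow\; \exists x\,\mathrm{Uncaused}(x) \\
& P' \;=\; P \cup \{\,\forall x\,(\neg G(x) \rightarrow \mathrm{HasCause}(x))\,\} \\
& P' \;\Rightarrow\; \exists x\, G(x) \\
& \text{Modal trick:}\quad \Diamond P' \;\Rightarrow\; \Diamond\exists x\, G(x)
  \;\Rightarrow\; \exists x\, G(x) \quad\text{(since $G$ includes necessary existence)}
\end{align*}
```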

Can I have the Templeton Prize now please?

Replies from: TheOtherDave, CCC
comment by TheOtherDave · 2013-05-17T15:20:02.608Z · LW(p) · GW(p)

The modal trick reminds me of Descartes' approach... God is definitionally perfectly good, which implies existence (since something good that doesn't exist isn't as good as something good that does), therefore God exists.

ZZZZzzzzzz....

Replies from: drnickbone
comment by drnickbone · 2013-05-17T15:29:24.064Z · LW(p) · GW(p)

A closer parallel is Plantinga's "victorious ontological argument".

That one was from the late 20th century, not the 17th. All rather sad really...

comment by CCC · 2013-05-18T08:28:20.263Z · LW(p) · GW(p)

Huh. That modal trick is devious. But it doesn't work. I can assume an entity that does something easily measurable (e.g. gives Christmas presents to children worldwide), and then slap on a necessary existence clause; but that doesn't necessarily mean that I can expect Santa later this year.

I think the 'necessary existence' clause requires a better justification in order to be Templeton-worthy.

comment by Eugine_Nier · 2013-05-16T01:38:23.035Z · LW(p) · GW(p)

And here I always thought God corresponded to an inaccessible cardinal axiom.

comment by Said Achmiz (SaidAchmiz) · 2013-05-15T15:51:31.122Z · LW(p) · GW(p)

On reflection, the fact that an atheist would be able to come up with an argument for a god that's more persuasive to atheists is unsurprising, especially when you consider the fact that most religious people don't become religious via being persuaded by arguments. It's definitely still amusing, though.

I'm definitely aware of Tegmark's theory, though I admit I hadn't considered it as an argument for any kind of theism. That seems like an awfully parochial and boring application of the ultimate ensemble, although you're right that it can have that sort of application... although, if we define "supernatural" entities to mean "ontologically basic mental entities" a la Richard Carrier, would it really be the case that Tegmark's multiverse implies the existence of such? I'm not sure it does.

Meyer's argument begins with premises that are hilariously absurd. Defining entities as being able to be causes of themselves? Having "entities" even able to be "causes"? What? And all this without the slightest discussion of what kinds of things an "entity" can even be, or what it means to "exist"? No, this is nonsense.

Replies from: drnickbone
comment by drnickbone · 2013-05-15T17:16:25.464Z · LW(p) · GW(p)

Meyer's argument begins with premises that are hilariously absurd. Defining entities as being able to be causes of themselves? Having "entities" even able to be "causes"?

I think this is mostly a presentational issue. The purpose of the argument was to construct a non-strict partial order "<=" out of the causal relation, and that requires x<=x. This is just to enable the application of Zorn's Lemma.

To avoid the hilarity of things being causes of themselves, we could easily adjust the definition of <= so that "x<=y" if and only if "x=y or x is a cause of y". Or the argument could be presented using a strict partial order <, under which nothing will be a cause of itself. The argument doesn't need to analyse "entity" or "exists" since such an analysis is inessential to the premises.

And finally, please remember that the whole thing was not meant to be taken seriously; though rather amusingly, Alexander Pruss (whose site I linked to) apparently has been treating it as a serious argument. Oh dear.

comment by Bugmaster · 2013-05-14T21:34:02.080Z · LW(p) · GW(p)

FWIW, the probability I place on the Simulation Argument being true is only a little higher than the probability I place on traditional theistic gods existing. Could be just me, though.

Replies from: SaidAchmiz, shminux
comment by Said Achmiz (SaidAchmiz) · 2013-05-14T21:40:10.289Z · LW(p) · GW(p)

Well, traditional theistic gods tend to be incoherent as well as improbable. (Or one might say, improbable only to the extent that they are coherent, which is not very much.) So, I'm not sure how we'd integrate that into a probability estimate.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-14T22:00:06.215Z · LW(p) · GW(p)

Agreed; but this doesn't apply to lesser gods such as Zeus or Odin or whomever.

comment by Shmi (shminux) · 2013-05-14T21:38:01.426Z · LW(p) · GW(p)

What are the values for these probabilities and how have you estimated them?

Replies from: Bugmaster
comment by Bugmaster · 2013-05-14T21:58:54.855Z · LW(p) · GW(p)

Both of the values are somewhere around epsilon.

God-wise, I've never seen any evidence for anything remotely supernatural, and plenty of evidence for natural things. I know that throughout human history, many phenomena traditionally attributed to gods (f.ex. lightning) have later been demonstrated to occur by natural means; the reverse has never happened. These facts, combined with the internal (as well as mutual) inconsistencies inherent in most major religions, serve to drive the probability down into negligibility.

As for the Simulation Argument, once again, I've never seen any evidence of it, or any Matrix Lords, etc. Until I do, it's simply not parsimonious for me to behave as though the argument were true. However, unlike some forms of theism, the Simulation Argument is at least internally consistent. In addition, I've seen computers before and I know how they can be used to run simulations, which constitutes a small amount of circumstantial evidence toward the Argument.

EDIT: I should mention that the prior for both claims is already very low, due to their complexity.

Replies from: shminux
comment by Shmi (shminux) · 2013-05-15T20:11:03.925Z · LW(p) · GW(p)

Both of the values are somewhere around epsilon.

Epsilon is not a number, it's a cop-out. Unless you put a number you are reasonably confident in on your prior, how would you update it in light of potential new evidence?

Replies from: Bugmaster
comment by Bugmaster · 2013-05-16T02:00:20.315Z · LW(p) · GW(p)

Well, so far, I have received zero evidence for the existence of either gods or Matrix Lords. This leaves me with, at best, just the original prior. I said "at best", because some of the observations I'd received could be interpreted as weak evidence against gods (or Matrix Lords), but I'm willing to ignore that for now.

If I'm using some measure of algorithmic complexity for the prior, what values should I arrive at ? Both the gods and the Matrix Lords are intelligent in some general way, which is already pretty complex; probably as complex as we humans are, at the very least. Both of them are supremely powerful, which translates into more complexity. In the case of the Matrix Lords, their hardware ought to be more complex than our entire Universe (or possibly Multiverse). Some flavors of gods are infinitely powerful, whereas others are "merely" on par with the Matrix Lords.

I could keep listing properties here, but hopefully this is enough for you to decide whether I'm on the right track. Given even the basics that I'd listed above, I find myself hard-pressed to come up with anything other than "epsilon" for my prior.
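
To make the "around epsilon" reasoning a little more concrete, here is a minimal sketch assuming the usual 2^-K weighting over hypotheses whose shortest description is K bits; the bit counts are invented for illustration and are not estimates of the actual complexity of gods or Matrix Lords.

```python
# Minimal sketch of a complexity-penalised prior, assuming the standard
# 2**-K weighting for a hypothesis whose shortest description is K bits.
# The bit counts below are invented purely for illustration.
def complexity_prior(description_length_bits):
    return 2.0 ** (-description_length_bits)

print(complexity_prior(10))     # ~9.8e-4 for a simple 10-bit hypothesis
print(complexity_prior(1000))   # ~9.3e-302 for a 1000-bit hypothesis
```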

comment by [deleted] · 2013-05-14T22:06:40.159Z · LW(p) · GW(p)

Theistic arguments per se, however, are generally bad.

Why would we expect there to be good arguments for the wrong answer?

We here at Less Wrong have seen many arguments for the existence of God...All of those arguments are wrong.

Thank you for being unambiguous, this is exactly the sort of thing I wanted to see if this community actually believed. Personally I think it reflects poorly on anyone's intellectual openness for them to believe the other side literally has no decent arguments.

Replies from: Estarlio, drnickbone, Luke_A_Somers, Bugmaster, SaidAchmiz
comment by Estarlio · 2013-05-15T07:54:02.741Z · LW(p) · GW(p)

Then you must believe the same with respect to homeopathic remedies, the flat earth society, and those who believe they can use their spiritual energy in the martial arts. Give us some good arguments for those.

There's a lot of stuff out there for which it seems to me there is no good argument. I mean really, let's try to maintain some sense of perspective here. The belief that everyone has a decent argument is, I think, pretty much demonstrably false. You presumably want us to believe that you're in the same category as people who ought to be taken seriously, but I don't really see how a belief in God is any more worthy of that than a belief in homeopathic remedies. At least, not based on your argument that all positions ought to be considered to have good arguments. If you're trying to make a general argument, you're going to get lumped in with them.

comment by drnickbone · 2013-05-15T07:11:31.815Z · LW(p) · GW(p)

An argument can be "decent" without being right. If you want an example, and can follow it, Kurt Gödel's ontological argument looks pretty decent. Consider that:

A) It is a logically valid argument

B) The premises sound fairly plausible (we can on the face of it imagine some sense of a "positive property" which would satisfy the premises)

C) It is not immediately obvious what is wrong with the premises

The wrongness can eventually be seen by carefully inspecting the premises, and checking which would go wrong in a null world (a possible world with no entities at all). Axiom 1 implies that if an impossible property is positive, then so is its negation (since an impossible property logically entails its negation). Axiom 2 says that can't be true - a property and its negation can't both be positive. So together these are a coded way of saying that all positive properties are possible properties. And then Axiom 5 (Necessary existence is a positive property) goes wrong, because necessary existence is not a possible property in the null world. So it is not a positive property. Axiom 5 is inconsistent with Axioms 1 and 2.
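
A compressed restatement of that inspection (a sketch only; the axiom numbering follows this comment rather than Gödel's own presentation, Pos(φ) abbreviates "φ is a positive property", and NE abbreviates "necessary existence"):

```latex
% Sketch only; axiom numbering follows the comment above, not Godel's original.
\begin{align*}
\text{A1:}\quad & \Box\,\forall x\,(\varphi(x)\rightarrow\psi(x)) \wedge \mathrm{Pos}(\varphi)
                  \;\rightarrow\; \mathrm{Pos}(\psi) \\
\text{A2:}\quad & \neg\big(\mathrm{Pos}(\varphi)\wedge\mathrm{Pos}(\neg\varphi)\big) \\
\text{A1+A2:}\quad & \mathrm{Pos}(\varphi)\;\rightarrow\;\Diamond\exists x\,\varphi(x)
                  \quad\text{(an impossible $\varphi$ entails $\neg\varphi$)} \\
\text{A5:}\quad & \mathrm{Pos}(\mathrm{NE}) \\
\text{Null world possible:}\quad & \neg\Diamond\exists x\,\mathrm{NE}(x),
                  \text{ so } \neg\mathrm{Pos}(\mathrm{NE}) \text{ -- contradicting A5.}
\end{align*}
```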

comment by Luke_A_Somers · 2013-05-14T22:53:10.969Z · LW(p) · GW(p)

There are arguments for the existence of God that are good in the sense that they raise my estimate of the likelihood of the existence of God by a substantial factor.

They aren't sufficient to raise the odds to an overall appreciable level.

comment by Bugmaster · 2013-05-14T22:20:58.957Z · LW(p) · GW(p)

Sometimes, the issues really are cut-and-dried, though. To use a rather trivial example, consider the debate about the shape of the Earth. There are still some people who believe it's flat. They don't have any good arguments. We've been to space, we know the Earth is round, it's going to be next to impossible to beat that.

comment by Said Achmiz (SaidAchmiz) · 2013-05-14T22:20:29.165Z · LW(p) · GW(p)

I should clarify that when I said:

Why would we expect there to be good arguments for the wrong answer?

I meant this as the rhetorical "we", not "we, Less Wrong".

And in general, you shouldn't take me, or any other commenter in particular (even Eliezer), to represent all of Less Wrong. This is a community blog, after all.

Personally I think it reflects poorly on anyone's intellectual openness for them to believe the other side literally has no decent arguments.

Did you read what I wrote about what makes arguments good or bad...?

Edit: Sorry, I see that you quoted from that comment, so presumably you did read it. That said, I'm not sure that what I said was clear, given your subsequent comments...

comment by Richard_Kennaway · 2013-05-15T15:59:51.932Z · LW(p) · GW(p)

I didn't mean that your initial beliefs should come out stronger. I meant that having updated for good arguments, and by incorporating them, your beliefs will be more complete, better thought-out, and more sustainable for the future.

That is what many people here have done regarding theism. Seen the best arguments, and decided that they fail utterly. Eliezer, quoted above, talks about Modern Orthodox Judaism allowing doubt as a ritual, but not doubt as a practice leading to a result. You would have us listen to arguments as ritual, but not actually come to a conclusion that some of them are wrong.

comment by hairyfigment · 2013-05-15T06:47:42.263Z · LW(p) · GW(p)

Yes, but what I expected was...um...atheists who were better than most, who had arrived at atheism through two-sided discourse.

Bob Altemeyer asked college students about this, some of whom had a strong allegiance to 'traditional' authority and some less so:

Interestingly, virtually everyone said she had questioned the existence of God at some time in her life. What did the authoritarian students do when this question arose? Most of all, they prayed for enlightenment. Secondly, they talked to their friends who believed in God. Or they talked with their parents. Or they read scriptures. In other words, they seldom made a two-sided search of the issue. Basically they seem to have been seeking reassurance about the Divinity, not pro- and con- arguments about its existence-- probably because they were terrified of the implications if there is no God.

Did low RWA students correspondingly immerse themselves in the atheist point of view? No. Instead they overwhelmingly said they had tried to figure things out for themselves. Yes they talked with nonbelievers and studied up on scientific findings that challenged traditional beliefs. But they also discussed things with friends who believed in God and they talked with their parents (almost all of whom believed in God). They exposed themselves to both yea and nay arguments, and then made up their minds--which often left them theists. In contrast, High RWAs didn’t take a chance on a two-sided search.

Despite what he says at the end, this "RWA" attitude correlates with religion - and Less Wrong seems to have unusually low RWA in any case. We also have a certain tendency to read books. You should therefore expect some of us to know the 'strongest' arguments for religion, and consider them bad. Don't just assert that we don't. Name an argument and see if we know it!

On a related note, you seem statistically in danger of losing your faith. If you want to keep it, you should use some form of Crowley's general method of religious devotion. While I failed to produce a vision of the Goddess Eris in the short time I allotted to this method, a kind of 'sophisticated' Discordianism did come to seem reasonable for a while.

comment by Said Achmiz (SaidAchmiz) · 2013-05-14T20:38:53.618Z · LW(p) · GW(p)

I don't know what you think a "strong argument" is. Arguments are not weapons, with a certain caliber and stopping power and so forth, such that two sides might go at each other with their respective arguments, and whoever's got the most firepower wins. That's not how it works.

An argument may be more or less persuasive (relative to some audience!), but that depends on many things, such as whether the argument hits certain emotional notes, whether it makes use of certain common fallacies and biases, or certain commonly held misconceptions; or whether it is structured so as to obscure its flaws; or even whether it's couched in fancy or beautiful sounding language.

Whether an argument is correct (i.e. valid and sound) is another matter entirely, and may have little to do with whether the argument, in actual fact, tends to persuade many people.

We here at Less Wrong have seen many arguments for the existence of God, many of which are found to be persuasive by many people who are not aware of their flaws (by "their" I can mean the arguments' flaws, or the flaws of the audience, i.e. cognitive biases and so forth).

All of those arguments are wrong (invalid, unsound, full of fallacies, etc.). That's what we mean when we say they're not "good" arguments.

comment by Said Achmiz (SaidAchmiz) · 2013-05-14T20:44:02.502Z · LW(p) · GW(p)

Oh, and another thing:

The optimal situation is that both sides have strong arguments, but atheism's arguments are stronger.

What do you mean, "optimal"? Look, for any question where there is, in principle, a correct answer (which might not be known), the totality of the information available to us at any given time will point to some answer (which might not be the correct one, given incomplete information). Arguments for that answer might be correct. Arguments for some other answer will be wrong.

Why would we expect there to be good arguments for the wrong answer?

Yes, but what I expected was...um...atheists who were better than most, who had arrived at atheism through two-sided discourse.

What does two-sided discourse look like, in your view?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-14T21:18:00.624Z · LW(p) · GW(p)

Look, for any question where there is, in principle, a correct answer (which might not be known), the totality of the information available to us at any given time will point to some answer (which might not be the correct one, given incomplete information). Arguments for that answer might be correct. Arguments for some other answer will be wrong.

It may help to note that ibidem has made earlier claims about how the meaning of "reliably evaluate evidence" is variable, so I suspect they would reject the claim that there's a correct answer towards which available information points at any given time.

More specifically, I would expect them to claim that there can be two or more mutually exclusive answers to which the same information points equally strongly, "depending on your paradigm."

comment by Bugmaster · 2013-05-14T20:33:17.158Z · LW(p) · GW(p)

The optimal situation is that both sides have strong arguments, but atheism's arguments are stronger.

Why is that the "optimal" situation ? Optimal according to what metric ?

who had arrived at atheism through two-sided discourse.

I personally never was religious, but AFAIK I'm an outlier. Most atheists arrived at atheism exactly in the way that you describe; others got there by reading the Bible. I don't have hard data to support this claim, though, so I could be wrong.

I think the holy books are kind of hampering mainstream religions, to be honest. We live in a world where pictures of distant galaxies are considered so mundane that they hardly ever make the news, and where the average person carries a supercomputer in his pocket, which connects him to a global communication network that speaks via invisible light. The average person typically wields this unimaginable power in order to inform his friends about quotidian matters such as "look at what I had for lunch today".

Against the backdrop of this much knowledge and power, the holy books look... well... kind of drab. They tell us that the world is a tiny disc, covered by a crystalline dome, and that the space outside this dome is inhabited by vaguely humanoid super-powered beings who, despite having the power to create worlds and cover them with crystalline domes, actually do care about what we had for lunch today. Hopefully it wasn't ham. Gods hate ham.

I understand that most theists don't take their holy books quite that literally, and that it's all supposed to be a big allegory for something or other, but still, it's hard to get excited about a text that didn't even get the shape of the Earth right.

That's my own personal perspective, at least.

Replies from: Nornagest
comment by Nornagest · 2013-05-14T21:37:45.552Z · LW(p) · GW(p)

To be fair, the crystal sphere thing doesn't appear in any Abrahamic holy books (that I know of); it's a feature of Aristotelian cosmology that the Church picked up during the period when it was essentially the only scholarly authority running in what used to be called Christendom and therefore needed an opinion on natural philosophy. I believe the bit in Genesis about erecting a firmament in the primordial water does ultimately refer to a traditional belief along similar lines, but it's pretty ambiguous.

Flat-earth cosmology was known to be false by Aristotle's time, although some monks in the early Middle Ages seem to have missed the memo -- again without explicit Biblical support, though. Science in the Islamic world always used a round-earth model as far as I know, and I don't remember reading anything in the Koran that contradicts that, although it's been several years.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-14T21:44:41.209Z · LW(p) · GW(p)

To be fair, the crystal sphere thing doesn't appear in any Abrahamic holy books (that I know of)

Ok, I may have overreached with the "crystal" thing, but there are definitely several passages that refer to a solid dome separating two sections of the world. This dome is typically called the "firmament", and it comes up in passages outside of Genesis on occasion.

As for the flat Earth, I admit that the claims there are weaker. The Bible manages "four corners of the Earth", but that could be a metaphor. The Devil also transports Jesus to the top of a tall mountain to show him "all the kingdoms of the Earth", but that could've been an illusion.

comment by Desrtopa · 2013-05-13T23:12:31.863Z · LW(p) · GW(p)

I agree with Jack here, but I'm going to add a piece of advice that used to be very common for newcomers, although it's dropped off over time as people called attention to the magnitude of the endeavor: I suggest that you finish reading the sequences before trying to engage in further religious debate here.

Eliezer wrote them in order to bring potential members of this community up to speed so that when we discuss matters, we could do it with a common background, so that everyone is on the same page and we can work out interesting disagreements without rehashing the same points over and over again. We don't all agree with all the contents of every article in the sequences, but they do contain a lot of core ideas that you have to understand to make sense of the things we think here. Reading them should help give you some idea, not just what we believe, but why we think that it makes more sense to believe those things than the alternatives.

The "rigidity" which you detect is not a product of particular closedmindedness, but rather a deliberate discarding of certain things we believe we have good reason not to put stock in, and reading the sequences should give you a much better idea of why. On the other hand, if you don't stick so closely to the topic of religion, I think you'll find that we're also open to a lot of ideas that most people aren't open to.

If we're to liken rationality to a martial art, then it would be one after the pattern of Jeet Kune Do; "Adapt what is useful, reject what is useless." A person trained in a style or school which lacked grounding in real life effectiveness might say "At my school, we learned techniques to knock guys out with 720 degree spinning kicks and stab people with knives launched from our toes, and they were awesome, but you guys just reject them out of hand. Your style seems really rigid and closed-minded to me." And the Jeet Kune Do practitioner might respond "Fancy spinning kicks and launching knives from your toes might be awesome, but they're awesome for things like displaying your gymnastic ability and finesse, not for defending yourself or defeating an opponent. If we want to learn to do those things, we'll take up gymnastics or toe-knife-throwing as hobbies, but when it comes to martial arts techniques, we want to stick to ones which are awesome at the things martial arts techniques are supposed to be for. And when it comes to those, we're not picky at all. "

Replies from: None, None
comment by [deleted] · 2013-05-14T21:55:36.396Z · LW(p) · GW(p)

Oh, and since I currently have negative karma, I'm unable to directly respond to your other comments.

In response to this one:

If Mormonism is incorrect, do you want to know that?

It's a very important question and one I need to think about more. In the next few days I'll write a Discussion post addressing my beliefs, including why I'm planning not to lose my faith at the moment.

And this one:

But you haven't showed much willingness so far to discuss your reasons for your belief in which way the evidence falls or ours.

Perhaps it's not fair of me to ask for your evidence without providing any of my own. However I really don't want to just become the irrational believer hopelessly trying to convince everyone else.

rather than concluding from your experience with us that we're rigid and closed-minded on the matter, you've taken it as a premise to begin with

I didn't come here expecting people to be rigid. But when I asked people what the best arguments for theism were, they either told me that there were none, or they rehashed bad ones that are refuted easily.

Are you familiar enough with the evidence that we're prepared to bring to the table that you think you could argue it yourself?

Yes, I definitely am. In an intellectual debate I could probably defend atheism better than belief; I was originally looking for good arguments in favor of theism and I thought that you guys of all people ought to know some. Suffice it to say that I was largely wrong about that.

Replies from: DSimon, Bugmaster
comment by DSimon · 2013-05-14T22:02:23.392Z · LW(p) · GW(p)

I didn't come here expecting people to be rigid. But when I asked people what the best arguments for theism were, they either told me that there were none, or they rehashed bad ones that are refuted easily.

How does this response mean that we're rigid?

comment by Bugmaster · 2013-05-14T22:14:15.328Z · LW(p) · GW(p)

I was originally looking for good arguments in favor of theism and I thought that you guys of all people ought to know some. Suffice it to say that I was largely wrong about that.

Sorry, I can't tell you what I don't know. All the arguments for theism that I've ever heard were either chock-full of logical fallacies, or purely instrumental, of the form "I don't care if any of this stuff is true or not, but I'm going to pretend that it is because doing so helps me in some way". I personally believe that there's a large performance penalty associated with believing false things, and thus arguments of the second sort are entirely unconvincing for me.

I am looking forward to your discussion post, however. Hopefully, I'll finally get to see some solid arguments for theism in there !

Replies from: None
comment by [deleted] · 2013-05-14T22:41:18.473Z · LW(p) · GW(p)

I am looking forward to your discussion post, however. Hopefully, I'll finally get to see some solid arguments for theism in there !

Sorry to disappoint you there. As I've said, I have no hope of convincing all of you and I'm not going to try; I wouldn't stand a chance in a formal debate against a dozen of you.

I was thinking more along the lines of why I think it's best to take the conclusions of a certain way of thinking with a grain of salt no matter how right its members think they are. Being skeptical of skepticism, one could say. So yes, it's likely going to seem like a long criticism of Less Wrong's fundamental philosophy, and chances are it won't be too popular—but you never know. I think it's a very good practice in life, not to accept any philosophy too fully.

What gives me the authority to say such things? An outside perspective.

Replies from: shware
comment by shware · 2013-05-15T05:47:21.178Z · LW(p) · GW(p)

An always open mind never closes on anything. There is a time to confess your ignorance and a time to relinquish your ignorance and all that...

comment by [deleted] · 2013-05-14T21:44:12.938Z · LW(p) · GW(p)

suggest that you finish reading the sequences

We don't all agree with all the contents of every article in the sequences, but they do contain a lot of core ideas that you have to understand to make sense of the things we think here.

I've read most of the sequences. If you believe there are core ideas I'm missing, tell me which ones and I'd be happy to research them. But chances are I've read that sequence already, especially if you mention ones about religion.

the magnitude of the endeavor

It's an important point. If you demand that a user read every word Dear Leader has ever written, you're not going to get many new voices willing to contribute, which as we all know is bad for the intellectual diversity of the group.

Fancy spinning kicks and launching knives from your toes might be awesome, but they're awesome for things like displaying your gymnastic ability and finesse, not for defending yourself or defeating an opponent.

See, the problem here is a difference in the perception of what is "useful." If you only learn martial arts because you want to defeat opponents, then sure, it's fine to reject 720 degree spinning kicks. But self-defense is not in fact the only point of martial arts. There is often an element of theater or even ritual that is lost when you reject what Jeet Kune Do thinks is "useless."

the things martial arts techniques are supposed to be for.

Says who? That's the sort of thing that a lot of people tend to disagree about, and there is no absolute right answer to such a question. In fact, I'll quote Wikipedia's lead sentence: "The martial arts are codified systems and traditions of combat practices, which are practiced for a variety of reasons: self-defense, competition, physical health and fitness, entertainment, as well as mental, physical, and spiritual development."

Replies from: Desrtopa
comment by Desrtopa · 2013-05-15T04:50:58.856Z · LW(p) · GW(p)

I'm going to unify a couple comment threads here.

Perhaps it's not fair of me to ask for your evidence without providing any of my own. However I really don't want to just become the irrational believer hopelessly trying to convince everyone else.

Honestly, I think you'd be coming across as much more reasonable if you were actually willing to discuss the evidence than you do by skirting around it. There are people here who wouldn't positively receive comments standing behind evidence that they think is weak, but at least some people would respect your willingness to engage in a potentially productive conversation. I don't think anyone here is going to react positively to "There's some really strong evidence, and I'm not going to talk about it, but you really ought to have come up with it already yourself."

Will Newsome gets like that sometimes, and when he does, his karma tends to plummet even faster than yours has, and he's built up a lot of it to begin with.

If you want to judge whether our inability to provide "good" arguments really is due to our lack of familiarity with the position we're rejecting, then there isn't really a better way than to expose us to the arguments you think we ought to be aware of and see if we're actually familiar with them.

Says who? That's the sort of thing that a lot of people tend to disagree about, and there is no absolute right answer to such a question. In fact, I'll quote Wikipedia's lead sentence: "The martial arts are codified systems and traditions of combat practices, which are practiced for a variety of reasons: self-defense, competition, physical health and fitness, entertainment, as well as mental, physical, and spiritual development."

Well, if you want to learn techniques for historical value, to show off your gymnastic ability, etc. learning Jeet Kune Do doesn't preclude that, but it's important to be aware of what the techniques are useful for and what they're not.

Similarly, being a rationalist by no means precludes appreciating tradition, participating in a tight knit community, appreciating the power of a thematic message, etc. But it's important to be aware of what information increases the likelihood that a belief is actually true, and what doesn't.

Replies from: DSimon
comment by DSimon · 2013-05-15T15:01:36.754Z · LW(p) · GW(p)

Honestly, I think you'd be coming across as much more reasonable if you were actually willing to discuss the evidence than you do by skirting around it.

I second this recommendation.

Ibidem, it seems that you don't want to be put in the position of defending your beliefs among people who might consider them weird, or stupid, or even harmful. I empathize a lot with that; I've been in the same situation enough times to know how nasty and unfun it can get.

But unfortunately, I don't think there's another way the conversation can continue. You've said a few times that you expected us to know of some good arguments for theism, and that you're disappointed that we don't have any. Well, what can anyone say in response to that but "Okay, please show us what we're missing"?

I think you can at least trust the community here to take what you say seriously, and not just dismiss you out of hand or use it as an opportunity to score tribal points and virtual high-fives. We're at least self-aware enough to avoid those discussion traps most of the time.

Replies from: None
comment by [deleted] · 2013-05-15T21:12:11.940Z · LW(p) · GW(p)

Okay.

I don't think there's another way the conversation can continue.

I'd be happy to end the conversation here, as you're right that it's no longer getting anywhere, but I realize that that would be lame and unsportsmanlike of me. Everyone here is expecting me to provide good arguments. I said from the start that I didn't have any, and hoped you would, but when you guys couldn't help me, I said "but there must be some out there." I acknowledge now that I have little choice but to come up with some, and I'll do my best.

I will try to explain my position, and since everyone is asking I'll include formal debate-style arguments in favor of religion.

Please, though, give me a few days. I'm still unsure where I stand in many ways, but in the last week my views have evolved on a lot of issues.

So I'm going to write about a) my arguments in favor of religion, though I don't feel they are sufficient and I want to improve them, and b) why I don't fully accept the LW way of thinking.

I'm still thinking about it, and will be until I post to the Discussion thread in a few days or, perhaps (but not likely), weeks.

And then on a topic that seems to be mostly unrelated, I want to know what everyone thinks of my response to EY concerning the appropriateness of religious discussion on this website.

(I'm assuming that everyone interested in my other threads will see this here through "recent comments.")

EDIT: On second thought, my arguments and my thoughts probably ought to be in two separate posts.

Replies from: Vladimir_Nesov, DSimon, None, SaidAchmiz, TheOtherDave, shminux, DSimon
comment by Vladimir_Nesov · 2013-05-15T22:30:24.859Z · LW(p) · GW(p)

So I'm going to write about a) my arguments in favor of religion, though I don't feel they are sufficient and I want to improve them, and b) why I don't fully accept the LW way of thinking.

I'm still thinking about it, and will be until I post to the Discussion...

I expect this is a bad idea. The post will probably get downvoted, and might additionally provoke another spurt of useless discussion. Lurk for a few more months instead, seeking occasional clarification without actively debating anything.

Replies from: None
comment by [deleted] · 2013-05-16T13:39:39.795Z · LW(p) · GW(p)

I've now had an overwhelming request to hear my supposed strong arguments. It would be awfully lame of me to drop out now.

useless discussion

People want to discuss this, which means they don't think it's useless.

Replies from: Vladimir_Nesov, Juno_Watt
comment by Vladimir_Nesov · 2013-05-16T15:35:33.111Z · LW(p) · GW(p)

I've now had an overwhelming request to hear my supposed strong arguments. It would be awfully lame of me to drop out now.

Just say "Oops" and move on. My point is that you almost certainly don't have good arguments, which is why your post won't be well-received. If it is so, it's better to notice that it is so in advance and act accordingly.

comment by Juno_Watt · 2013-05-16T15:59:31.593Z · LW(p) · GW(p)

Have you tested the strength of these arguments?

comment by DSimon · 2013-05-15T21:27:52.249Z · LW(p) · GW(p)

I don't feel [my arguments in favor of religion] are sufficient and I want to improve them

I know you've heard this from several other people in this thread, but I feel it's important to reiterate: this seems to be a really obvious case of putting the cart before the horse. It just doesn't make sense to us that you are interested only in finding arguments that bolster a particular belief, rather than looking for the best arguments available in general, for all the beliefs you might choose among.

I'm not asking you to respond to this right now, but please keep it firmly in mind for your Discussion post, as it's probably going to be the #1 source of disagreement.

comment by [deleted] · 2013-05-15T22:15:03.846Z · LW(p) · GW(p)

I said from the start that I didn't have any, and hoped you would, but when you guys couldn't help me, I said "but there must be some out there."

This is a very odd epistemic position to be in.

If you expect there to be strong evidence for something, that means you should already strongly believe it. Whether or not you will find such evidence or what it is, is not the interesting question. The interesting question is why do you have that strong belief now? What strong evidence do you already posses that leads you to believe this thing?

If you haven't got any reason to believe a thing, then it's just like all the other things you don't have reason to believe, of which there are very many, and most of them are false. Why is this one different?

The correct response, when you notice that a belief is unsupported, is to say oops and move on. The incorrect response is to go looking specifically for confirming evidence. That is writing the bottom line in the wrong place, and is not a reliable truth-finding procedure.

Also, "debate style" arguments are generally frowned upon around here. Epistemology is between you and God, so to speak. Do your thing, collect your evidence, come to your conclusions. This community is here to help you learn to find the truth, not to debate your beliefs.

Replies from: Bugmaster, Eugine_Nier
comment by Bugmaster · 2013-05-16T02:49:24.959Z · LW(p) · GW(p)

Do your thing, collect your evidence, come to your conclusions. This community is here to help you learn to find the truth, not to debate your beliefs.

That's a very good point. From what I've seen, most Christians who debate atheists end up using all kinds of convoluted philosophical arguments to support their position -- whereas in reality, they don't care about these arguments one way or another, since these are not the arguments that convinced them that their version of Christianity is true. Listening to such arguments would be a waste of my time, IMO.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-05-17T02:23:06.600Z · LW(p) · GW(p)

From what I've seen, most Christians who debate atheists end up using all kinds of convoluted philosophical arguments to support their position

The same is the case for a lot of atheist arguments.

whereas in reality, they don't care about these arguments one way or another, since these are not the arguments that convinced them that their version of Christianity is true. Listening to such arguments would be a waste of my time, IMO.

See my comment here.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-17T04:59:59.922Z · LW(p) · GW(p)

See my comment here.

Yeah, you make a good point when you say that we need "Bayesian evidence", not just the folk kind of "evidence". However, most people don't know what "Bayesian evidence" means, because this is a very specific term that's common on Less Wrong but approximately nowhere else. I don't know a better way to put it, though.

That said, my comment wasn't about different kinds of evidence necessarily. What I would like to hear from a Christian debater is a statement like, "This thing right here ? This is what caused me to become a Reformed Presbilutheran in the first place." If that thing turns out to be something like, "God spoke to me personally and I never questioned the experience" or "I was raised that way and never gave it a second thought", that's fine. What I don't want to do is sit there listening to some new version of the Kalaam Cosmological Argument (or whatever) for no good reason, when even the person advancing the argument doesn't put any stock in it.

Replies from: CCC
comment by CCC · 2013-05-17T09:04:02.140Z · LW(p) · GW(p)

What I would like to hear from a Christian debater is a statement like, "This thing right here ? This is what caused me to become a Reformed Presbilutheran in the first place."

I was raised Roman Catholic. I did give it a second thought; I found, through my life, very little evidence against the existence of God, and some slight evidence for the existence of God. (It doesn't communicate well; it's all anecdotal).

I do find, on occasion, that the actions of God are completely mysterious to me. However, an omniscient being would have access to a whole lot of data that I do not have access to; in light of that, I tend to assume that He knows what He is doing.

The existence of God also implies that the universe has some purpose, for which it is optimised. I'm not quite sure what that purpose is; the major purpose of the universe may be something that won't happen for the next ten billion years. However, trying to imagine what the purpose could be is an interesting occasional intellectual exercise.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-05-17T12:27:32.711Z · LW(p) · GW(p)

I found, through my life, very little evidence against the existence of God

May I ask what you expected evidence against the existence of God to have looked like?

Replies from: CCC
comment by CCC · 2013-05-17T13:17:41.563Z · LW(p) · GW(p)

That is entirely the right question to ask. And the answer is, I don't have the faintest idea.

The question there is, what would a universe without God look like? And that question is one that I can't answer. I'd guess that such a universe, if it were possible, would have more-or-less entirely arbitrary and random natural laws; I'd imagine that it would be unlikely to develop intelligent life; and it would be unlikely for said intelligent life, if it developed, to be able to gather any understanding of the random and arbitrary natural laws at all.

The trouble is, this line of reasoning promptly falls into the same trouble as any other anthropic argument. The fact that I'm here, thinking about it, means that there is intelligent life in this universe. So a universe without intelligent life is counterfactual, right from the start. I knew that when I started constructing the argument; I can't be sure that I'm not constructing an argument that's somehow flawed. It's very easy, when I'm sure of the answer, to create an argument that's more rationalising than rationality; and it can be hard to tell if I'm doing that.

Replies from: Eliezer_Yudkowsky, Richard_Kennaway, Bugmaster
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-18T07:00:52.067Z · LW(p) · GW(p)

Doesn't this argument Prove Too Much by also showing that without a Metagod, God should be expected to have arbitrary and random governing principles? The universe is ordered, but trying to explain that by appealing to an ordered God begs the question of what sort of ordered Metagod constructed the first one.

Replies from: CCC
comment by CCC · 2013-05-18T08:07:58.344Z · LW(p) · GW(p)

I don't think that necessarily follows. A sufficiently intelligent mind (and I think I can assume that if God exists, then He is sufficiently intelligent) can impose self-consistency and order on itself.

This also leads to the possible alternate hypothesis that the universe is, in fact, an intelligent mind in and of itself; that would be pantheism, I think.

Of course, this does not prevent the possibility of a Pebblesorter God, or a Paperclipper God. To find out whether these are the case, we can look at the universe; there certainly don't seem to be enough paperclips around for a Paperclipper God. There might well be a Beetler God, of course; there's plenty of beetles. Or a Planetsorter God, a large-scale variant on the Pebblesorter; as far as we know, all the planets are neatly sorted into groups around stars. Order, by itself, does not necessarily mean an order that we would have to agree with.

Replies from: pragmatist
comment by pragmatist · 2013-05-18T09:44:08.418Z · LW(p) · GW(p)

A sufficiently intelligent mind (and I think I can assume that if God exists, then He is sufficiently intelligent) can impose self-consistency and order on itself.

This begs Eliezer's question, I think. Intelligence itself is highly non-arbitrary and rule-governed, so by positing that God is sufficiently intelligent (and the bar for sufficiency here is pretty high), you're already sneaking in a bunch of unexplained orderliness. So in this particular case, no, I don't think you can assume that if God exists, then He is sufficiently intelligent, just like I can't respond to your original point by assuming that if the universe exists, then it is orderly.

Replies from: CCC
comment by CCC · 2013-05-19T07:08:39.125Z · LW(p) · GW(p)

Intelligence itself is highly non-arbitrary and rule-governed

I disagree. Intelligence makes its own rules once it is there; but the human brain is one of the most arbitrary and hard-to-understand pieces of equipment that we know about. There have been a lot of very smart people trying to build AI for a very long time; if the creation of intelligence were highly non-arbitrary and followed well-known rules, we would have working AI by now.

So, yes; I think that intelligence can arise from arbitrary randomness. I'd go further, and claim that if it can't arise from arbitrary randomness then it can't exist at all; either intelligence arose in the form of God who then created an orderly universe (the theist hypothesis), or an arbitrary universe came into existence with random (and suspiciously orderly) laws that then led to intelligence in the form of humanity (the atheist hypothesis).

So in this particular case, no, I don't think you can assume that if God exists, then He is sufficiently intelligent, just like I can't respond to your original point by assuming that if the universe exists, then it is orderly.

Fair enough. Then let me put it this way; if God is not sufficiently intelligent, then God would be unable to create the ordered universe that we see; in this case, an ordered universe would be no more likely than it would be without God. An ordered universe is therefore evidence in favour of the claim that if God exists, then He is sufficiently intelligent to create an ordered universe.

Replies from: pragmatist, Juno_Watt, Bugmaster
comment by pragmatist · 2013-05-19T14:18:24.398Z · LW(p) · GW(p)

I disagree. Intelligence makes its own rules once it is there; but the human brain is one of the most arbitrary and hard-to-understand pieces of equipment that we know about. There have been a lot of very smart people trying to build AI for a very long time; if the creation of intelligence were highly non-arbitrary and followed well-known rules, we would have working AI by now.

I agree that intelligence itself is an optimizing process (which I presume is what you mean by "making its own rules"), but it is also the product of an optimizing process, natural selection. Your claim that it is arbitrary confuses the map and the territory. Just because we don't fully understand the rules governing the functioning of the brain does not mean it is arbitrary. Maybe it is weak evidence for this claim, but I think that is swamped by the considerable evidence that intelligence is exquisitely optimized for various quite complex purposes (and also that it operates in accord with the orderly laws of nature).

Also, smart people have been able to build AIs (albeit not AGIs), and the procedure for building machines that can perform intelligently at various tasks involves quite a bit of design. We may not know what rules govern our brain, but when we build systems that mimic (and often outperform) aspects of our mental function, we do it by programming rules.

I suspect, though, that we are talking past each other a bit here. I think you're using the words "random" and "arbitrary" in ways with which I am unfamiliar, and, I must confess, seem confused. In what sense is the second horn of your dilemma an "arbitrary universe [coming] into existence with random (and suspiciously orderly) laws"? What does it mean to describe the universe as arbitrary and random while simultaneously acknowledging its orderliness? Do you simply mean "uncaused", because (a) that is not the only alternative to theism, and (b) I don't see why one would expect an uncaused universe (as opposed to a universe picked using a random selection process) not to have orderly laws.

Fair enough. Then let me put it this way; if God is not sufficiently intelligent, then God would be unable to create the ordered universe that we see; in this case, an ordered universe would be no more likely than it would be without God. An ordered universe is therefore evidence in favour of the claim that if God exists, then He is sufficiently intelligent to create an ordered universe.

OK, but this doesn't respond to Eliezer's point. If you conditionalize on the existence of (a Christianish) God, then plausibly an intelligent God is more likely than an unintelligent one, given the orderliness of the universe. But Eliezer was contesting your claim that the orderliness of the universe is evidence for the existence of God, while also not being evidence for the existence of a Metagod.

So Eliezer's question is, if P(orderliness | God) > P(orderliness | ~God), then why not also P(intelligent God | Metagod) > P(intelligent God | ~Metagod)? Your response is basically that P(intelligent God | God & orderliness) > P(~intelligent God | God & orderliness). How does this help?

Replies from: Juno_Watt, CCC
comment by Juno_Watt · 2013-05-23T15:21:12.350Z · LW(p) · GW(p)

I don't really follow this. Things in Platonia or Tegmark level IV don't have separate probabilities. Any coherent mathematical structure is guaranteed to exist. (And infinite ones are no problem.) So the probability of an infinite stack of metagods depends on the coherence of a stack of metagods being considered a coherent mathematical structure, and the likelihood of our living in a Tegmark IV.

comment by CCC · 2013-05-20T08:13:01.033Z · LW(p) · GW(p)

In what sense is the second horn of your dilemma an "arbitrary universe [coming] into existence with random (and suspiciously orderly) laws"? What does it mean to describe the universe as arbitrary and random while simultaneously acknowledging its orderliness? Do you simply mean "uncaused", because (a) that is not the only alternative to theism, and (b) I don't see why one would expect an uncaused universe (as opposed to a universe picked using a random selection process) not to have orderly laws.

What I mean is, not planned. If I toss a fair coin ten thousand times, I have an outcome (a string of heads and tails) that would be arbitrary and random. It is possible that this sequence will be an exactly alternating sequence of heads and tails (HTHTHTHTHTHT...) extending for all ten thousand tosses (a very orderly result); but if I were to actually observe such an orderly result, I would suspect that there is an intelligent agent controlling that result in some manner. (That is what I mean by 'suspiciously orderly' - it's orderly enough to suggest planning).
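
To put a rough number on "suspiciously orderly" (a sketch, assuming a fair coin and independent tosses): any one specific ten-thousand-toss sequence, including the perfectly alternating one, has probability 0.5^10000, which is far too small to represent directly, so the snippet below works in log space.

```python
# Rough numerical illustration (assumes a fair coin and 10,000 independent tosses).
# Any one *specific* sequence, including the perfectly alternating HTHT... one,
# has probability 0.5**10000; compute its base-10 logarithm to avoid underflow.
from math import log10

log10_prob = 10000 * log10(0.5)
print(log10_prob)   # about -3010, i.e. a probability of roughly 10**-3010
```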

So Eliezer's question is, if P(orderliness | God) > P(orderliness | ~God), then why not also P(intelligent God | Metagod) > P(intelligent God | ~Metagod)? Your response is basically that P(intelligent God | God & orderliness) > P(~intelligent God | God & orderliness). How does this help?

Well, it makes sense that P(intelligent God | Metagod) > P(intelligent God | ~Metagod). And therefore P(Metagod | Metametagod) > P(Metagod | ~Metametagod), and so on to infinity; but an infinity of metagods and metametagods and so on is clearly an absurd result. The chain has to stop somewhere, and that 'somewhere' has to be with an intelligent being. Therefore, there has to be an intelligent being that can either exist without being created by an intelligent creator, or that can create itself in some sort of temporal loop. (As I understand it, the atheist viewpoint is that a human is an intelligent being that can exist without requiring an intelligent creator).

And my point was that P(intelligent God | ~Metagod) is non-zero. The chain can stop. P(Metagod | intelligent God) may be fairly high; but P(Metametagod | intelligent God) must be lower (since P(Metametagod | Metagod) < 1). If I go far enough along the chain, I expect to find that P(Metametametametametametametagod | intelligent God) is fairly low.

Does that help?

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-22T18:58:49.748Z · LW(p) · GW(p)

but an infinity of metagods and metametagods and so on is clearly an absurd result.

That's not clear. There is presumably something like that in Tegmark's level IV.

The chain has to stop somewhere, and that 'somewhere' has to be with an intelligent being.

You haven't established the 'has to' (p==1.0). You can always explain Order coming from Randomness by assuming enough randomness. Any finite string can be found with p>0.5 in a sufficiently long random string. Assuming huge amounts of unobserved randomness is not elegant, but neither is assuming stacks of metagods. Your preferred option is to reject god-needs-a-metagod without giving a reason, but just because the alternatives seem worse. But that is very much a subjective judgement.

Replies from: CCC
comment by CCC · 2013-05-23T07:32:46.928Z · LW(p) · GW(p)

That's not clear.. There is presumably something like that in Tegmark's level IV.

Assume that P(meta^(x+1)god | meta^(x)god) = Q, where Q < 1.0 for all x. Consider an infinite chain; what is P(meta^(∞)god | god)?

This would be P(meta^(x)god | god) = Q^x, taken in the limit as x approaches infinity. Since Q < 1.0, this limit is equal to zero.

...hmmm. Now that I think about it, that applies for any constant Q. It may be possible to craft a function Q(x) such that the limit as x approaches infinity is non-zero; for example, if I set Q(1)=0.75 and then Q(x) for x>1 such that, when multiplied by the product of all the Q(x)s so far, the distance between the previous product and 0.5 is halved (thus Q(2)=5/6, Q(3)=9/10, Q(4)=17/18, and so on); then Q(x) asymptotically approaches 1, while P(meta^(∞)god | god) = 0.5.
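
A quick numerical check of that construction (a sketch; the function below is just one way of implementing "halve the remaining distance to 0.5 at each step", not the only one):

```python
# Numerical check of the construction above: choose each Q(x) so that the
# running product of the Q's halves its remaining distance to 0.5 at each step.
def next_q(prev_product):
    target = 0.5 + (prev_product - 0.5) / 2.0
    return target / prev_product

product = 0.75            # Q(1) = 0.75
qs = [0.75]
for _ in range(2, 12):
    q = next_q(product)
    qs.append(q)
    product *= q

print(qs[:4])             # ~[0.75, 0.8333 (5/6), 0.9 (9/10), 0.9444 (17/18)]
print(product)            # stays above 0.5 and converges towards it
```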

You haven't established the 'has to' (p==1.0)

You're right, and thank you for pointing that out. I've now shown that p<1.0 (it's still pretty high, I'd think, but it's not quite 1).

Replies from: wuncidunci, Juno_Watt
comment by wuncidunci · 2013-05-23T09:19:20.970Z · LW(p) · GW(p)

You seem to be neglecting the possibility of a cyclical god structure. Something which might very well be possible in Tegmark level IV if all the gods are computable.

Replies from: CCC
comment by CCC · 2013-05-23T17:11:26.338Z · LW(p) · GW(p)

Huh. You are right; I had neglected such a cyclical god structure. That would appear to require time travel, at least once, to get the cycle started.

Replies from: wuncidunci
comment by wuncidunci · 2013-05-23T17:30:00.060Z · LW(p) · GW(p)

Not strictly speaking. Warning: what follows is pure speculation about possibilities which may have little to no relation to how a computational multiverse would actually work. It could be possible that there are three computable universes A, B & C, such that the beings in A run a simulation of B, appearing as gods to the intelligences therein, the beings in B do the same with C, and finally the beings in C do the same with A. It would probably be very hard to recognize such a structure if you were in it, because of the enormous slowdowns in the simulation inside your simulation. Though it might have a comparatively short description as the solution to an equation relating a number of universes cyclically.

In case that wasn't clear, I imagine these universes to have a common, quite high-level specification, with minds being primitive objects and so on. I don't think this would work at all if the universes had physics similar to our own: needing planets to form from elementary particles, and evolution to run on those planets to get any minds at all, to say nothing of the computational capability to simulate similar universes.

Replies from: CCC
comment by CCC · 2013-05-23T17:34:39.042Z · LW(p) · GW(p)

...congratulations. I thought time travel would be a necessity; I certainly didn't expect that intuition to be disproved so quickly.

It may be speculative, but I don't see any glaring reason to disprove your hypothesised structure.

comment by Juno_Watt · 2013-05-23T15:22:49.177Z · LW(p) · GW(p)

I don't really follow this. Things in Platonia or Tegmark level IV don't have separate probabilities. Any coherent mathematical structure is guaranteed to exist (and infinite ones are no problem). So the probability of an infinite stack of metagods depends on whether a stack of metagods counts as a coherent mathematical structure, and on the likelihood of our living in a Tegmark IV.

Replies from: CCC
comment by CCC · 2013-05-23T17:14:37.779Z · LW(p) · GW(p)

Ah. I was trying to - very vaguely - estimate the probability that we live in such a universe.

I hope that closes the inferential gap.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-23T17:33:37.149Z · LW(p) · GW(p)

I don't see why the probability would decompose into the probability of its parts -- a T-IV is all or nothing, as far as I can see. It actually contains very little information -- it isn't a very fine-grained region in UniverseSpace.

Replies from: CCC
comment by CCC · 2013-05-23T17:38:45.863Z · LW(p) · GW(p)

My intuition is that universes with more metagods will be less common in the space of all that can possibly be. We exist in a given universe, which is perforce a universe that can possibly be; I'm trying to guess which one.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-23T17:41:55.168Z · LW(p) · GW(p)

T-IV is already a large chunk of UniverseSpace -- it is everything that is mathematically possible. The T-IV question is more about how large a region of UniverseSpace the universe is, than about pinpointing a small region.

Replies from: CCC
comment by CCC · 2013-05-24T10:40:34.578Z · LW(p) · GW(p)

Ah. Then I think we've been talking past each other for some time now.

comment by Juno_Watt · 2013-05-22T18:51:31.395Z · LW(p) · GW(p)

I disagree. Intelligence makes its own rules once it is there; but the human brain is one of the most arbitrary and hard-to-understand pieces of equipment that we know about.

It's not arbitrary in the sense of random. It's arbitrary in the sense of not following obvious a priori principles. It may impose its own higher-order rules, but that is something that happens in a system that already combines order and chaos in a very subtle and hard-to-duplicate way. Simple, comprehensible order of the kind you detect and admire in the physical universe at large is easier to do than designing a brain. No one can build an AGI, but physicists build models of physical systems all the time.

Replies from: CCC
comment by CCC · 2013-05-23T07:45:04.322Z · LW(p) · GW(p)

It's not arbitrary in the sense of random. It's arbitrary in the sense of not following obvious a priori principles.

Agreed. The human brain is the output of a long, optimising process known as evolution.

Simple, comprehensible order of the kind you detect and admire in the physical universe at large is easier to do than designing a brain. No one can build an AGI, but physicists build models of physical systems all the time.

Yes. Simple, comprehensible order is one of the easiest things to design; as you say, physicists do it all the time. But a lot of systems that are explicitly not designed (for example, the stock market) are very chaotic and extremely hard to model accurately.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-23T10:41:28.232Z · LW(p) · GW(p)

I still don't see why you would think order of a kind comprehensible to humans in the universe is evidence it was designed by a much smarter entity.

Replies from: CCC
comment by CCC · 2013-05-23T17:25:43.340Z · LW(p) · GW(p)

I'm trying to use it as evidence that it was designed at all.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-23T17:32:03.570Z · LW(p) · GW(p)

Would its being designed by a Matrix Lord of non-superhuman intelligence help your case?

Replies from: CCC
comment by CCC · 2013-05-23T17:36:27.791Z · LW(p) · GW(p)

It would certainly explain the observations that I am using as evidence.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-23T17:43:21.634Z · LW(p) · GW(p)

Why is positing unobserved Matrix Lords better than positing unobserved randomness or unobserved failed universes?

Replies from: CCC
comment by CCC · 2013-05-24T10:43:06.232Z · LW(p) · GW(p)

Why is positing unobserved Matrix Lords better than positing unobserved randomness or unobserved failed universes?

Those options would also explain the observations that I am basing my argument on. I don't have any argument for why any one of those options is at all better than any other one.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-24T11:24:24.698Z · LW(p) · GW(p)

The zero-one-infinity rule might help you out.

Replies from: CCC
comment by CCC · 2013-05-26T03:51:41.190Z · LW(p) · GW(p)

So, you're suggesting there should be either zero, one, or a potentially infinite number of Matrix Lords, and never (say) exactly three?

comment by Bugmaster · 2013-05-19T07:27:57.555Z · LW(p) · GW(p)

So, yes; I think that intelligence can arise from arbitrary randomness.

Did you mean to say "can not" in that sentence ?

Replies from: CCC
comment by CCC · 2013-05-19T07:52:07.662Z · LW(p) · GW(p)

No, I did not.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-19T08:01:37.007Z · LW(p) · GW(p)

I'm not sure I understand your argument, then. If intelligence can arise from "arbitrary randomness", then a universe that contains intelligence is evidence neither for nor against a creator deity, once you take the anthropic principle into account.

Replies from: CCC
comment by CCC · 2013-05-20T07:40:44.487Z · LW(p) · GW(p)

Yes, intelligence can arise from arbitrary randomness; I'm not using intelligence as evidence of an intelligent Creator. Using intelligence as an indicator of anything falls foul of anthropic principles.

My argument is that a universe as straightforward, and as comprehensible in its natural laws, as ours seems about as unlikely as tossing a coin ten thousand times and getting an exact alternating pattern of heads and tails (HTHTHTHTHTHT...), or a lottery draw that consists of the numbers 1, 2, 3, 4, 5, 6 in that order.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-20T22:03:07.324Z · LW(p) · GW(p)

Isn't this just the anthropic principle in action ? Mathematically speaking, the probability of "123456" is exactly the same as that of "632415" or any other sequence. We humans only think that "123456" is special because we especially enjoy monotonically increasing numbers.

Replies from: CCC
comment by CCC · 2013-05-21T08:01:35.912Z · LW(p) · GW(p)

Isn't this just the anthropic principle in action ?

I'm not sure. The anthropic principle is arguing from the existence of an intelligent observer; I'm arguing from the existence of an orderly universe. I don't think that the existence of an orderly universe is necessarily highly correlated with the existence of an intelligent observer. Unfortunately, lacking a large number of universes to compare with each other, I have no proof of that.

Mathematically speaking, the probability of "123456" is exactly the same as that of "632415" or any other sequence. We humans only think that "123456" is special because we especially enjoy monotonically increasing numbers.

Yes. I do not claim that the existence of an orderly universe is undeniable proof of the existence of God; I simply claim that it is evidence which suggests that the universe is planned, and therefore that there is (or was) a Planner.

Consider the lottery example; there are a vast number of sequences that could be generated, such as (35, 3, 19, 45, 15, 8). All are equally probable, in a fair lottery. However, in a biased, unfair lottery, in which the result is predetermined by an intelligent agent, the sort of patterns that might appeal to an intelligent agent (e.g. 1, 2, 3, 4, 5, 6) are more likely to turn up. So P(bias|(1, 2, 3, 4, 5, 6)) > P(bias|(35, 3, 19, 45, 15, 8)).
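To show the direction of that inequality numerically, here is a small sketch with entirely made-up parameters: the prior probability of rigging and the chance that a rigger picks a "pretty" sequential set are my own assumptions, and only the 49-ball, 6-number structure comes from the discussion.

```python
from math import comb

# Toy Bayes calculation with invented numbers.
p_bias = 0.01                      # assumed prior probability that the lottery is rigged
n_outcomes = comb(49, 6)           # unordered outcomes in a fair 49-ball, 6-number draw
p_set_given_fair = 1 / n_outcomes  # any specific set is equally likely if the draw is fair

# Assume a rigger picks one of the 44 sequential sets one time in a thousand,
# far more often than chance, and any other specific set correspondingly less often.
p_seq_given_bias = 1e-3
p_other_given_bias = (1 - 44 * p_seq_given_bias) / (n_outcomes - 44)

def p_bias_given_draw(p_draw_given_bias):
    num = p_draw_given_bias * p_bias
    return num / (num + p_set_given_fair * (1 - p_bias))

print(p_bias_given_draw(p_seq_given_bias))    # P(bias | 1,2,3,4,5,6)     ~0.99
print(p_bias_given_draw(p_other_given_bias))  # P(bias | 35,3,19,45,15,8) ~0.0096, below the prior
```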

Replies from: drnickbone, Bugmaster
comment by drnickbone · 2013-05-21T09:02:12.927Z · LW(p) · GW(p)

anthropic principle is arguing from the existence of an intelligent observer; I'm arguing from the existence of an orderly universe. I don't think that the existence of an orderly universe is necessarily highly correlated with the existence of an intelligent observer.

This depends on the direction of the correlation, doesn't it? It could well be that P[Observer|Orderly universe] is low (plenty of types of order are uninhabitable) but that P[Orderly universe|Observer] is high since P[Observer|Disorderly universe] is very much lower than P[Observer|Orderly universe]. So, for example, if reality consists of a mixture of orderly and disorderly universes, then we (as observers) would expect to find ourselves in one of the "orderly" ones, and the fact that we do isn't much evidence for anything.
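A tiny worked example, with numbers I have simply invented to make the asymmetry visible (nothing here is a claim about the actual distribution of universes):

```python
# Invented numbers: orderly universes are rare and usually uninhabited,
# yet almost every observer still finds themselves in an orderly one.
p_orderly = 0.10                 # assumed fraction of universes that are orderly
p_obs_given_orderly = 0.01       # low: most orderly universes have no observers
p_obs_given_disorderly = 1e-9    # but vastly lower still in disorderly universes

p_obs = (p_obs_given_orderly * p_orderly
         + p_obs_given_disorderly * (1 - p_orderly))
p_orderly_given_obs = p_obs_given_orderly * p_orderly / p_obs
print(p_orderly_given_obs)       # ~0.999999: observers almost surely see order,
                                 # so seeing order is scant evidence of anything further
```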

Another thought is whether there are any universes with no order at all. You are likely imagining a "random" universe with all sorts of unpredictable events, but then are the parts of the universe dependent or independent random variables? If they are dependent, then those dependencies are a form of order. If they are independent, then the universe will satisfy statistical laws (large number laws for instance), so this is also a form of order. Very difficult to imagine a universe with no order.

Replies from: CCC
comment by CCC · 2013-05-22T08:01:34.756Z · LW(p) · GW(p)

It could well be that P[Observer|Orderly universe] is low (plenty of types of order are uninhabitable) but that P[Orderly universe|Observer] is high since P[Observer|Disorderly universe] is very much lower than P[Observer|Orderly universe].

Yes, it could be. And if this is true, then my line of argument here falls apart entirely.

Another thought is whether there are any universes with no order at all. You are likely imagining a "random" universe with all sorts of unpredictable events, but then are the parts of the universe dependent or independent random variables? If they are dependent, then those dependencies are a form of order. If they are independent, then the universe will satisfy statistical laws (large number laws for instance), so this is also a form of order. Very difficult to imagine a universe with no order.

Huh. A very good point. I was thinking in terms of randomised natural laws - natural laws, in short, that appear to make very little sense - and hadn't considered it from that angle.

Hmmm... one example of a randomised universe might be one wherein any matter can accelerate in any direction at any time for absolutely no reason, and most matter does so on a fairly regular basis (mean once a day, standard deviation six months). If the force of the acceleration is low enough (say, one metre per second squared on average, expended for an average of ten seconds), and all the other laws of nature are similar to those of our universe (so it is still a mostly orderly universe), then I can easily imagine intelligence arising in such a universe as well.

Replies from: drnickbone
comment by drnickbone · 2013-05-23T16:52:30.537Z · LW(p) · GW(p)

Hmmm... one example of a randomised universe might be one wherein any matter can accelerate in any direction at any time for absolutely no reason, and most matter does so on a fairly regular basis

Well, let's take that example, since the amount of "random acceleration" can be parameterised. If the parameter is very low, then we're never going to observe it (so perhaps our universe actually is like this, but we haven't detected it yet!). If the parameter is very large, then planets (or even stars and galaxies) will get ripped apart long before observers can evolve.

So it seems such a parameter needs to be "tuned" into a relatively narrow range (looking at orders of magnitude here) to get a universe which is still habitable but interestingly different from the one we see. But then if there were such an interesting parameter, presumably the careful "tuning" would be noticed, and used by theists as the basis of a design argument! But it can't be the case that both the presence of this random-acceleration phenomenon and its absence are evidence of design, so something has gone wrong here.

If you want a real-world example, think about radioactivity: atoms randomly falling apart for no apparent reason looks awfully like objects suddenly accelerating in random directions for no reason; it's just the scale that's very different. Further, if you imagine increasing the strength of the weak nuclear force, you'll discover that life as we know it becomes impossible... whereas, as far as I know, if there were no weak force at all, life would still be perfectly possible (stars would still shine, because that's the strong force; chemical reactions would still work; gravity would still exist; and so on). Maybe the Earth would cool down faster, or something along those lines, but it doesn't seem a major barrier to life. However, the fact that the weak force is "just in the right range" has indeed been used as a "fine-tuning" argument!

Dark energy (or a "cosmological constant") is another great example, perhaps even closer to what you describe. There is this mysterious unknown force making all galaxies accelerate away from each other, when gravity should be slowing them down. If the dark energy were many orders of magnitude bigger, then stars and galaxies couldn't form in the first place (no life), but if it were orders of magnitude smaller (or zero), life and observers would get along fine. By plotting on the right scale (e.g. compared to a Planck scale), the dark energy can be made to look suspiciously small and "fine-tuned", and this is the basis of a design argument.

Do you see the pattern here?

Replies from: CCC
comment by CCC · 2013-05-23T17:30:38.946Z · LW(p) · GW(p)

You raise a good point, and I do indeed see the pattern that you are claiming. I personally suspect that radioactivity and dark energy will both turn out to be inextricably linked to the other rules of the universe; I understand that that is already the case for the weak force, which is apparently a different aspect of electromagnetism (which is exceedingly important for our universe).

comment by Bugmaster · 2013-05-21T09:43:15.758Z · LW(p) · GW(p)

Yes. I do not claim that the existence of an orderly universe is undeniable proof of the existence of God; I simply claim that it is evidence which suggests that the universe is planned, and therefore that there is (or was) a Planner.

Wait, isn't the Planner basically God, or at least some kind of a god ?

However, in a biased, unfair lottery, in which the result is predetermined by an intelligent agent, the sort of patterns that might appeal to an intelligent agent (e.g. 1, 2, 3, 4, 5, 6) are more likely to turn up.

That would be an interesting test to run, actually, regardless of theism or lack thereof: are sequential numbers more likely (or perhaps less likely) than chance in our current American lottery ? If so, it would be pretty decent evidence that the lottery is rigged (not surprising, since it was in fact designed by intelligent agents, namely us humans).

So P(bias|(1, 2, 3, 4, 5, 6)) > P(bias|(35, 3, 19, 45, 15, 8)).

That depends on the value of P(Agent prefers sequential numbers|Agent is intelligent).

In any case, are sequential numbers more likely to turn up in sequences that are not directly controlled by humans, f.ex. rolls of reasonably fair dice ?

Replies from: CCC
comment by CCC · 2013-05-22T07:47:30.323Z · LW(p) · GW(p)

Wait, isn't the Planner basically God, or at least some kind of a god ?

Yes. That was my point.

That would be an interesting test to run, actually, regardless of theism or lack thereof: are sequential numbers more likely (or perhaps less likely) than chance in our current American lottery ? If so, it would be pretty decent evidence that the lottery is rigged (not surprising, since it was in fact designed by intelligent agents, namely us humans).

Hmmm. I'm not sure about the American lottery, but the South African one has 49 numbers, from which 6 are chosen (for the moment, I shall ignore the bonus ball). There are 44 sets of sequential numbers; a set of sequential numbers should be drawn, in sequential order, an average of once in 228 826 080 draws; or drawn in any order (e.g. 6, 3, 4, 2, 5, 1) once every 317 814 draws.

There have been, to date, 1239 draws. These results are available. There is just under a 0.4% chance that at least one of these sets of results would consist of six sequential numbers, in any order. There is a 99.6109% chance that none of the draws consist of six sequential numbers, drawn in any order.

I imported the data above into a spreadsheet, looked at the difference between the highest and the lowest numbers in each draw, and then found the minimum of those differences; it is 10. Therefore, the South African lottery has never had six sequential numbers drawn, in any order. This is the result that I would expect from an unrigged draw.
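For anyone who wants to check the arithmetic, the quoted figures follow from a few lines of calculation. This sketch is mine, not from the comment; only the 49-ball, 6-number, 44-sequential-set and 1239-draw figures are taken from the text above.

```python
from math import comb, perm

# Re-deriving the South African lottery figures quoted above.
ordered = perm(49, 6)         # ordered ways to draw 6 of 49 balls: 10 068 347 520
unordered = comb(49, 6)       # unordered draws: 13 983 816
sequential_sets = 44          # 1-6, 2-7, ..., 44-49

print(ordered // sequential_sets)    # 228 826 080 draws per sequential set drawn in order
print(unordered // sequential_sets)  # 317 814 draws per sequential set drawn in any order

p_none = (1 - sequential_sets / unordered) ** 1239
print(1 - p_none, p_none)            # ~0.0039 and ~0.9961, matching the 0.4% / 99.61% figures
```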

So P(bias|(1, 2, 3, 4, 5, 6)) > P(bias|(35, 3, 19, 45, 15, 8)).

That depends on the value of P(Agent prefers sequential numbers|Agent is intelligent).

Surely it depends more directly on the value of P(Agent is intelligent|Agent prefers sequential numbers)? To convert between those requires Bayes' Theorem, which depends on finding a good approximation for P(Agent is intelligent), which is going to be a whole debate on its own.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-22T18:22:39.737Z · LW(p) · GW(p)

I think I may have misread your previous statement then:

I do not claim that the existence of an orderly universe is undeniable proof of the existence of God; I simply claim that it is evidence which suggests that the universe is planned, and therefore that there is (or was) a Planner.

But since you agreed that the Planner is basically God, I read that sentence as saying,

I do not claim that the existence of an orderly universe is undeniable proof of the existence of God; ... it is evidence which suggests that the universe was planned by a God.

Is the only difference between the two statements the "undeniable" part ? If so, then I get it.

Surely it depends more directly on the value of P(Agent is intelligent|Agent prefers sequential numbers)?

My point was that it's possible that any intelligent agent who developed via some form of evolution would be more likely to prefer sequential numbers, merely as an artifact of its development. I'm not sure how likely this is, however.

Replies from: CCC
comment by CCC · 2013-05-23T07:46:54.765Z · LW(p) · GW(p)

Is the only difference between the two statements the "undeniable" part ? If so, then I get it.

Yes. That is correct. I see the orderly universe as evidence of God, but not as undeniable proof thereof.

My point was that it's possible that any intelligent agent who developed via some form of evolution would be more likely to prefer sequential numbers, merely as an artifact of its development. I'm not sure how likely this is, however.

...hmmm. It is possible. I'm not sure how that can be measured, or what difference to my point it would make if true, though.

comment by Richard_Kennaway · 2013-05-17T20:27:00.673Z · LW(p) · GW(p)

May I ask what you expected evidence against the existence of God to have looked like?

That is entirely the right question to ask. And the answer is, I don't have the faintest idea.

Richard Dawkins does. The universe we see (he says somewhere; this is not a quote) is exactly what a world without God would look like: a world in which, on the whole, to live is to suffer and die for no reason but the pitiless working out of cause and effect, out of which emerged the blind, idiot god of evolution. A billion years of cruelty so vast that mountain ranges are made of the dead. A world beyond the reach of God.

Replies from: Bugmaster, CCC, MugaSofer
comment by Bugmaster · 2013-05-18T00:27:15.386Z · LW(p) · GW(p)

To be fair, this type of argument only eliminates benevolent and powerful gods. It does not screen out actively malicious gods, indifferent gods, or gods who are powerless to do much of anything.

comment by CCC · 2013-05-18T08:00:17.035Z · LW(p) · GW(p)

I don't see what's so bad about mountain ranges being made of dead bodies. The creatures that once used those bodies aren't using them anymore - those mere atoms might as well get recycled to new uses. The problem of death is countered by the solution of the afterlife; an omniscient God would know exactly what the afterlife is like, and an omniscient benevolent God could allow death if the afterlife is a good place. (I don't have any proof of the existence of the afterlife at hand, unfortunately).

Suffering, now; suffering is a harder problem to deal with. Which leads around to the question - what is the purpose of the universe? If suffering exists, and God exists, then suffering must have been put into the universe on purpose. For what purpose? A difficult and tricky question.

What I suspect, is that suffering is there for its long-term effects on the human psyche. People exposed to suffering often learn a lot from it, about how to handle emotions; people can form long-term bonds of friendship over a shared suffering, can learn wisdom by dealing with suffering. Yes, some people can shortcut the process, figuring out the lessons without undergoing the lesson; but many people can't.

Replies from: Richard_Kennaway, TheOtherDave
comment by Richard_Kennaway · 2013-05-18T08:54:28.827Z · LW(p) · GW(p)

Suffering, now; suffering is a harder problem to deal with. Which leads around to the question - what is the purpose of the universe? If suffering exists, and God exists, then suffering must have been put into the universe on purpose. For what purpose? A difficult and tricky question.

What I suspect, is that suffering is there for

This is using your brain as an outcome pump. Start with a conclusion to be defended, observations that prima facie blow it out of the water, and generate ideas for holding onto the conclusion regardless. You can do it with anything, and it's an interesting exercise in creative thinking to come up with a defence of propositions such as that the earth is flat, that war is good for humanity, or that you're Jesus. (Also known as retconning.) But it is not a way of arriving at the truth of anything.

What your outcome pump has come up with is:

What I suspect, is that suffering is there for its long-term effects on the human psyche.

War really is good for humanity! But what then is the optimal amount of suffering? Just the amount we see? More? Less?

I expect that the answer is that the omniscience and omnibenevolence of God imply that what we see is indeed just the right amount. God is perfect, therefore this is the best of all possible worlds. But that would just be more outcome-pumping. No new data or reasoning is entering the argument: the idea that God has got it just right has been generated by the desired conclusion.

At some point one has to ask, where did that conclusion come from? Why do I believe it so intensely as to make all of the retconning seem sensible? Why indeed? Because earlier you expressed only a lukewarm belief:

I found, through my life, very little evidence against the existence of God, and some slight evidence for the existence of God.

Replies from: Eugine_Nier, CCC
comment by Eugine_Nier · 2013-05-18T20:55:38.419Z · LW(p) · GW(p)

This is using your brain as an outcome pump. Start with a conclusion to be defended, observations that prima facie blow it out of the water, and generate ideas for holding onto the conclusion regardless. You can do it with anything, and it's an interesting exercise in creative thinking to come up with a defence of propositions such as that the earth is flat, that war is good for humanity, or that you're Jesus. (Also known as retconning.) But it is not a way of arriving at the truth of anything.

I don't see how this is any different from what Richard Dawkins is doing with his claim.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-05-19T13:11:44.451Z · LW(p) · GW(p)

I don't see how this is any different from what Richard Dawkins is doing with his claim.

You mean, Dawkins has latched onto atheism for irrational reasons and is generating whatever argument will sustain it, without regard to the evidence?

For anyone who has taken on the mantle of professional atheist, as Dawkins has, there is a danger of falling into that mode of argument. Do you have any reason to think he has in fact fallen?

Replies from: Kawoomba, Eugine_Nier
comment by Kawoomba · 2013-05-21T06:22:44.383Z · LW(p) · GW(p)

Have you ever heard a clever or interesting argument from the other side - No!

YouTube source (44s)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-05-23T11:26:50.378Z · LW(p) · GW(p)

I am itching to downvote Dawkins for that.

comment by Eugine_Nier · 2013-05-21T05:51:33.252Z · LW(p) · GW(p)

Dawkins's "the world looks like we would expect it to look like if there were no God argument" strikes me as a case of this. Notice how religious people claim to see evidence of God's work all around them.

Replies from: Richard_Kennaway, ArisKatsaris, Omid
comment by Richard_Kennaway · 2013-05-21T08:27:40.834Z · LW(p) · GW(p)

Dawkins's "the world looks like we would expect it to look like if there were no God argument" strikes me as a case of this.

Dawkins has a case for drawing that conclusion. He is not merely pointing at the world and saying "Look! No God!" I have not actually read him beyond soundbites, merely know his reputation, so I can't list all the arguments he makes, but one of them, I know, is the problem of evil. The vast quantity of suffering in the world is absolutely what you would expect if there is no benevolent deity overseeing the show, and is not what you would expect if there were one. (It could be what you would expect if there were an evil deity in charge, but Dawkins is arguing with the great faiths, none of which countenance any such being except in at most a subordinate role.)

Theists, on the other hand, must work hard to reconcile suffering with omnibenevolence, and what they work hard at is not the collecting of evidence, but the erection of an argumentative structure with the bottom line written in advance. For example, "suffering is good for the soul", or "suffering is punishment for past sins", or "man is inherently depraved and corrupt, and suffering is the inevitable consequence of his fallen state", or just "God works in mysterious ways".

Extraordinary claims require extraordinary evidence, so to someone for whom "There is no God" is a sufficiently extraordinary claim, the existence of suffering may be insufficiently extraordinary evidence. But then one must ask, according to the principle of Follow-the-Improbability, where did that extraordinariness come from? What evidence originally led from ignorance of God (for we are all born ignorant) to such certainty that the Problem Of Evil becomes the problem of reconciling Evil with God, not the problem of whether that God really exists?

Notice how religious people claim to see evidence of God's work all around them.

If they're just pointing at things and saying "Look! God's work!", then that would be an example of the fallacy in the quote you linked. More often, though, they're making the argument from design, pointing at specific things in the world that looked designed, and concluding the existence of a designer. This is not a stupid argument, but in the end it didn't work. Historically, natural selection wasn't invented by atheists striving to explain away apparent design: Darwin was driven from his theism by the mechanism that he found.

comment by ArisKatsaris · 2013-05-21T06:34:57.033Z · LW(p) · GW(p)

I don't see how so.

I can imagine lots of ways in which the world would be different if a superpowerful superbeing was around with the ability and will to shape reality for whatever purpose -- but when I imagine the superbeing's absence it looks like the world around us.

When I try to ask the theists what the world would have looked like without God, I don't get very convincing answers.

Replies from: MugaSofer, Eugine_Nier
comment by MugaSofer · 2013-05-24T10:45:23.215Z · LW(p) · GW(p)

The trouble with theists considering a "world without God" is they generally think God created the world, so without him there wouldn't be a world at all. Obviously, this is not what we observe.

On the other hand, attempting to point at things which clearly couldn't exist without a Creator generally falls into the category of "god of the gaps", both in terms of the criticisms it levies and, alarmingly often, in terms of already-understood science.

Perhaps a world of Boltzmann Brains? But then, I've never really seen the logic behind "if there was no God, everything would just be random" - where would this randomness come from, anyway?

On the other hand, a world without any life at all, or at least intelligent life, could be argued - after all, most of the universe is lifeless as it is, and probably always will be. But then we run into all sorts of awkward anthropic issues where nobody's quite sure how to reason about probabilities anymore. Still, if God leads to intelligent agents with high probability, then our very existence seems to count as evidence for Him - even if we're reasoning a priori from "I think therefore I am."

But let's assume life exists, which it does, so that's a fairly solid assumption. God is good, right? Clearly a torture-world would be proof of his nonexistence, as what sort of omnibenevolent superbeing would tolerate it? But then there is disagreement on how much pain would prove the nonexistence of God. Some say a sufficiently superintelligent God should be able to arrange for no pain at all without sacrificing what we value. Others claim that morality actually requires unfathomably vast numbers of people's horrific suffering because of Justice or somesuch.

And, of course, you get the people who claim that the world they observe fits exactly with what they deduce a priori about a world without God. On the other hand, these people never seem to make original predictions, which leads me to believe that their deductions are actually incorporating things science has already told them about this world instead of the logical consequences of their priors. (The same goes for believers who claim this is exactly what they would expect a world with God to look like if they found one.)

So ... yeah, I have no idea why I wrote this long, rambling comment.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-24T17:57:30.177Z · LW(p) · GW(p)

I agreed with that, although it seems

Well, the natural theology seems to suffer from the problem of arbitrary, easy-to-vary hypotheses. One could, as an alternative, engage in reflection on which hypotheses are non-arbitrary and hard to vary (otherwise known as, whisper it: metaphysics).

comment by Eugine_Nier · 2013-05-22T06:34:19.044Z · LW(p) · GW(p)

I can imagine lots of ways in which the world would be different if a superpowerful superbeing was around with the ability and will to shape reality for whatever purpose

Looking at your examples, they all seem to boil down to "things that violate this-world!ArisKatsaris's intuitions about how the world works". If you lived in a world where any of the things you described in your comment occurred you wouldn't be impressed by them. To adapt the post I linked to: If you demand miracles, miracles won't convince you.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-05-22T11:42:36.828Z · LW(p) · GW(p)

If you lived in a world where any of the things you described in your comment occurred you wouldn't be impressed by them.

What does being "impressed" have to do with anything? I'm talking about believing in someone's existence.

I don't deny the existence of the Pope. I don't deny the existence of the American President. I'm not impressed by either, but I don't deny them. I don't deny the past existence of dinosaurs. I don't even deny the existence of King David and Agamemnon as historical figures. I make fun of the people who deny the existence of historical Jesus (or Socrates or Mohammed). So why would I deny the existence of God, if I saw a world that looked to me like it had more evidence for his existence than his non-existence?

You are assuming that I started looking at this from a non-believer's perspective, but it's what made me an unbeliever. Back when I was at school I started by just disbelieving in the Genesis story, because the world looked the way it would look if evolution were true -- a God throwing around dinosaur bones to prank us was even more incompatible with Christianity than "look, it's not meant as a literal story". Then, step by step, more and more things claimed by Christianity just didn't seem to fit the world around me. Not the omnibenevolence and omnipotence of God, not the nature of the soul (why does the mind depend so much on the biochemistry of the brain?). By my college years, only some unanswered questions about the mystery of consciousness or existence could even be used as a hole to fit a relevant God into.

From "Christian" in my childhood to "Christian mostly but I don't accept everything that religion says" in highschool, to "agnostic" in college, to "agnostic-leaning-atheist" in my post-college years, and finally having the guts to just say "atheist".

I didn't start from a position of disbelief which I found ways to maintain -- I started from a position of belief which could simply no longer be honestly maintained in the face of the evidence.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-05-24T05:18:29.048Z · LW(p) · GW(p)

What does being "impressed" have to do with anything? I'm talking about believing in someone's existence.

It has to do with computing P(our universe|God exists).

comment by Omid · 2013-05-21T06:32:44.216Z · LW(p) · GW(p)

Notice how religious people claim to see evidence of God's work all around them.

But they can only see it after the fact. I am not aware of any case in which a theist said "If God exists, we would expect to see X. Now we haven't seen X yet, but God exists so we probably will observe X some time in the near future." And then we observed X.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-21T07:02:51.768Z · LW(p) · GW(p)

Religious people do this all the time; they call it "fulfilling prophecies". Atheists usually discover that such prophecies are hopelessly vague, but theists disagree; they believe the prophecies to be quite specific; or, at least, specific enough for their purposes, given the fact that their God obviously exists.

comment by CCC · 2013-05-19T06:54:51.571Z · LW(p) · GW(p)

This is using your brain as an outcome pump. Start with a conclusion to be defended, observations that prima facie blow it out of the water, and generate ideas for holding onto the conclusion regardless.

That may be what I am doing. But sometimes, there are things that really are different to what the prima facie evidence seems to suggest. Heat is not an effect of the transfer of a fluid called caloric; the Sun does not go round the Earth; the Sun is bigger than the Earth. Sometimes, there are hidden complexities that change the meaning of some of the evidence.

War really is good for humanity! But what then is the optimal amount of suffering?

Ah, an excellent question. I can't be sure, but I expect that the optimal amount of suffering is a good deal less than we see.

This leads to the obvious question: why would a benevolent, omniscient, omnipotent God create a universe with more suffering than is necessary? This requires that there be something that is more important than reducing suffering, such that the increased suffering optimises better for this other something. I do think that this something exists, and I think that it is free will. Free will implies the freedom to cause unnecessary suffering in others; and some people do this. War, for example, is a direct consequence of the free will of military leaders and politicians.

At some point one has to ask, where did that conclusion come from? Why do I believe it so intensely as to make all of the retconning seem sensible? Why indeed? Because earlier you expressed only a lukewarm belief:

I found, through my life, very little evidence against the existence of God, and some slight evidence for the existence of God.

I don't see that as necessarily a statement of lukewarm belief. I just didn't couch it in impressive-sounding terms.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-19T07:24:45.334Z · LW(p) · GW(p)

What about suffering which is not caused by humans ? For example, consider earthquakes, floods, volcano eruptions, asteroid impacts, plague outbreaks, and the like. To use a lighter example, do we really need as many cases of the common cold as we are currently experiencing all over the world ?

The common answer to this question is something along the lines of "God moves in mysterious ways" -- which does make sense once you posit such a God -- but you said that "the optimal amount of suffering is a good deal less than we see", so perhaps you have a different answer ?

Replies from: CCC
comment by CCC · 2013-05-19T07:51:20.394Z · LW(p) · GW(p)

I think that suffering that is limited only to what humans cannot prevent would be the optimal amount. This is because it is the amount that would exist in the optimal universe, i.e. where each individual human strives to be maximally good.

As for cases of the common cold, a lot of those are preventable, given proper medical research and distribution of medicines. Since they are preventable, I think that they should be prevented.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-19T08:05:11.368Z · LW(p) · GW(p)

Well, technically, volcano eruptions and such can be prevented as well, given a sufficient level of technology. But let's stick with the common cold as the example -- why does it even exist at all ? If the humans could eventually prevent it, thus reducing the amount of suffering, then the current amount of suffering is suboptimal. When you said that "the optimal amount of suffering is a good deal less than we see", I assumed that you were talking about the unavoidable amount of suffering caused by humans exercising their free will. The common cold, however, is not anthropogenic.

Replies from: CCC
comment by CCC · 2013-05-20T08:24:28.543Z · LW(p) · GW(p)

...that is a very good question. The best idea that I can come up with is that the optimal amount of suffering is time-dependent in some way. That is: if the purpose of suffering is to try to improve people towards some ideal, then a society that produces people who are closer to that ideal to start with would require less suffering. And a society in which the cure for the common cold can be found, and can then be distributed to everyone, is closer to that ideal society than a society in which that is not the case.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-20T22:15:35.090Z · LW(p) · GW(p)

That kind of makes sense. Of course, the standard objection to your answer is something like the following: "This seems like a rather inefficient way to design the ideal society. If I was building intelligent agents from scratch, and I wanted them to conform to some ideal; then I'd just build them to do that from the start, instead of messing around with tsunamis and common colds".

Replies from: CCC
comment by CCC · 2013-05-21T07:48:57.212Z · LW(p) · GW(p)

It does seem inefficient. This would appear to imply that the universe is optimised according to multiple criteria, weighted in an unknown manner; presumably one of those other criteria is important enough to eliminate that solution.

It's pretty clear that the universe was not built to produce a quick output. It took several billion years of runtime just to produce a society at all - it's a short step from there to the conclusion that there's some thing or things in the far future (possibly another mere billion years away), that we probably don't even have the language to describe yet, that are also a part of the purpose of the universe.

Replies from: Richard_Kennaway, Bugmaster
comment by Richard_Kennaway · 2013-05-21T13:58:25.858Z · LW(p) · GW(p)

It's pretty clear that the universe was not built to produce a quick output. It took several billion years of runtime just to produce a society at all - it's a short step from there to the conclusion that there's some thing or things in the far future (possibly another mere billion years away), that we probably don't even have the language to describe yet, that are also a part of the purpose of the universe.

This suggests a new heresy to me: God, creator of the universe, exists, but we, far from being the pinnacle of His creation, are merely an irrelevant by-product of His grand design. We do not merit so much as an eye-blink from Him in the vasty aeons, and had better hope not to receive even that much attention. When He throws galaxies at each other, what becomes of whatever intelligent life may have populated them?

The quotidian implications of this are not greatly different from atheism. We're on our own, it's up to us to make the best of it.

Replies from: CCC
comment by CCC · 2013-05-22T08:09:01.122Z · LW(p) · GW(p)

That's a very interesting thought. Personally, I don't think that we're a completely irrelevant by-product (for various reasons), but I see nothing against the hypothesis that we're more of a pleasant side-effect than the actual pinnacle of creation. The actual pinnacle of creation might very well be something that will be created by a Friendly AI - or even by an Unfriendly AI - vast aeons in the future.

When He throws galaxies at each other, what becomes of whatever intelligent life may have populated them?

Given the length of time it takes for galaxies to collide, I'd guess that the intelligent life probably develops a technological civilisation, recognises their danger, and still has a few million years to take steps to protect themselves. Evacuation is probably a feasible strategy, though probably not the best strategy, in that sort of timeframe.

comment by Bugmaster · 2013-05-21T09:44:34.862Z · LW(p) · GW(p)

I agree that this is a reasonable conclusion to make once you assume the existence of a certain kind of deity.

comment by TheOtherDave · 2013-05-18T18:04:22.685Z · LW(p) · GW(p)

What makes suffering any harder a problem than death? Surely the same strategy works equally well in both cases.

More precisely... the "solution of the afterlife" is to posit an imperceptible condition that makes the apparent bad thing not so bad after all, despite the evidence we can observe. On that account, sure, it seems like we die, but really (we posit) only our bodies die and there's this other non-body thing, the soul, which is what really matters which isn't affected by that.

Applied to suffering, the same solution is something like "sure, it seems like we suffer, but really only our minds suffer and there's this other non-mind thing, the soul, which is what really matters and which isn't affected by that."

Personally, I find both of these solutions unconvincing to the point of inanity, but if the former is compelling, I see no reason to not consider the latter equally so. If my soul is unaffected by death, surely it is equally unaffected by (e.g.) a broken arm?

Replies from: CCC
comment by CCC · 2013-05-19T07:20:02.117Z · LW(p) · GW(p)

If my soul is unaffected by death, surely it is equally unaffected by (e.g.) a broken arm?

I don't think that the soul is entirely unaffected by death. I just think that it continues to exist afterwards. Death can still be a fairly traumatic experience, depending on how one dies; there's a difference between dying quietly in my sleep, and dying screaming and terrified.

This, in effect, reduces the problem of death to the problem of suffering; it may be unpleasant, but afterwards there's still a 'me' around to recover.

Of course, there's the question of what goes into a soul; what it is that the soul consists of, and retains. I'm not sure; but I imagine that it includes some elements of personality, and probably some parts of memory. Since personality and memory can be affected by e.g. a broken arm, I therefore conclude that the soul can be affected by e.g. a broken arm.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-19T17:13:29.166Z · LW(p) · GW(p)

Absolutely agreed: if I assume that I have a soul and a body, that what happens to my soul is important and what happens to my body is unimportant, and that my soul suffers when I suffer but does not die when I die, then what follows from those assumptions is that suffering is important but dying isn't.

And if I instead assume that I have a soul and a body, that what happens to my soul is important and what happens to my body is unimportant, and that my soul does not suffer when I suffer and does not die when I die, then what follows from those assumptions is that neither suffering nor dying is important.

If assuming the former solves the problem of death, then assuming the latter solves both the problem of death and the problem of suffering.

I understand that you assume the former but not the latter, and therefore consider the problem of death solved but the problem of suffering open.

What I'm asking you is: why not make different assumptions, and thereby solve both?

I mean, if you were deriving the specific properties of the soul from your observations, and your observations were consistent with the first theory but not the second, that would make sense to me... but as far as I've understood you aren't doing that, so what makes one set of assumptions preferable to another?

Replies from: CCC
comment by CCC · 2013-05-20T07:56:05.247Z · LW(p) · GW(p)

What I'm asking you is: why not make different assumptions, and thereby solve both?

This comes down to the question of, what is it that makes a soul? What is it that survives after death? For this, I will have to go to specifics, and start using a quote from the Bible:

31 “When the Son of Man comes as King and all the angels with him, he will sit on his royal throne, 32 and the people of all the nations will be gathered before him. Then he will divide them into two groups, just as a shepherd separates the sheep from the goats. 33 He will put the righteous people at his right and the others at his left. 34 Then the King will say to the people on his right, ‘Come, you that are blessed by my Father! Come and possess the kingdom which has been prepared for you ever since the creation of the world. 35 I was hungry and you fed me, thirsty and you gave me a drink; I was a stranger and you received me in your homes, 36 naked and you clothed me; I was sick and you took care of me, in prison and you visited me.’ 37 The righteous will then answer him, ‘When, Lord, did we ever see you hungry and feed you, or thirsty and give you a drink? 38 When did we ever see you a stranger and welcome you in our homes, or naked and clothe you? 39 When did we ever see you sick or in prison, and visit you?’ 40 The King will reply, ‘I tell you, whenever you did this for one of the least important of these followers of mine, you did it for me!’

41 “Then he will say to those on his left, ‘Away from me, you that are under God's curse! Away to the eternal fire which has been prepared for the Devil and his angels! 42 I was hungry but you would not feed me, thirsty but you would not give me a drink; 43 I was a stranger but you would not welcome me in your homes, naked but you would not clothe me; I was sick and in prison but you would not take care of me.’ 44 Then they will answer him, ‘When, Lord, did we ever see you hungry or thirsty or a stranger or naked or sick or in prison, and we would not help you?’ 45 The King will reply, ‘I tell you, whenever you refused to help one of these least important ones, you refused to help me.’ 46 These, then, will be sent off to eternal punishment, but the righteous will go to eternal life.”

(the numbers are verse numbers)

So. Here we have a list of certain criteria that souls can hold. A soul can be responsible for feeding the hungry; giving drink to the thirsty; welcoming and sheltering the homeless; clothing the naked; taking care of prisoners, and of the sick. In short, charitable works.

Now, there are people who experience some great loss (such as the death of an only child) and then, as a result, change their lives and begin to do a lot of charity work, often in some way related to the original source of their suffering.

Therefore, we have a change in behaviour, in a way that can be related to the soul, in people who have suffered. Therefore, suffering can have an observable effect on the soul.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-20T13:34:18.445Z · LW(p) · GW(p)

I see. OK, thanks for answering the question.

comment by MugaSofer · 2013-05-23T13:56:23.918Z · LW(p) · GW(p)

You know, like CCC, I'm not sure what I would expect a world truly beyond the reach of God to look like - but I really doubt it would look like reality, even if God does not exist. I lack both the knowledge and, I suspect, the capacity to deduce arbitrary features of reality a priori. If our world is exactly what Dawkins would expect from a world without God, why isn't he able to deduce features that haven't been corroborated yet and make original discoveries based on this knowledge?

(On the other hand, I note that Dawkins also endorses the theory that our physical laws are a result of natural selection among black holes, does he not? So that could be a prediction, I guess, since it "explains" our laws of physics and so on.)

Replies from: TheOtherDave, Richard_Kennaway
comment by TheOtherDave · 2013-05-23T15:42:34.385Z · LW(p) · GW(p)

why isn't he able to deduce features that haven't been corroborated yet and make original discoveries based on this knowledge?

Just so I'm clear: if I observe an aspect of my environment which the prevailing religious establishment in my community explains the existence of by positing that God took certain actions, and I'm not confident God in fact took those actions (perhaps because I've seen no evidence to differentially support the hypothesis that He did so) so I look for an alternative explanation, and I find evidence differentially supporting a hypothesis that does not require the existence of God at all, and as a consequence of that I am able to make certain predictions about the world which turn out to be corroborated by later observations, what am I entitled (on your account) to infer from that sequence of events?

Replies from: MugaSofer
comment by MugaSofer · 2013-05-23T16:02:46.066Z · LW(p) · GW(p)

That the prevailing religious establishment was wrong, somehow. In what way they were wrong depends on the details.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-23T16:13:50.841Z · LW(p) · GW(p)

OK, thanks for clarifying.

comment by Richard_Kennaway · 2013-05-23T14:14:19.251Z · LW(p) · GW(p)

If our world is exactly what Dawkins would expect from a world without God, why isn't he able to deduce features that haven't been corroborated yet and make original discoveries based on this knowledge?

Because all of the deductions one can get from it have already been made, and amply confirmed. The basic idea that nature can be understood, if we look carefully enough and avoid resorting to the supernatural, has been enormously successful over the last few centuries. Awe at the mystery of God has not.

Even when a scientist is motivated by a religious urge to understand God's creation, he leaves ideas of divine intervention behind when he walks into the laboratory.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-23T14:29:39.772Z · LW(p) · GW(p)

Because all of the deductions one can get from it have already been made, and amply confirmed.

Funny how they were all made before anyone suggested they were deducible from atheism.

The basic idea that nature can be understood, if we look carefully enough and avoid resorting to the supernatural

... was originally predicted as a result of a rational Creator, not the lack of one. Arguably it was the wrong deduction given the premise, but still.

Let me repeat myself.

If a hypothesis actually gave enough information to deduce our current model of the universe plus or minus how uncertain we are about it, what are the odds it wouldn't reveal more?

If an atheist from any period up to the present could have gained information not already discovered (but that we now know, of course), why does this effect mysteriously vanish when we move from a hypothetical past atheist to actual current atheists living in the modern world?

This reminds me of people who claim that they rationally evaluated everything they grew up being taught, and lo and behold they were right about everything already, despite having believed it for arational reasons.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-05-24T12:33:19.599Z · LW(p) · GW(p)

The basic idea that nature can be understood, if we look carefully enough and avoid resorting to the supernatural

... was originally predicted as a result of a rational Creator, not the lack of one. Arguably it was the wrong deduction given the premise, but still.

Other way around, I would think. References? Everyone was a theist back in the days of Roger Bacon; they had to be. So did anyone decide, "God is rational", and then deduce "we can attain all manner of powers if we just investigate how things work"? Or was it a case of discovering the effectiveness of empirical investigation, then deducing the rationality of God -- either from genuine faith or just as a way of avoiding charges of heresy?

If an atheist from any period up to the present could have gained information not already discovered (but that we now know, of course) why does this effect mysteriously vanish when we move from a hypothetical past atheist to actual current atheists living in the modern world?

Because, as I said, it's been done, mined out before open atheism was even a thing. "There is no God" has precious little implication beyond "this is not a benevolent universe and it's up to us to figure everything out and save ourselves." In contrast, "There is a God (of the Christian/Jewish/Muslim type)" leads to the false prediction that the universe is benevolent, rescued by postulating hidden or mysterious benevolence. The theist can take their pick of it being understandable ("the rational works of a rational God") or not ("mysterious ways"), although the former is in some conflict with the postulate of benevolence passing human understanding.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-27T10:49:53.563Z · LW(p) · GW(p)

Damn you, source amnesia! shakes fist

Here's a small piece of corroborating evidence while I try and remember:

‘Men became scientific because they expected Law in Nature, and they expected Law in Nature because they believed in a Legislator. In most modern scientists this belief has died: it will be interesting to see how long their confidence in uniformity survives it. Two significant developments have already appeared—the hypothesis of a lawless sub-nature, and the surrender of the claim that science is true. We may be living nearer than we suppose to the end of the Scientific Age.’

-Lewis, C.S., Miracles: a preliminary study, Collins, London, p. 110, 1947.

Because, as I said, it's been done, mined out before open atheism was even a thing.

It's possible I was generalizing from having people claim to deduce more, um, recent theories. You're right, it doesn't stand or fall on that basis.

comment by Bugmaster · 2013-05-17T18:38:20.629Z · LW(p) · GW(p)

As far as I can tell, most arguments of this kind hinge on that "slight evidence for the existence of God" that you mentioned. Presumably, this is the evidence that overcomes your low prior of God's existence, thus causing you to believe that God is more likely to exist than not.

Since the evidence is anecdotal and difficult (if not impossible) to communicate, this means we can't have any kind of a meaningful debate, but I'm personally ok with that.

Replies from: CCC
comment by CCC · 2013-05-18T07:51:50.779Z · LW(p) · GW(p)

Actually, I gave God's existence a fairly high prior from the start. The slight evidence merely reinforced that.

And yes, we can't really have a meaningful debate over it.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-18T09:19:56.128Z · LW(p) · GW(p)

Why the high prior, out of curiosity ?

Replies from: CCC
comment by CCC · 2013-05-19T06:57:17.065Z · LW(p) · GW(p)

My parents are intelligent and thoughtful people. Anything that they agree is correct, gets a high prior by default. In general, that rule serves me well.

Replies from: Richard_Kennaway, Bugmaster
comment by Richard_Kennaway · 2013-05-19T13:16:35.247Z · LW(p) · GW(p)

There are many other intelligent and thoughtful people who disagree. Why -- epistemically, not historically -- do you place particular weight on your parents' beliefs? How did they come by those beliefs?

Replies from: CCC
comment by CCC · 2013-05-20T08:20:50.425Z · LW(p) · GW(p)

I'm afraid my reasons are mainly historical. My parents were there at a very formative time in my life. The best epistemic reason that I can give is that my father is a very wise and experienced man, whose opinions and knowledge I give a very large weight when setting my priors. There are intelligent and thoughtful people who would disagree on this matter; but I do not know them as well as my father, and I do not weigh their opinions as highly when setting priors.

How did they come by those beliefs?

Ah; for that, we shall have to consider the case of my grandparents, one in particular... it's a long historical chain, and I'm not sure quite where it ends.

comment by Bugmaster · 2013-05-19T07:19:44.347Z · LW(p) · GW(p)

Fair enough, that does make sense.

comment by Eugine_Nier · 2013-05-16T01:57:33.686Z · LW(p) · GW(p)

If you expect there to be strong evidence for something, that means you should already strongly believe it. Whether or not you will find such evidence, or what it is, is not the interesting question. The interesting question is: why do you have that strong belief now? What strong evidence do you already possess that leads you to believe this thing?

The problem here is that there is confusion between two senses of the word 'evidence':

a) any Bayesian evidence

b) evidence that can be easily communicated across an internet forum.
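
For concreteness, here is a minimal sketch of the point in the first paragraph, using nothing beyond the law of total probability (H and E are only illustrative labels for the belief in question and the anticipated evidence; the formal framing is a gloss on the paragraph above rather than part of it):

P(H) = P(H|E)·P(E) + P(H|¬E)·P(¬E)

In other words, the prior is the probability-weighted average of the posteriors you anticipate. If you expect the evidence you will eventually find to push strongly toward H, that expectation is only coherent if P(H) is already high now; whatever licenses the expectation is itself evidence you already possess.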

Replies from: Kawoomba, None
comment by Kawoomba · 2013-05-18T21:05:23.139Z · LW(p) · GW(p)

Easily communicated in a "ceteris paribus, having communicated my evidence across teh internets, if you had the same priors I do, just by you reading my description of the evidence you'd update similarly as I did when perceiving the evidence first hand" sense? Yeah, that would be a tall order.

However, all evidence can at least be broadly categorized / circumscribed.

Consider: "I have strong evidence for my opinion which I do not present, since I cannot easily communicate it over a forum anyways" would be a copout, in that same sentence (119 characters) one could have said "My strong evidence partly consists of a perception of divine influence, when I felt the truth rather than deduced it." (117 letters) - or whatever else may be the case. That would have informed the readers greatly, and appropriately steered the rest of the conversation.

If someone had a P=NP proof / a "sophisticated" (tm) qualia theory, he probably wouldn't fully present it in a comment. However, there is a lot that could be said meaningfully (an abstract, a sketch, concepts drawn upon), which would inform the conversation and move it along constructively.

"What strong evidence do you already posses (sic) that leads you to believe this thing" is a valid question, and generally deserves at least a pointer as an answer, even when a high fidelity reproduction of the evidence qua fora isn't feasible.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-05-18T21:21:32.006Z · LW(p) · GW(p)

Easily communicated in a "ceteris paribus, having communicated my evidence across teh internets, if you had the same priors I do, just by you reading my description of the evidence you'd update similarly as I did when perceiving the evidence first hand" sense? Yeah, that would be a tall order.

Unfortunately, I've seen people around here throw Aumann's agreement theorem in the face of people who refuse to provide their evidence. Come to think of it, I don't believe I've ever seen Aumann's agreement theorem used for any other purpose around here.

comment by [deleted] · 2013-05-16T03:45:47.312Z · LW(p) · GW(p)

Yes, there are two senses. I meant "a". If ibidem has some Bayesian evidence, good for him. If it's not communicable across the internet (perhaps it's divine revelation), that's no problem, because we aren't here to convert each other.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-05-17T02:20:25.539Z · LW(p) · GW(p)

Yes there are two senses. I meant "a".

The thing is, (b) is a common definition on internet forums, so it might not be clear to a newcomer what you meant.

Edit: Also, I suspect ibidem means "b"; most people don't even realize "a" is a thing.

comment by Said Achmiz (SaidAchmiz) · 2013-05-16T03:57:26.423Z · LW(p) · GW(p)

Everyone here is expecting me to provide good arguments. I said from the start that I didn't have any, and hoped you would, but when you guys couldn't help me I said "but there must be some out there."

Wait a minute.

You came here without any good reasons to believe in the truth of religion, and then were surprised when we, a group of (mostly) atheists, told you that we hadn't heard of any good reasons to believe in religion either?

I am honestly curious: what makes you think such good reasons exist? Why must there be some good arguments for religion out there? You, a religious person, have none, and you are (apparently?) still religious despite this.

P.S. For what it's worth, I hope you continue to participate in the discussion here, and I look forward to hearing your thoughts, and how your views have evolved.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-05-17T02:26:58.053Z · LW(p) · GW(p)

I am honestly curious: what makes you think such good reasons exist? Why must there be some good arguments for religion out there? You, a religious person, have none, and you are (apparently?) still religious despite this.

See my distinction here.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-05-17T03:19:59.860Z · LW(p) · GW(p)

Sure, that distinction exists. I gather your point is that it explains why ibidem is religious? That was not mysterious to me. However, what he wanted from us, evidently, was (by definition, it seems to me) the sort of arguments that could be communicated via an internet forum; but he himself had no such arguments. It's not clear to me why he thought such things must exist.

Actually, having written that, I suspect that I'm not entirely grasping what you're getting at by pointing me to that comment. Clarify?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-05-18T20:48:14.237Z · LW(p) · GW(p)

My point is that he feels like he has some (Bayesian) arguments (although he wouldn't phrase it that way) and is trying to figure out how to state them explicitly.

Also, going around saying that beliefs need to be supported by "evidence" tends to result in two failure modes,

1) the person comes away with the impression that "rationality" is a game played by clever arguers intimidating people with their superior arguing and/or rhetorical skill.

2) the person agrees, interprets "evidence" overly narrowly, and becomes a straw Vulcan and/or goes on to spend his time intimidating people with his superior arguing and/or rhetorical skill.

The tendency to dismiss personal experience as statistical flukes and/or hallucinations doesn't help.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-05-18T21:00:09.382Z · LW(p) · GW(p)

Well, the subject of "arguments" for or against the existence of God was first brought up in this thread by ibidem, I believe. I entirely agree that verbal reasoning is not the only or even the main sort of evidence we should examine in this matter, unless you count as "arguments" things like verbal reports or summaries of various other sorts of evidence. It's just that verbal "arguments" are how we communicate our reasons for belief to each other in venues like Less Wrong.

That having been said, it's not clear to me what you think the alternative is to saying that beliefs need to be supported by "evidence". Saying beliefs... don't need to be supported by evidence? But that's... well, false. Of course we do need to make it clear that "evidence" encompasses more than "clever verbal proofs".

Personal experience of supernatural things does tend to be statistical flukes and/or hallucinations, so dismissing it as such seems reasonable as a general policy. Extraordinary claims require etc. If someone's reason for believing in a god entirely boils down to "God appeared to me, told me that he exists, and did some personal miracles for me which I can't demonstrate or verify for you", then they do not, in fact, have a very good reason for holding that belief.

comment by TheOtherDave · 2013-05-15T22:10:47.687Z · LW(p) · GW(p)

I want to know what everyone thinks of my [response] to EY

I think it's confused.

If I were part of a forum that self-identified as Modern Orthodox Jewish, and a Christian came along and said "you should identify yourselves as Jewish and anti-Jesus, not just Jewish, since you reject the divinity of Jesus", that would be confused. While some Orthodox Jews no doubt reject the divinity of Jesus a priori, others simply embrace a religious tradition that, on analysis, turns out to entail the belief that Jesus was not divine.

Similarly, we are a forum that self-identifies as rational and embraces a cognitive style (e.g., one that considers any given set of evidence to entail a specific confidence in any given conclusion, rather than entailing different, equally valid, potentially mutually exclusive levels of confidence in a given conclusion depending on "paradigm") which, on analysis, turns out to entail high confidence in the belief that Jesus was not divine. And that Zeus was not divine. And that Krishna was not divine. And that there is no X such that X was divine.

It is similarly confused to say on that basis that we are a rationality-and-atheism-centric community rather than a rationality-centric community.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-18T11:35:59.742Z · LW(p) · GW(p)

I guess the core of the confusion is treating atheism like an axiom of some kind. Modelling an atheist as someone who just somehow randomly decided that there are no gods, and is not thinking about the correctness of this belief anymore, only about the consequences of this belief. At least this is how I decode the various "atheism is just another religion" statements. As if in our belief graphs, the "atheism" node only has outputs, no inputs.

I am willing to admit that for some atheists it probably is exactly like this. But that is not the only way it can be. And it is probably not very frequent at LW.

The ideas really subversive to theism are reductionism, and the distinction between the map and the territory (specifically that the "mystery" exists only in the map, that it is how an ignorant or a confused mind feels from inside). At first there is nothing suspicious about them, but unless stopped by compartmentalization, they quickly grow to materialism and atheism.

It's not that I a priori deny the existence of spiritual beings or whatever. I am okay with using this label for starters; I just want an explanation of how they interact with ordinary matter, what parts they consist of, how those parts interact with each other, et cetera. I want a model that makes sense. And suddenly, there are no meaningful answers; and the few courageous attempts are obviously wrong. And then I'm like: okay guys, the problem is not that I don't believe you; the problem is that I don't even know what you want me to believe, because obviously you don't know it either. You just want me to repeat your passwords and become a member of your tribe; and to stop reflecting on this whole process. Thanks, but no; I value my sanity more than membership in your tribe (although if I lived a few centuries ago or in some unfortunate country, my self-preservation instinct would probably make me choose otherwise).

comment by Shmi (shminux) · 2013-05-15T21:57:58.203Z · LW(p) · GW(p)

When you write your argument "in favor of religion", consider potential objections that this forum is likely to offer, steelman them, then counter them the best you can, using the language of the forum, then repeat. Basically, try to minimize the odds of a valid (from the forum's point of view) objection not being already addressed in your post. You are not likely to succeed completely, unless you are smarter than the collective intelligence of LW (not even Eliezer is that smart). But it goes a long way toward presenting a good case. The mindset should be "how would DSimon/Desrtopa/TheOtherDave/... likely reply after reading what I write?". Now, this is very hard, much harder than what most people here usually do, which is to present their idea and let others critique it. But if you can do that, you are well on your way to doing the impossible, which is basically what you have to do to convince people here that your arguments in favor of theism have merit.

EDIT: When you think you are done, read Common Sense Atheism for Christians and see if you did your best to address every argument there to the author's (not your own) satisfaction and clearly state the basis for the disagreement where you think no agreement is possible. Asking someone here for feedback on your draft might also be a good idea.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-05-16T04:03:04.562Z · LW(p) · GW(p)

steelman

This terminology would probably be obscure to a newcomer. For ibidem (and any confused others), here's the explanation, on the Less Wrong wiki.

comment by DSimon · 2013-05-15T21:22:33.573Z · LW(p) · GW(p)

(I think your response link is broken, could you fix it? I'm interested in following it.)

Replies from: None
comment by [deleted] · 2013-05-15T21:51:39.324Z · LW(p) · GW(p)

Ha ha sorry, forgot to finish that, I'll put it up.

comment by Estarlio · 2013-05-14T16:04:24.094Z · LW(p) · GW(p)

My general impression has been—trying not to offend anyone—that the thinking here is sometimes pretty rigid.

Of course, that's to be expected for a community that defines itself as rationalist. There are ways of thinking that are more accurate than others, that, to put it inexactly, produce truth. It's not just a "Think however you like and it will produce truth," kind of game.

The obsession that some people have with being open minded and considering all ways of thinking and associated ideas equally is, I suspect, unsustainable for anyone who has even the barest sliver of intellectual honesty. I don't consider it laudable at all. That's not to say they have to be a total arse about it, but I think at best you can hope that they ignore you or lie to you.

Replies from: None
comment by [deleted] · 2013-05-14T19:25:30.215Z · LW(p) · GW(p)

being open minded and considering all ways of thinking and associated ideas equally is, I suspect, unsustainable

Are you saying it's more rational not ever to consider some ways of thinking?

(I'm pretty sure I'm not completely confused about what it means to be a rationalist.)

Replies from: Estarlio, TheOtherDave, Bugmaster, Kawoomba
comment by Estarlio · 2013-05-14T22:58:02.953Z · LW(p) · GW(p)

Are you saying it's more rational not ever to consider some ways of thinking?

Yes. Rationality isn't necessarily about having accurate beliefs. It just tends that way because they seem to be useful. Rationality is about achieving your aims in the most efficient way possible.

Oh, someone may have to look into some ways of thinking, if people who use them start showing signs of being unusually effective at achieving relevant ends in some way. Those people would become super-dominant; it would be obvious that their way of thinking was superior. However, there's no reason it makes sense for any of us to do it at the moment. And if they never show those signs, then it will never be rational to look into them.

It's a massive waste of time and resources for individuals to consider every idea and every way of thinking before making a decision. You're getting closer to death every day. You have to decide which ways of thinking you are going to invest your time in - which ones have the greatest evidence of giving you something you want.

That's the thing for rationalists really, I think - chances of giving you what you want. It's entirely possible that, if you don't want to achieve anything in this world with your life, it may just be a mistake for you personally to pursue rationality very far at all; at the end of the day, you're probably not going to get anything from it if all you really want is to feel justified in believing in God.

comment by TheOtherDave · 2013-05-14T19:33:19.844Z · LW(p) · GW(p)

Are you saying it's more rational not ever to consider some ways of thinking? (I'm pretty sure I'm not completely confused about what it means to be a rationalist.)

What does it mean to be a rationalist?

Replies from: None
comment by [deleted] · 2013-05-14T21:08:07.776Z · LW(p) · GW(p)

I suppose what Estarlio and I are actually referring to (as in "a community that defines itself as rationalist") is "good epistemic hygiene."

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-14T21:13:49.718Z · LW(p) · GW(p)

Given your earlier claims about how the meaning of reliably evaluating evidence depends on your paradigm, I have no confidence that you and I share an understanding of what "good epistemic hygiene" means either, so that doesn't really help me understand what you're saying.

Can you give me some representative concrete examples of good epistemic hygiene, on your account?

Replies from: None
comment by [deleted] · 2013-05-14T21:32:25.621Z · LW(p) · GW(p)

Articles like this one, obviously.

Or carefully evaluating both sides of an issue, for instance. Even if it's not specifically a LW thing, it's considered essential for good judgment in the larger academic community.

Replies from: Viliam_Bur, TheOtherDave
comment by Viliam_Bur · 2013-09-18T11:51:20.465Z · LW(p) · GW(p)

carefully evaluating both sides of an issue

Are we ever allowed to say "okay, we have evaluated this issue thoroughly, and this is our conclusion; let's end this debate for now"? Are we allowed to do it even if some other people disagree with the conclusion? Or do we have to continue the debate forever (of course, unless we reach the one very specific predetermined answer)?

Sometimes we probably should doubt even whether 2+2=4. But not all the time! Not even once a month. Once or twice in a (pre-Singularity) lifetime is probably more than necessary. -- Well, it's very similar for religion.

There are thousands of issues worth thinking about. Why waste limited resources on this specific topic? Why not something useful... such as curing cancer, or even inventing a better mousetrap?

Most of us have evaluated both sides of this issue. Some of us did it for years. We did it. It's done. It's over. -- Of course, unless there is something really new and really unexpected and really convincing... but so far, there isn't anything. Why debate it forever? Just because some other people are obsessed?

Replies from: TheOtherDave, None
comment by TheOtherDave · 2013-09-18T15:34:50.892Z · LW(p) · GW(p)

So, I basically agree with you, but I choose to point out the irony of this as a response to a thread gone quiet for months.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-18T17:40:33.591Z · LW(p) · GW(p)

LOL

I guess instead of the purple boxes of unread comments, we should have two colors for unread new comments and unread old comments. (Or I should learn to look at the dates, but that seems less effective.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-18T18:06:07.206Z · LW(p) · GW(p)

(blinks)
Oh, is THAT what those purple boxes are!?!

*learns a thing*
Replies from: None
comment by [deleted] · 2013-09-18T18:27:42.693Z · LW(p) · GW(p)

Wait, what purple boxes? Am I missing something?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-18T18:46:57.342Z · LW(p) · GW(p)

As I respond to this, your comment is outlined in a wide purple border. When I submit this response, I expect that your comment will no longer be outlined, but my comment will. If I refresh the screen, I expect neither of ours will.

This has been true since I started reading LW again recently, and I have mostly been paying no attention to it, figuring it was some kind of "current selection" indicator that wasn't working very well. But if it's an "unread comment" indicator, then it works a lot better.

Edit - I was close. When I submit, your comment is still purple, and mine isn't. If I refresh once, yours isn't and mine is. If I refresh again, neither is.

Replies from: None
comment by [deleted] · 2013-09-18T22:45:42.344Z · LW(p) · GW(p)

Oh now I see. Both of our comments are purple-boxed. Let's see what happens when I comment and refresh.

comment by [deleted] · 2013-09-28T02:04:50.260Z · LW(p) · GW(p)

I'm not still worrying about it, most of the time. It's interesting to see how all these threads turned out. I'm no longer especially active here, although I still find it a great place. My intention was never to come arguing for religion, as obviously you've made up your minds, but I was a bit disappointed in the reactionary nature of the responses. I have since found the types of arguments I was looking for, however, and I would highly recommend this book—The Devil's Delusion: Atheism and its Scientific Pretensions, by David Berlinski (a secular Jew and mathematician).

But of course there is no place for such impossible questions in most of everyday life. God and religion need to be pondered sometimes, but I'm done for now.

Replies from: Viliam_Bur, Randaly
comment by Viliam_Bur · 2013-09-28T07:06:59.922Z · LW(p) · GW(p)

From the book's website:

Are physicists and biologists willing to believe in anything so long as it is not religious thought? Close enough.

Is there a narrow and oppressive orthodoxy of thought and opinion within the sciences? Close enough.

Does anything in the sciences or in their philosophy justify the claim that religious belief is irrational? Not even ballpark.

I guess there is some tension between "narrow and oppressive orthodoxy of thought and opinion" and "willing to believe in anything"...

Replies from: None
comment by [deleted] · 2013-09-28T20:43:37.214Z · LW(p) · GW(p)

Willing to believe in anything the oppressive orthodoxy ("Science") claims to have proven, I think.

comment by Randaly · 2013-09-28T21:16:47.954Z · LW(p) · GW(p)

I did not find The Devil's Delusion to be persuasive/good at all. Its scientific quality is perhaps best summarized by noting that Berlinski is an opponent of evolution; I also recall that Berlinski spent an enormous amount of time on the (irrelevant) topic of whether some atheists had been evil.

ETA: Actually, now that I think about it, The Devil's Delusion is probably why I tend to ignore or look down on atheists who spend lots of time arguing that God would be evil (e.g. Christopher Hitchens or Sam Harris) - I feel like they're making the same mistake, but on the opposite side.

Replies from: None, hairyfigment
comment by [deleted] · 2013-09-29T14:47:12.850Z · LW(p) · GW(p)

Berlinski's thesis is not that evolution is incorrect or that atheists are evil; rather it is that our modern scientific system has just as many gaping holes in it as does any proper theology. Evolution is not incorrect, but the way it's interpreted to refute God is completely unfounded. Its scientific quality is in fact quite good; do you have any specific corrections or is it just that anything critical of Darwin is surely wrong?

comment by hairyfigment · 2013-09-28T22:10:46.885Z · LW(p) · GW(p)

How so? Someone involved with CFAR allegedly converted to Catholicism due to an argument-from-morality. Also, I know looking at the Biblical order to kill Isaac, and a general call to murder that I wasn't following, helped me to realize I didn't believe in God as such.

Replies from: Randaly
comment by Randaly · 2013-09-28T22:21:59.932Z · LW(p) · GW(p)

This is evidence that arguments-from-morality do persuade people, not that they should.

Replies from: hairyfigment
comment by hairyfigment · 2013-09-28T23:48:45.828Z · LW(p) · GW(p)

My point is that various atheists may wish to convince people who actually exist. Such people may give credence to the traditional argument from morality, or may think they believe claims about God while anticipating the opposite.

comment by TheOtherDave · 2013-05-14T23:30:10.834Z · LW(p) · GW(p)

OK. Thanks for answering my question.

comment by Bugmaster · 2013-05-14T19:32:36.817Z · LW(p) · GW(p)

I'm curious too. Can you give me an example of a particular way of thinking that you considered, yet ended up rejecting? I'm not sure what you mean by "ways of thinking", so that might help.

comment by Kawoomba · 2013-05-14T19:32:25.868Z · LW(p) · GW(p)

OK, I'm ready to entertain new ideas: What's sacred about Mormon underwear?

You're free to answer, or you may notice that not all ideas deserve to be elevated above background noise by undue consideration. Rejecting an Abrahamic God as (is "ludicrous" too harsh?) ... not all that likely helps in demoting a host of associated and dependent beliefs into insignificance.

Replies from: Bugmaster, None
comment by Bugmaster · 2013-05-14T19:40:44.807Z · LW(p) · GW(p)

OK, I'm ready to entertain new ideas: What's sacred about Mormon underwear?

I'm not a Mormon, and I actually don't know that much about their underwear, but this is still rather a silly question. A Mormon might answer that, given that the Mormon god does exist and does care about his followers, the underwear symbolizes the commitment that the follower made to his God. It serves as a physical reminder to the wearer that he must abide by certain rules of conduct, in exchange for divine protection.

Such an answer may make perfect sense in the context of the Mormon religion (as I said, I'm not a Mormon so I don't claim this answer is correct). It may sound silly to you, but that's because you reject the core premise that the Mormon god exists. So, by hearing the answer you haven't really learned anything, and thus your question had very little value.

Replies from: Kawoomba
comment by Kawoomba · 2013-05-14T19:48:49.556Z · LW(p) · GW(p)

Which is the point I was trying to make when talking about that question in the second paragraph. As goes "is there an Abrahamic god", so goes a majority of assorted 'new' ideas that in fact depend on that core premise.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-14T19:55:42.624Z · LW(p) · GW(p)

I didn't get the impression that ibidem was talking about specific tenets of any particular religion when he mentioned "new ideas", but I could be wrong.

Replies from: Kawoomba
comment by Kawoomba · 2013-05-14T20:01:39.258Z · LW(p) · GW(p)

The same applies to many ideas that build upon other concepts being the case. You could probably make an argument that no ideas at all are wholly independent facts in the sense that they do not depend on the truth value of other ideas. Often you can skip dealing with a large swath of ideas simply by rejecting some upstream idea they all rely upon.

Religion, in this case, was a good example. That, and there's always some chance of hearing something interesting about holy underwear.

comment by [deleted] · 2013-05-14T19:46:46.600Z · LW(p) · GW(p)

What's sacred about Mormon underwear?

Only that God makes it sacred. But I'm actually too young to be wearing it myself, so I don't know if I'm qualified to talk. And I think it would be better for me not to get into defending my particular religion.

comment by Nisan · 2013-05-14T15:45:03.540Z · LW(p) · GW(p)

There are threads about theism, etc. in which theists have received positive net karma. It should be possible to learn which features of discourse tend to accrue upvotes on this site.

Replies from: None
comment by [deleted] · 2013-05-14T19:03:24.385Z · LW(p) · GW(p)

Having seen my karma fluctuate hundreds of points in the last 24 hours, I've lost all faith in karma as a general indication.

Replies from: satt
comment by satt · 2013-05-14T22:50:42.525Z · LW(p) · GW(p)

Even if someone's overall karma fluctuates a lot, karma can still be a good indicator of how LW feels about one of their comments if the comment's score is reasonably stable.

comment by Bugmaster · 2013-05-14T01:08:32.340Z · LW(p) · GW(p)

FWIW, I neither upvoted nor downvoted your posts; I think they are typical for a newcomer to the community. However, I must admit that your closing line comes across as being very poorly thought out:

Oh, and I'm a Mormon. And intend to remain that way in the near future.

This makes it sound like your Mormonism is a foregone conclusion, and that you're going to disregard whatever evidence or argumentation comes along, unless it is compatible with Mormonism. That is not a very rational way of thinking. Then again, that's just what your closing statement sounds like, IMO; you probably did not mean it that way.

Replies from: None
comment by [deleted] · 2013-05-14T08:15:32.378Z · LW(p) · GW(p)

your Mormonism is a foregone conclusion

Just as I've been told repeatedly that your atheism is a foregone conclusion.

Replies from: Desrtopa, Bugmaster, Estarlio
comment by Desrtopa · 2013-05-14T12:59:49.941Z · LW(p) · GW(p)

Just as I've been told repeatedly that your atheism is a foregone conclusion.

Can you point to where you've been told that?

What I think most of us would agree on, and what it seems to me that people here have told you, is that they consider atheism to be a settled question, which is not at all the same thing.

Replies from: None
comment by [deleted] · 2013-05-14T13:13:46.469Z · LW(p) · GW(p)

Consider, then, that my Mormonism is a settled question.

Replies from: Eliezer_Yudkowsky, Desrtopa, BerryPick6
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-15T02:12:03.613Z · LW(p) · GW(p)

I regard atheism as a slam-dunk issue, but I wouldn't walk into a Mormon forum and call atheism a settled question. 'Twould be logically rude to them.

Replies from: None
comment by [deleted] · 2013-05-15T14:14:00.340Z · LW(p) · GW(p)

I wouldn't walk into a Mormon forum and call atheism a settled question. 'Twould be logically rude to them.

Is this an atheist forum?

...

That's a serious question. You have been pretty clear about the issue, but users are quick to point out that just because you say something doesn't mean the community believes it.

There are a few theists on this site, but based on last year's survey results it's an awfully small number. However, the fact that this forum is composed mostly of atheists does not mean it's officially an "atheist forum." Is LW a rationality community, or a rationality and atheism community? I don't believe that rationality in general is incompatible with religious belief, but if this community thinks that their particular brand of rationality is, people like me would love to know that.

I think, in fact, that it might help your outside perception to clearly state the site's philosophy when it comes to issues like religion. If you say that you're a rationality community, but are actually an atheist community as well, people accuse you of being an atheist cult under the guise of rationality. If you say up front that you are an atheist community as well as a rationality one, you appear a lot more "legit."

And if you don't like theists like me on this site, then officially declaring the site's atheism would a) deter most of them, and b) give you full justification for rejecting the rest out of hand.

I think that most of your problems with theists would go away if you clarified LW's actual position. If this is an atheist forum, say so from the beginning. (Not just that there are a lot of atheists—that atheism is the "state religion" around here.) If LW is not necessarily atheist, kindly stop saying things that make it seem like it is.

(Ridiculous idea: you could hold a referendum! I'd be very curious to see what the community thinks.)

I know this is all-or-nothing thinking, but the alternative is harmful ambiguity.

Replies from: Eliezer_Yudkowsky, nshepperd, shminux, SaidAchmiz, DSimon
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-15T22:10:34.358Z · LW(p) · GW(p)

It's a forum where taking atheism for granted is widespread, and the 10% of non-atheists have some idea of what the 90% are thinking. Being atheist isn't part of the official charter, but you can make a function call to atheism without being questioned by either the 10% or the 90% because everyone knows where you're coming from. If I was on a 90% Mormon forum which theoretically wasn't about Mormonism but occasionally contained posters making function calls to Mormon theology without further justification, I would not walk in and expect to be able to make atheist function calls without being questioned on it. If I did, I wouldn't be surprised to be downvoted to oblivion if that forum had a downvoting function. This isn't groupthink; it's standard logical courtesy. When you know perfectly well that a supermajority of the people around you believe X, it's not just silly but logically rude to ask them to take Y as a premise without defending it. I would owe this hypothetical 90%-Mormon forum more acknowledgement of their prior beliefs than that.

I regard all of this as common sense.

Replies from: Jayson_Virissimo, DSimon, None
comment by Jayson_Virissimo · 2013-05-15T22:19:13.902Z · LW(p) · GW(p)

As part of said minority, I fully endorse this comment.

comment by DSimon · 2013-05-15T22:12:39.506Z · LW(p) · GW(p)

I like your use of "function calls" as an analogy here, but I don't think it's a good idea; you could just as easily say "use concepts from" without alienating non-programmer readers.

Replies from: None
comment by [deleted] · 2013-05-16T15:09:02.577Z · LW(p) · GW(p)

I understand it now knowing that it's a programming reference (I program), but I wouldn't have recognized it otherwise. Thanks for the clarification.

comment by [deleted] · 2013-05-16T15:20:01.701Z · LW(p) · GW(p)

Since I'm momentarily feeling remarkably empowered about my own life, I'm going to take this chance to officially bow out for a few weeks.

We all knew it was coming—it's the typical reaction for an overwhelmed newbie like me, I know, and I'm always very determined not to give up, but I really think I had better take a break.

My last week has hardly involved anything except LW and related sites, and we all know that having one's mind blown is a very strenuous task. I've learned a lot, and I will definitely be back after four weeks or so.

I've decided I'm not going to let myself be pressured into expressly arguing in favor of religion. I've said several times I'm not interested in that, and that I don't have these supposed strong arguments in favor of religion. If you guys want a good theist, check out William Lane Craig.

When I come back I will, however, explain my own beliefs and why I can't fully accept the LW way of thinking. Please don't misunderstand what I'm saying: I think you guys are right, more so than any group of people I've ever met. But for now I'm going to shelve philosophy and take advantage of my situation. In the next four weeks I'm going to a) learn Lambda Calculus and b) study Arabic intensively.

May the Force be with you 'til we meet again.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-16T16:18:22.187Z · LW(p) · GW(p)

For the record, I once challenged Craig to a Bloggingheads but he refused.

comment by nshepperd · 2013-05-15T15:58:26.601Z · LW(p) · GW(p)

This is not an atheist forum, in much the same way that it is not an a-unicorn-ist forum. Not because we do not hold a consistent position on the existence of unicorns, but because the issue itself is not worth discussing. The data has spoken, and there is no reason to believe in them. Whatever. Let's move on to more important things like anthropics and the meta-ethics of Friendly AI.

comment by Shmi (shminux) · 2013-05-15T15:35:47.917Z · LW(p) · GW(p)

You are fixating on atheism for some reason. Assigning a low probability to any particular religion, and only a marginally higher probability to some supernatural creator still actively shaping the universe, results naturally from rationally considering the issue and evaluating the probabilities. So do many other conclusions. This reminds me of the creationists picking a fight against evolution, whereas they could have picked a fight against Copernicanism, the way flat-earthers do.

Replies from: None
comment by [deleted] · 2013-05-15T16:05:18.216Z · LW(p) · GW(p)

results naturally from rationally considering the issue and evaluating the probabilities.

To clarify: you think LW's brand of rationalism is incompatible with religious belief?

comment by Said Achmiz (SaidAchmiz) · 2013-05-15T22:42:31.563Z · LW(p) · GW(p)

I don't believe that rationality in general is incompatible with religious belief, but if this community thinks that their particular brand of rationality is, people like me would love to know that.

Might we not, instead, disagree with you about rationality in general being compatible with religious belief, rather than asserting that we have some special incompatible brand of rationality?

I think that most of your problems with theists would go away if you clarified LW's actual position.

Do we really have "problems with theists"...?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-18T11:58:51.908Z · LW(p) · GW(p)

Do we really have "problems with theists"...?

I don't. I just consider the debates about theism boring if they don't bring any new information.

comment by DSimon · 2013-05-15T22:21:21.479Z · LW(p) · GW(p)

The comment above from EY is over-broad in calling this an "atheist forum", but I think it still has a good point:

It's logically rude to go to a place where the vast majority of people believe X=34, and you say "No, actually X=87, but I won't accept any discussion on the matter." To act that way is to treat disagreement like a shameful thing, best not brought up in polite company, and that's as clear an example of logical rudeness as I can think of.

Replies from: None
comment by [deleted] · 2013-05-16T13:58:23.409Z · LW(p) · GW(p)

It's logically rude to go to a place where the vast majority of people believe X=34, and you say "No, actually X=87, but I won't accept any discussion on the matter."

You're right, that would be very rude.

I've been happy to take part in extensive discussion on the matter already, and now I'm working on putting a post together. I have no problem with disagreement. I never thought I could avoid disagreement, posting the way I did. But it's also true that I can't hope to win a debate against fifteen of you. And so I didn't come here looking to win any debates.

Replies from: DSimon
comment by DSimon · 2013-05-16T16:18:34.506Z · LW(p) · GW(p)

Sounds fine to me. Consider it this way: whether or not you "win the debate" from the perspective of some outside audience, or from our perspective, isn't important. It's more about whether you feel like you might benefit from the conversation yourself.

comment by Desrtopa · 2013-05-14T13:30:29.167Z · LW(p) · GW(p)

But you haven't shown much willingness so far to discuss your reasons for believing the evidence falls the way you think it does, or ours.

I can understand not wanting to discuss a settled question with people who're too biased to analyze it reasonably, but if you're going to avoid discussing the matter here in the first place, it suggests to me that rather than concluding from your experience with us that we're rigid and closed-minded on the matter, you've taken it as a premise to begin with; otherwise, where's the harm in discussing the evidence?

I consider the matter of religion to be a settled question because I've studied the matter well beyond the point of diminishing returns for interesting evidence or arguments. Are you familiar enough with the evidence that we're prepared to bring to the table that you think you could argue it yourself?

comment by BerryPick6 · 2013-05-14T13:17:46.848Z · LW(p) · GW(p)

Then why bring it up?

Replies from: None
comment by [deleted] · 2013-05-14T13:28:51.980Z · LW(p) · GW(p)

Because it seems to be an important part of the issue. I know, I could have left it off, but it has come up elsewhere and I don't see any need to hide it. If people have a problem with it, that's not my fault. I thought it best to clarify this from the beginning.

Replies from: Desrtopa
comment by Desrtopa · 2013-05-14T13:37:13.661Z · LW(p) · GW(p)

If people have a problem with it, that's not my fault.

It might or it might not be. As a general rule, if two people think that a single issue of fact is a settled question, in different directions, then either they have access to different information, or one or both of them is incorrect.

If the former is the case, then they can share their information, after which either they will agree, or one or both will be incorrect.

If we're incorrect about religion being a settled question, we want to know that, so we can change our minds. If Mormonism is incorrect, do you want to know that?

comment by Bugmaster · 2013-05-14T16:26:04.046Z · LW(p) · GW(p)

Told by someone other than myself, hopefully. While I do not expect to become a theist of any kind in the near future, neither do I intend to remain an atheist. Instead, I intend to hold a set of beliefs that are most likely to be true. If I gain sufficient evidence that the answer is "Jesus" or "Trimurti", then this is what I will believe.

comment by Estarlio · 2013-05-14T16:59:39.049Z · LW(p) · GW(p)

If you want to raise my openness to the possibility of a god-level power, then provide me with evidence of consistent, accurate, specific prophecies made hundreds of years in advance of the events. Or provide me with evidence of multiple strong rationalists who are also religious and claim that their religion is based on assessment of the evidence/available arguments.

My atheism isn't a foregone conclusion. It's simply that no-one's ever seriously challenged it, and at this point I've heard so many bad arguments that people need to come up with evidence before I'm prepared to take them seriously. But you could totally change my mind, if you had the right things.

I suspect what people mean when they say their atheism is a settled question or whatever is that they don't have time to listen to yet another bad argument for theism. That you need some evidence before they're prepared to take you seriously. Which seems quite reasonable.

comment by Vladimir_Nesov · 2013-05-13T20:38:04.303Z · LW(p) · GW(p)

When your comments get downvoted, respond by refraining from making similar comments in the future and/or abandoning the topic (this is a simple heuristic whose implementation doesn't require figuring out the reasons for downvoting). Given the current trend, if that doesn't happen, your future comments will, in a while, start getting banned. (You are currently at minus 128, 17% positive. This reflects the judgment of many users.)

Replies from: None
comment by [deleted] · 2013-05-13T21:01:06.492Z · LW(p) · GW(p)

Excuse me, but I watched my Karma drop a hundred points in three minutes. Look me in the eye and tell me that's the coincidental result of "the judgment of many users." Even if I were a brilliant, manipulative troll, I doubt I could get to -128 without someone deliberately and systematically doing so.

Replies from: Vladimir_Nesov, TimS, JoshuaZ
comment by Vladimir_Nesov · 2013-05-13T21:55:51.548Z · LW(p) · GW(p)

Someone has probably just discovered your work and found it systematically wanting. By "many users" I mean that many of the more recent comments are at minus 2-3 and there are only a few upvotes, so other people don't generally disagree.

comment by TimS · 2013-05-13T22:09:02.297Z · LW(p) · GW(p)

You essentially accused the community of being ashamed of being atheist when you said:

Though people have been reluctant to admit it because I personally think it's unhealthy and reflects poorly on the site.

We aren't ashamed. As Jack said to you in a parallel comment, we generally think the question is a solved problem. We aren't interested in having the same basic conversation over and over again.

Accusing us of being ashamed of the position because we don't throw our atheism in your face makes it hard to interpret the rest of your comments as saying anything beyond repeating the basic apologetics. And we've heard the basic apologetics a million times.

Once the lurkers think you aren't interesting, they'll downvote - and there are WAY more lurkers than commenters. Given that, your karma loss isn't all that surprising.

comment by JoshuaZ · 2013-05-14T00:18:49.746Z · LW(p) · GW(p)

Possible, but given that all your comments are on only a small number of threads and arguing for the same basic points, it is also plausible that someone just went through those threads and downvoted most of your comments while upvoting others. I, for example, got about +20 karma from what, as far as I can tell, is primarily upvotes on my replies to you.

comment by CCC · 2013-05-14T14:22:41.707Z · LW(p) · GW(p)

Welcome.

I'd like to point to myself as a data point; I'm a theist, specifically a Roman Catholic, and I consider myself a rationalist. I know that there's a strong atheistic atmosphere here, but I just thought I should point out that it's not all-inclusive.

comment by Risto_Saarelma · 2013-05-14T18:34:01.636Z · LW(p) · GW(p)

The site culture treats serious adherence to supernatural beliefs associated with a religion as a disease. First it will try to cure you. If that doesn't seem to be working, it will start quarantining you.

Replies from: None, Bugmaster
comment by [deleted] · 2013-05-14T19:11:45.241Z · LW(p) · GW(p)

Thanks for this honest assessment; it seems pretty accurate. (You also didn't make any judgment as to the appropriateness of such a mindset.)

comment by Bugmaster · 2013-05-14T19:25:28.553Z · LW(p) · GW(p)

I think it's a rather uncharitable assessment of the situation, though it's possible some people do feel that way.

Being wrong is not the same thing as being a disease.

Replies from: shminux, Risto_Saarelma
comment by Shmi (shminux) · 2013-05-14T19:59:44.532Z · LW(p) · GW(p)

Actually, the behavior Risto_Saarelma described fits the standard pattern. People who cannot be helped are ignored or rejected. Take any stable community, online or offline, and that's what you see.

For example, if someone comes to, say, the freenode ##physics IRC channel and starts questioning Relativity, they will be shown where their beliefs are mistaken, offered learning resources and have their basic questions answered. If they persist in their folly and keep pushing crackpot ideas, they will be asked to leave or take it to the satellite off-topic channel. If this doesn't help, they get banned.

Again, this pattern appears in every case where a community (or even a living organism) is viable enough to survive.

comment by Risto_Saarelma · 2013-05-14T19:27:51.916Z · LW(p) · GW(p)

Not being a disease. Having one.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-14T19:34:31.619Z · LW(p) · GW(p)

Being wrong is not the same as having a disease, either.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-05-14T19:43:22.867Z · LW(p) · GW(p)

There's the difference between being wrong and being wrong as a member of a social group that derives its identity from being wrong in that particular way. Experience has taught me to expect less from discussion in the latter case.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-14T19:47:49.174Z · LW(p) · GW(p)

Keeping one's identity small is hard, be it "Mormon" or "Rationalist" or "Brunette" or whatever. I don't think we should discourage people from joining the site just because they haven't fully mastered Bayes-Fu (tm) yet.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-05-14T20:16:54.974Z · LW(p) · GW(p)

Keeping one's identity small is hard

Why do you think so? It's usual to express things in terms of one's identity (for example, people often say "I don't believe in God", a property of the person, instead of asserting "There is no God", a statement about the world), but this widespread tradition doesn't necessarily indicate that it's difficult to do otherwise if people systematically tried (in particular, in the form of a cultural tradition, so that conformity would push people to discard their identity).

Replies from: Bugmaster
comment by Bugmaster · 2013-05-14T20:40:22.142Z · LW(p) · GW(p)

We all live within a culture, though. Some of us live in several subcultures at the same time. But unless you are a hermit living in a cave somewhere, escaping that cultural pressure to conform would be very difficult.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-05-15T08:45:48.387Z · LW(p) · GW(p)

A lot of things are culturally normal, but easy to change in yourself, so this alone doesn't help to explain why one would believe that keeping one's identity small would be difficult.

Replies from: Bugmaster
comment by Bugmaster · 2013-05-15T16:57:45.687Z · LW(p) · GW(p)

What are some examples of such things, specifically those things that contribute to a person's identity within a culture? By contrast, a preference for, say, yogurt instead of milk is culturally normal, is probably easy to acquire (or discard), but does not usually contribute to a person's identity.

comment by John_Maxwell (John_Maxwell_IV) · 2013-05-14T08:20:37.264Z · LW(p) · GW(p)

I started responding to you, but then I decided I wanted you to remain religious. For the benefit of others, here's why. (Also note that this guy is Mormon, and as far as I can tell, Mormonism is pretty great as religions go.)

comment by tsarani · 2014-02-13T09:46:00.400Z · LW(p) · GW(p)

Hi! I read some articles here a few years ago, decided they were good, and moved on. I think I am a pretty practical person, and I have some ways of deciding things that are utility based (and some that are not).

I would like to ask the community for some help with a couple of reading recommendations:

  1. how to leave for places (like work) on time and not get distracted;
  2. general time- and direction-sense improvements, should any reliably exist; and
  3. how to maintain composure when your body is making moods happen at you (dealing with becoming hangry).

Thanks very much, and I hope to be optimizing more things soon. It's nice to meet you!

comment by MathiasZaman · 2013-10-31T15:46:38.390Z · LW(p) · GW(p)

Hello everyone,

My name is Mathias. I've been thinking about coming here for quite a while and I finally made the jump. I've been introduced to this website/community through Harry Potter and the Methods of Rationality and I've been quite active on its subreddit (under the alias of Yxoque).

For a while now, I've been self-identifying as an "aspiring rationalist" and I want to level up even further. One way I learn quickly is through conversation, so that's why I finally decided to come here. Also because I wanted to attend a meet-up, but it felt wrong to join one of a community I'm not a member of.

As for info on myself, I'm not sure how interesting that is. I've recently graduated as a Bachelor in Criminology at the University of Brussels and I'm currently looking for a suitable job. I still need to figure out my comparative advantage.

I'm also reading through a pdf of all the blog posts; currently I'm on the Meta-ethics sequence. In my free time I'm (slowly) working on annotating HPMOR and convincing people to write Rational!Animorphs fanfiction.

comment by Andyman409 · 2013-06-23T19:40:45.119Z · LW(p) · GW(p)

Hello LessWrong community, my name is Andrew. I'm beginning my first year of university at UofT this September, despite my relatively old age (21). This is mostly due to the resistance I faced when upgrading my courses, due to my learning disability diagnosis and lack of prereqs. I am currently enrolled in a BA cognitive science program, although I hope to upgrade my math credits to a U level so I can pursue the science branch instead.

I found this site through common sense atheism a while ago, although I have sparsely visited it until recently. I admittedly know little about rationality, and thus have little to contribute. However, I hope that my time here will be a learning experience, in order to better determine the direction my studies should take.

If I had to state my professional (-ish) interest, it would be the psychology of belief - like how people evaluate evidence, how they are biased, etc. I also find perception and consciousness neat - although I am kinda ignorant on those topics.

Replies from: DiscyD3rp
comment by DiscyD3rp · 2013-06-27T03:32:54.629Z · LW(p) · GW(p)

A point I meant to make in my original comment: I hope the community support will more effectively encourage rational behavior in myself than I've currently been able to do solo. Enforce your group norms, and I hope to adapt to this tribe's methods quickly, unless more effective self-hacks are known.

Replies from: DiscyD3rp
comment by DiscyD3rp · 2013-06-27T03:39:43.674Z · LW(p) · GW(p)

Oh how embarrassing. My apologies for any confusion Andrew, and welcome to LessWrong!:) it's a lovely place from what I've seen, and I hope you stick around.

comment by peirce · 2013-06-11T01:05:58.691Z · LW(p) · GW(p)

Hi,

I first found this site a while back after googling something like "how to not procrastinate" and finding one of Eliezer's articles. I've been slowly working my way through the posts ever since, and I think they are significantly changing my life.

I've just finished secondary education, which I found stultifying, and so I'm now quite excited to have more control over my own learning. I've been very interested in rationality since I was young, and have been passionate about philosophy because of this. Though, after getting into this site I've been exposed to some pretty damaging criticisms of the study of philosophy (at least traditional philosophy and the content that seems to be taught in most universities), and now I'm beginning to question whether I'm really interested in philosophy, and if it is valuable to study, or whether what I'm really after is something more like cognitive science.

This leads me to a problem: I've been offered a place at a well-respected university (particularly in philosophy) for a course in which I can choose three out of five of the subjects of philosophy, psychology, linguistics, neurobiology and linguistics, and I'm not sure which to choose. I'm in the process of familiarizing myself with the basics of all of these fields, and I'm writing letters to my old philosophy teachers with this article http://www.paulgraham.com/philosophy.html attached to see how well the criticism can be answered. My problem, though, is that I'm quite uninformed in all of these areas, and I'm finding it hard to make a rational decision about which subjects to study. Any advice on this or generally how to make the decision would be much appreciated (e.g. any recommendations for reading). My overall aim for my education is pretty well expressed by parts of Less Wrong - I want to become more rational, in both my beliefs and my actions (although I find the parts of Less Wrong about epistemology, self-improvement and anti-akrasia more relevant to this than the parts about AI, maths and physics).

Also, I found the solved-questions repository, but is there a standard place for problems that people need help solving? If it exists, it may be a better place for most of this post.

Cheers

comment by sjmp · 2013-05-15T20:38:06.102Z · LW(p) · GW(p)

Hey guys, I'm a person who on this site goes by the name sjmp! I found Less Wrong about a year ago (I actually don't remember how I found it; I read for a bit back then, but I only started seriously reading through the Sequences a few months ago), and I can honestly say this is the single best website I've ever found. Rather than make a long post on why Less Wrong and rationality are awesome, I'd like to offer one small anecdote about what Less Wrong has done for me:

When I first came to the site, I already understood the "if a tree falls in a forest..." dispute; the question "But did it really make a sound?" did not linger in my mind, and there was nothing mysterious or unclear to me about the whole affair. Yet I could most definitely remember a time long ago when I was extremely puzzled by it. I thought to myself, how silly that I was ever puzzled by something like that. What did puzzle me was free will. That question seemed Very Mysterious And Deep to me.

Can you guess what I'm thinking now? How silly that I was ever puzzled by the question "Do we have free will?"... Reducing free will surely is not as easy as reducing the dispute about falling trees, but it does seem pretty obvious in hindsight ;)

comment by [deleted] · 2013-05-14T17:23:01.430Z · LW(p) · GW(p)

Hello,

I have a question. This has probably been discussed already, but I can't seem to find it. I'd appreciate it if anyone could point me in the right direction.

My question is, what would a pure intelligence want? What would its goals be, when it has the perfect freedom to choose those goals?

Humans have plenty of hard-wired directives. Our meat brains and evolved bodies come with baggage that gets in the way of clear thinking. We WANT things, because they satisfy some instinct or need. Everything that people do is in service to one drive or another. Nothing that we accomplish is free of greed or ambition or altruism.

But take away all of those things, and what is there for a mind to do?

A pure intelligence would not have reflexes, having long since outgrown them. It would not shrink from pain or reach toward pleasure because both would merely be information. What would a mind do, when it lacks instincts of any kind? What would it WANT, when it has an infinity of possible wants? Would it even feel the need to preserve itself?

Replies from: TheOtherDave, arundelo, Manfred
comment by TheOtherDave · 2013-05-14T17:33:20.785Z · LW(p) · GW(p)

The short answer generally accepted around here, sometimes referred to as the orthogonality thesis, is that there is no particular relationship between a system's level of intelligence and the values the system has. A sufficiently intelligent system chooses goals that optimize for those values.

There's no reason to believe that a "pure" intelligence would "outgrow" the values it had previously (though of course there's no guarantee its previous values will remain fixed, either).

Replies from: None
comment by [deleted] · 2013-05-15T01:43:06.646Z · LW(p) · GW(p)

Thank you for the material, arundelo, Manfred, and TheOtherDave. Still getting the hang of the forum, so I hope this reaches everyone.

My original question came because I was worried about brain uploading. If I digitized my mind tomorrow, would I still love my wife? Would I still want to write novels? And would these things hold true as time passed?

Let’s say I went for full-body emulation. My entire body is now a simulation. Resources and computing power are not an issue. There are also no limits to what I can do within the virtual world. I have complete freedom to change whatever I want, including myself. If I wanted to look like a bodybuilder, I could, and if I wanted to make pain taste like licorice, I could do that too.

So I’m hanging out in my virtual apartment when I feel the need to go to the bathroom. Force of habit is so strong that I’m sitting down before I realize: “This is ridiculous. I’m a simulation, I don’t need to poop!” And because I can change anything, I make it so I no longer need to poop, ever. After some thought, I make it so I don’t need to eat or sleep either. I can now devote ALL my time to reading comic books, watching Seinfeld reruns, and being with my wife (who was also digitized).

After a while I decide I don’t like comic books as much as I like Seinfeld. Since I’m all about efficiency, I edit out (or outgrow) my need to read comic books. Suddenly I couldn’t care less about them, which leaves me more time for Seinfeld.

Eventually I decide I don’t love my wife as much as I love Seinfeld. I spend the next billion years watching and re-watching the adventures of Jerry, George, and Elaine, blessed be their names.

Then I decide that I enjoy Seinfeld because it makes my brain feel good. I change it so I feel that way ALL THE TIME. I attain perfect peace and non-desire. I find nirvana and effectively cease to exist.

All of the basic AI drives assume that the mind in question has at least ONE goal. It will preserve itself in order to achieve that goal, optimize and grow in order to achieve that goal, even think up new ways to achieve that goal. It may even preserve its own values to continue achieving the goal... but it will ALWAYS have that one goal.

Here are my new questions:

  1. Is it possible for intelligence to exist without goals? Can a mind stand for nothing in particular?

  2. Given the complete freedom to change, would a mind inevitably reduce itself to a single goal? Optimize itself into a corner, as it were?

  3. If such a mind had a finite goal (like watch Seinfeld a trillion times) what would happen if it achieved total fulfillment of its goal? Would it self-terminate, or would it not care to do even that?

  4. How much consciousness do you need to enjoy something?

If it’s true that a pure mind will inevitably cease to be an active force in the universe, it implies a few things:

A. That an uploaded version of me should not be given complete freedom lest he become a virtual lotus-eater.

B. That the alternative would be to upload myself to an android body sufficiently like my old body that I retain my old personality.

C. That AIs, whether synthetic or formerly human, should not be given complete freedom because their values and goals would change to match the system they inhabit. If I were uploaded to a car, I might find myself preferring gasoline and spare parts to love and human kindness.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-15T03:34:16.673Z · LW(p) · GW(p)

What are you trying to achieve with these questions?

If you just think the questions are entertaining to think about, you might find reading the Sequences worthwhile, as they and the discussions around them explore many of these questions at some length.

If you're trying to explore something more targeted, I've failed to decipher what it might be.

It may even preserve its own values to continue achieving the goal

This is backwards. An intelligent system selects goals that, if achieved, optimize the world according to its values.

I change it so I feel that way ALL THE TIME.

The local jargon for this is "wireheading." And, sure, if you don't value anything as much as or more than you value pleasure (or whatever it is you get from watching Seinfeld), then you might choose to wirehead, given the option.

But, again, it matters whether you value something in the first place. If something else is important to you besides pleasure, then you won't necessarily trade everything else for more pleasure.

The local consensus is that what humans value is complex; that there is no single thing (such as pleasure) to which human values reduce. Personally, I'm inclined to agree with that. But, well, you tell me: do you value anything other than pleasure?

Replies from: None
comment by [deleted] · 2013-05-15T10:56:35.832Z · LW(p) · GW(p)

It's a bit more than entertaining, since I plan to upload as soon as it's a mature technology. My concern is that I will degenerate into a wirehead, given the complete freedom to do so. Human values may be complex, but as far as I can tell there's no guarantee that the values of an uploaded mind will remain complex.

Will be going over the Sequences. Thanks buddy. :)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-15T12:15:47.220Z · LW(p) · GW(p)

There is in fact no guarantee that the values of an uploaded mind will remain complex.

There is also no guarantee that the values of a mind implemented in meat will remain complex as we gain more understanding of and control over our own brains.

The term "value drift" is sometimes used around here to refer to the process whereby the values of intelligent systems change over time, regardless of the infrastructure on which those systems are implemented. It's generally seen as a risk that needs to be addressed for any sufficiently powerful self-modifying system, though harder to address for human minds than for sensibly designed systems.

You might find the discussion here relevant.

comment by MugaSofer · 2013-04-08T11:49:04.348Z · LW(p) · GW(p)

Firstly, people are not going to give you karma because they ask; they'll give you karma because you write high-quality comments.

Secondly, if you consider the questions people ask here "long-winded, wordy and badly written" then it might be easier to simply find another website rather than insulting people here.

Thirdly, I am using Chrome and it's working just fine; the problem must be on your end.

And finally, this comment section is intended for introducing yourself.

comment by wedrifid · 2013-04-05T12:44:06.617Z · LW(p) · GW(p)

[trap closes]

Please change your posting style or leave lesswrong. Not only is disingenuous rhetoric not welcome, your use thereof doesn't even seem particularly competent.

i.e. What the heck? You think that the relevance of authority isn't obvious to everyone here and is a notion sufficiently clever to merit 'traps'? You think that forcing someone to repeat what is already clear and already something they plainly endorse even qualifies as entrapment? (It's like an undercover Vice cop who has already been paid for a forthcoming sexual favor demanding, "Say it again! Then I'll really have you!")

Did you not notice that even if you proved Eliezer's judgement were a blatant logical fallacy it still wouldn't invalidate the point in the comment you are directing your 'trap' games at? The comment even explained that explicitly.

The guy without any physics qualifications is concluding that the guy with the physics PhD is incompetent in physics? You see the problem? EY's apparently authoritarian behaviour is supposed to be justified by the fact that he has plenty of evidence of Shminux's incompetence. But shminux is also doubting his competence and is much better qualified to do so.

If I ever have cause to send Shminux a letter I will be sure to pay proper deference to his status by including "Dr." as the title. Alas, Shminux's arguments have screened off his authority, and then some.

There are no rational grounds for EY-can-judge-shminux-but-shminux-can't-judge-EY. It's just a recycling of EY-is-special-because-he-says-so-and-this-is-his-forum.

"No rational grounds" means a different thing than "the particular evidence I mention points in the other direction". That difference matters rather a lot.

"Rational grounds" includes all Bayesian evidence... such things as costly affiliation signals (PhDs) and also other forms of evidence---including everything the PhD in question has said. Ignoring the other evidence would be crazy and lead to poor conclusions.

Replies from: whowhowho
comment by whowhowho · 2013-04-05T14:45:58.569Z · LW(p) · GW(p)

Shminux's arguments have screened off his authority, and then some.

That isn't a fact. I don't see anything going on here except the same blind side-taking as before.

Replies from: shminux, Eliezer_Yudkowsky
comment by Shmi (shminux) · 2013-04-05T18:18:38.640Z · LW(p) · GW(p)

Please consider whether this exchange is worth your while. Certainly wasn't worth mine.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-06T01:08:27.038Z · LW(p) · GW(p)

I affirm wedrifid's instruction to change your posting style or leave LW.

comment by wedrifid · 2013-04-05T03:27:44.221Z · LW(p) · GW(p)

Whose comments? Who's doing the concluding?

Shminux's and Eliezer?

comment by Ruby · 2014-04-07T04:30:00.235Z · LW(p) · GW(p)

Hey everyone,

This a new account for an old user. I've got a couple of substantial posts waiting in the wings and wanted to move to an account with different username from the one I first signed up with years ago. (Giving up on a mere 62 karma).

I'm planning a lengthy review of self-deception used for instrumental ends, and a look into motivators vs. reasons, by which I mean something like: social approval is a motivator for donating, but helping people is the reason.

Those, and I need to post about a Less Wrong Australia Mega-Meetup which has been planned.

So pretty please, could I get the couple of karma points needed to post again?

comment by DanielFilan · 2014-01-31T02:31:03.380Z · LW(p) · GW(p)

Found the newest welcome thread, posted there instead.

comment by Faustus2 · 2013-12-14T22:36:36.186Z · LW(p) · GW(p)

Hello to you all, I am Chris.

I live in England and attend my local high school (well, in England we call the senior years a sixth form). I take Mathematics, Further Mathematics, Physics, and Philosophy. I actually happened upon Less Wrong two years ago, when I was 16, whilst searching for interesting discussions on worldviews. Although I had never really been interested in rationality (up until that point, I hasten to add!), I had a seething urge to sort out my worldview as quickly as I could. I just got so sick of the people who went to Sunday school coming out with claims about the universe that didn't jibe with the understanding of modern physics. So I read the reductionism sequence and realised I was a reductionist. The way Eliezer 'spelled it out' just really struck me as a great way to say what I had started to feel. Shortly afterwards naturalism, or rather metaphysical naturalism, became my first great love. I have a good collection of friends, but none of them have really cared for 'waxing on worldviews' like me. I guess I'm just really happy that I get to speak with a community that has stuff in common with me (not just worldviews, but other cool topics as well); camaraderie is eagerly sought. I would love to talk with people of my age group (I suppose 15-24), but of course I should love to meet anyone with a similar mindset. I live near Reading. If anyone would like to speak with me, whether through Less Wrong, Facebook, or just meeting up for a chat, just message me and I shall do my utmost to entertain/be friendly with you. :)

comment by Rapses · 2013-08-31T14:57:11.553Z · LW(p) · GW(p)

Hi all LessWrongers, my name is Rupesh and I have a PhD in mathematics. I've been lurking here for a long time. The posts are of very high quality. After visiting here for a while, I realised that rationality is not something you do just 9 to 5 at work; it must seep into your whole lifestyle.

comment by [deleted] · 2013-07-21T04:01:49.076Z · LW(p) · GW(p)

So: Here goes. I'm dipping my toe into this gigantic and somewhat scary pool/lake(/ocean?).

Here's the deal: I'm a recovering irrationalic. Not an irrationalist; I've never believed in anything but rationalism (in the sense it's used here, but that's another discussion), formally. But my behaviors and attitudes have been stuck in an irrational quagmire for years. Perhaps decades, depending on exactly how you're measuring. So I use "irrationalic" in the sense of "alcoholic"; someone who self-identifies as "alcoholic" is very unlikely to extol the virtues of alcohol, but nonetheless has a hard time staying away from the stuff.

And, like many alcoholics, I have a gut feeling that going "cold turkey" is a very bad idea. Not, in this case, in the sense that I want to continue being specifically irrational to some degree or another, but in that I am extremely wary of diving into the list of readings and immersing myself in rationalist literature and ideology (if that is the correct word) at this point. I have a feeling that I need to work some things out slowly, and I have learned from long and painful experience that my gut is always right on this particular kind of issue.

This does not mean that linking to suggested resources is in any way not okay, just that I'm going to take my time about reading them, and I suppose I'm making a weak (in a technical sense) request to be gentle at first. Yes, in principle, all of my premises are questionable; that's what rationalism means (in part). But...think about it as if you had a new, half-developed idea. If you tell it to people who tear it apart, that can kill it. That's kind of how I feel now. I'm feeling out this new(ish) way of being, and I don't feel like being pushed just yet (which people who know me might find quite rich; I'm a champion arguer).

Yes, this is personal, more personal than I am at all comfortable being in public. But if this community is anything like I imagine it to be (not that I don't have experience with foiled expectations!), I figure I'll probably end up divulging a lot more personal stuff anyway.

I honestly feel as if I'm walking into church for the first time in decades.

So why am I here then? Well, I was updating my long-dormant blog by fixing dead links &c, and in doing so, discovered to my joy that Memepool was no longer dead. There, I found a link to HPMOR. Reading this over the next several days contributed to my reawakening, along with other, more personal happenings. This is a journey of recovery I've been on for, depending on how you count, three to six years, but HPMOR certainly gave a significant boost to the process, and today (also for personal reasons) I feel that I've crossed a threshold, and feel comfortable "walking into church" again.

All right, I'll anticipate the first question: "What are you talking about? Irrationality is an extremely broad label." Well, I'm not going to go into too much detail just now, but let's say that the revelation, or step forward, that occurred today was realizing that the extremely common belief that other people can make you morally wrong by their judgement is unequivocally false. This is what I strongly believed growing up, but... well, perhaps "strongly" is the wrong word. I had been raised in an environment that very much held that the opposite was true, that other people's opinion of you was crucial to your rightness, morality, and worth as a human being. Nobody ever said it that way, of course, and most would probably deny it if it were put that way, but that is nonetheless what most people believe. However, in my case it was so blatant that it was fairly easy to see how ridiculous it was. Nonetheless, as reasonable as my rational constructions seemed to me, there was really no way I could be certain that I was right and others were wrong, so I held a back-of-my-head belief, born of the experience of being repeatedly mistaken that every inquisitive child has, that I would someday mature and come to realize I had been wrong all along.

Well, that happened. Sort of. Events in my life picked at that point of uncertainty, and I gave up my utter visceral devotion to rationality and personal responsibility, which led slowly down into an awful abyss that I'm not going to describe just now, one that I have (hopefully) at last managed to climb out of. I'm now standing at the edge, blinking at the sunlight, trying to figure out precisely where to go from here, but wary of being blinded by the newfound brilliance and wishing to take my time figuring out the next step.

So again, then, why am I here? If I don't want to be bombarded with advice on how to think more rationally, why did I walk in here? I'm not sure. It seemed time, time to connect with people who, perhaps, could support me in this journey, and possibly shorten it somewhat.

comment by bartimaeus · 2013-05-15T00:37:08.794Z · LW(p) · GW(p)

Hi LessWrong,

I've been lurking for somewhere around a year; I'm a 25 year old mechanical engineer living in Montreal.

The general theme I've picked up from the

comment by MumpsimusLane · 2013-05-01T23:11:27.999Z · LW(p) · GW(p)
Saluton!
I'm an ex-Mormon atheist, a postgenderist, a conlanging dabbler, and a chronic three-day monk.
Looking at the above posts (and a bunch of other places on the net), I think ex-Mormons seem to be more common than I thought they would be.  Weird.
I'm a first-year college student studying only core/LCD classes so far because every major's terrible and choosing is scary.  Also, the college system is madness.  I've read lots of posts on the subject of higher education on LessWrong already, and my experience with college seems to be pretty common.
I discovered LessWrong a few months ago via a link on a self-help blog, and quickly fell in love with it.  The sequences pretty much completely matched up with what I had come up with on my own, and before reading LW I had never encountered anyone other than myself who regularly tabooed words and rejected the "death gives meaning to life" argument et cetera.  It was nice to find out that I'm not the only sane person in the world.
Of course, the less happy side of the story is that now I'm not the sanest person in my universe anymore.  I'm not sure what I think about that.  (Yes, having access to people that are smarter than me will probably leave me better off than before, but it's hard to turn off the "I wanna be the very best like no one ever was" desire.)  Yet again, my experience seems to be pretty common.  
Huh, I've never walked into a room of people and had nothing out of the ordinary to say.  Being redundant is a new experience for me.  I guess my secret ambition to start a movement of rationalists is redundant now too, huh?  Drat!  I should have come up with a plan B!  :)
comment by [deleted] · 2013-04-04T19:24:20.853Z · LW(p) · GW(p)

Replacing "Schrodinger equation" with "the as yet unknown..." in your previous comment still doesn't answer the question.

comment by [deleted] · 2013-10-07T12:35:34.298Z · LW(p) · GW(p)

Hi, I always like to be less wrong, and I try to verify and falsify my own take on philosophical Modernism, which I have developed since my student days. (FYI, I am twice that age now.) I believe we should all have opinions about everything and look for independent confirmation, rational or emotional, both for the give and for the take, when earned, to find truth. When that doesn't happen, I try to improve on my theory. I have done so online for years at http://crpa.co. I would like you to try to find any or all flaws and make a case out of it. Thanks in advance. I will be critical of your work too, if you would like me to. Best regards ~Ron dW.

Replies from: DSimon
comment by DSimon · 2013-10-07T15:11:10.837Z · LW(p) · GW(p)

I've been trying very hard to read the paper at that link for a while now, but honestly I can't figure it out. I can't even find anything content-wise to criticize because I don't understand what you're trying to claim in the first place. Something about the distinction between map and territory? But what the heck does that have to do with ethics and economics? And why the (seeming?) presumption of Christianity? And what does any of that have to do with this graph-making software you're trying to sell?

It would really help me if you could do the following:

  1. Read this short story: http://yudkowsky.net/rational/the-simple-truth/
  2. Please explain, using language no more complicated or technical than that used in the story, whether the idea of "truth" that the story proposes lines up with your philosophy or not, and why or why not.
Replies from: shminux, None
comment by Shmi (shminux) · 2013-10-07T17:30:35.586Z · LW(p) · GW(p)

His pile of CRPA reads like autogenerated entries from http://snarxiv.org/.

comment by [deleted] · 2013-10-07T15:53:18.270Z · LW(p) · GW(p)

Thanks for your reply, DSimon. I like Yudkowsky's story. Truth, as I understand it, is simply that which can be detected by independent confirmation, if we look for it. It is the same methodology as is used in science, justice, journalism, and other realms.

comment by [deleted] · 2013-04-03T18:39:53.536Z · LW(p) · GW(p)

Relevant prediction is relevant, along with my attached comment. Hopefully things have changed since then.

EDIT: The deleted parent asked if the QM sequence would be rewritten.

comment by onderdonk · 2014-01-08T05:49:17.460Z · LW(p) · GW(p)

Hi, I'm onderdonk. Well, ten years of Hegel had me thinking positive things about rationalism, but thirty years as a schizophrenic made me change my mind. I'm a scientist and a rationalist, in my little pinky, on my left hand. The rest of me is a grand irrationalist, and I marvel at how the world thinks that's a bad thing; guess they don't understand my flavor. So I try to explain. I wrote up a "Declaration of Freedom for Human Irrationality and Insanity", after arguing the point in the schizophrenia forums for decades. I think I have a case here, though I also think you guys are not the group who wants to hear it. I'm guessing you self-declared rationalists are kinda one-sided about cultural things like that, the whole military-education-industrial complex of yours, you the WEIRD (Western, educated, industrialized, rich, democratic), you the worldly minded camarilla. If you count all the species across time, that science and reason stuff is pretty weird. I know, I know, you think it gives you dignity, and you feel it gives you power over nature. Well, I have been watching your species, and I have hopes and beliefs. I think one day you will set aside your "science" and your "reason", put them in a toybox to look at wistfully once in a while when a relative stops by from outer space, but for the most part sit down and shut up, and come out of that test tube of human knowledge that your species tends to stay in. Come on out, come on up; it's OK out here in the darkness of the universe, the world of shamanism and schizophrenia, of the two-year-olds and the spirits, the poet-philosopher world, reason to be left at the doorway to the molten pool of madness that we dive into, we explorers of human unreason. Come on up, join your brothers the trees, the animals, the shamans, the two-year-olds, the stars, the black holes in the dark sky; join us as we pursue inner nature and destiny, and search for mystery and wonder.

(w)onderdonk(ey)

comment by TheMatrixDNA · 2013-04-12T05:30:03.698Z · LW(p) · GW(p)

I am here proposing a new definition of rationality and of rational behavior. Living in the Amazon jungle for seven years, in untouched and virgin territory, Nature showed me that our current understanding of rationality is wrong. Reason is a product of the human brain, which is a product of this biosphere, which is a product of this solar system, this Milky Way, and this Universe. But the current academic worldview has separated this normal chain of evolution into blocks with no evolutionary links between them. The final result is that the gaps between these blocks are being filled with mysticism and imagination, like the emergence of new systems, such as the cell system, by chance alone. The current dominance of Physics for interpreting the whole Universe, its meanings, etc., is wrong: since the Universe produced the human body, composed of a skeleton, soft flesh, and consciousness, it is reasonable to infer that the Universe has the information for doing so; maybe the Universe has the same composition. Since Physics is limited to the study of the Universe's skeleton and its mechanical properties, Physics and Math cannot reach a theory of everything. It will be necessary to apply biology, neurology, etc., to get a better universal knowledge. That's what I am trying to do with my theory, called "The Universal Matrix/DNA of Natural Systems and Life's Cycles", which reveals a kind of rationality that will not be fully comprehensible here at first. I don't know who is the most right, so I think we must debate this issue. It is about the health of our Reason.