Open Thread, May 1-14, 2013
post by whpearson · 2013-05-01T22:28:06.136Z · LW · GW · Legacy · 649 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Comments sorted by top scores.
comment by Qiaochu_Yuan · 2013-05-02T00:27:24.916Z · LW(p) · GW(p)
Vague thought: it is very bad when important scientists die (in the general sense, including mathematicians and computer scientists). I recently learned that von Neumann died at age 54 of cancer. I think it's no exaggeration to say that von Neumann was one of the most influential scientists in history and that keeping him alive even 10 years more would have been of incredible benefit to humankind.
Seems like a problem worth solving. Proposed solution: create an organization which periodically offers grants to the most influential / important scientists (or maybe just the most influential / important people period), only instead of money they get a team of personal assistants who take care of their health and various unimportant things in their lives (e.g. paperwork). This team would work to maximize the health and happiness of the scientist so that they can live longer and do more science. Thoughts?
Replies from: None, Pablo_Stafforini, shminux, RolfAndreassen, maia, John_Maxwell_IV, FiftyTwo, Yuyuko
↑ comment by [deleted] · 2013-05-02T01:41:22.220Z · LW(p) · GW(p)
Only tangentially related vague thought:
As I understand it, Stephen Hawking's words-per-minute in writing is excruciatingly slow, and as a result I recall seeing in a documentary that he has a graduate student whose job is to watch as he is writing and to complete his sentences/paragraphs, at which point Hawking says 'yes' or 'no'. I would think that over time this person would develop an extremely well-developed mental Hawking...
Replies from: AspiringRationalist
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2013-05-02T03:35:22.465Z · LW(p) · GW(p)
Emulators are slow due to being on different hardware than the device they are emulating. If you're also on inferior hardware to the device you're trying to emulate, it will be very slow.
That said, even a very slow Hawking emulator is a pretty cool thing to have.
↑ comment by Pablo (Pablo_Stafforini) · 2013-05-02T13:54:21.880Z · LW(p) · GW(p)
It is unclear whether the intellectual output of eminent scientists is best increased by prolonging their lives through existing medical technology, rather than by increasing their productivity through time-management, sleep-optimization or other techniques. Maybe the goal of your proposed organization would be better achieved by paying someone like David Allen to teach the von Neumanns of today how to be more productive. (MIRI did something similar to this when it hired Kaj Sotala to watch Eliezer Yudkowsky as he worked on his book.)
Replies from: asr, AspiringRationalist
↑ comment by asr · 2013-05-02T14:02:43.509Z · LW(p) · GW(p)
Von Neumann himself, I believe, had poor work habits; maybe the goal of your proposed organization is better achieved by paying someone like David Allen to teach the von Neumanns of today how to be more productive.
There is something comically presumptuous about this statement. Von Neumann had very unusual work habits (he liked noise and distraction). He was also phenomenally productive (how many branches of mathematics have YOU helped invent?)
Given that he was (A) smarter and (B) more successful than any life coach you are likely to find, I would be surprised if this sort of coaching added value.
Replies from: Pablo_Stafforini
↑ comment by Pablo (Pablo_Stafforini) · 2013-05-02T14:36:16.809Z · LW(p) · GW(p)
I deleted the remark about von Neumann while you were composing your reply, after a quick Google search revealed no support for it. (I seem to remember a quote by von Neumann himself where he lamented that his lack of focus had prevented him from being much more productive as a scientist, but this is a very vague memory and I'm now unwilling to rest any claims on it.) For what is worth, here are some relevant remarks on von Neumann's work habits by Herman Goldstine, which contradict my earlier (and now retracted) statement:
His work habits were very methodical. He would get up in the morning, and go to the Nassau Club to have breakfast. And then from the Nassau Club he'd come to the Institute around nine, nine-thirty, work until lunch, have lunch, and then work until, say, five, and then go on home. Many evenings he would entertain. Usually a few of us, maybe my wife and me. We would just sit around, and he might not even sit in the same room. He had a little study that opened off of the living room, and he would just sit in there sometimes. He would listen, and if something interested him, he would interrupt. Otherwise he would work away. [...] So those were his work habits. He was a very methodical worker. Every time he thought about something, he wrote it down in great detail. There was nothing rough or unpolished. Everything got written down either in the form of a letter or a memorandum.
Replies from: asr
↑ comment by asr · 2013-05-02T15:54:14.695Z · LW(p) · GW(p)
Ah. The thing I thought you had in mind is that he liked to work in a noisy distracting environment. (http://en.wikipedia.org/wiki/John_von_Neumann#Personal_life) Which wouldn't work for most people, but evidently did for him.
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2013-05-03T01:32:32.692Z · LW(p) · GW(p)
Anyone who has managed to become an eminent scientist is probably doing a pretty good job at things like time management. Since maintaining healthy habits is not a prerequisite for attaining eminence, that is more likely to be an area where they're lacking.
Replies from: Pablo_Stafforini
↑ comment by Pablo (Pablo_Stafforini) · 2013-05-03T13:16:36.184Z · LW(p) · GW(p)
Perhaps the word "eminent" was inappropriate: I meant, more generally, people with the ability to produce extremely valuable intellectual work and who have to some degree already produced that kind of work. This description could apply to people who haven't attained eminence in the traditional sense, but have still demonstrated the required brilliance. Eliezer is, again, a good example: he says, I believe, that he does serious work for only a couple of hours per day (I'm not entirely sure about this, and I'm happy to be corrected), and is as such someone who could benefit from a productivity or time-management coach. Another example that comes to mind is Saul Kripke, who is widely regarded as one of the smartest philosophers alive and the author of one of the most influential philosophical works of the past century (Naming and Necessity), and yet has produced very little output in large part because of lack of discipline.
↑ comment by Shmi (shminux) · 2013-05-02T01:28:31.490Z · LW(p) · GW(p)
"Most influential/important scientists" would likely tell this organization exactly where to go and how fast. They are usually not short on cash and can handle their own affairs. Or their partners/secretaries do that already. Some eccentric ones might not, but they are even more likely to reject this "help".
I am also wondering whom you would name as top 5 or so "important scientists"?
Replies from: None, NancyLebovitz, Qiaochu_Yuan
↑ comment by NancyLebovitz · 2013-05-02T02:28:19.028Z · LW(p) · GW(p)
This, about pursuing varied movement, might offer intrinsic motivation to a few.
↑ comment by Qiaochu_Yuan · 2013-05-02T01:41:44.342Z · LW(p) · GW(p)
They are usually not short on cash and can handle their own affairs.
Maybe, but this isn't their comparative advantage. They could spend some time becoming an expert on health, but it makes much more sense to have a health expert take care of the health stuff. I expect there are enough trivial inconveniences along the way that even academics with the money don't do this, and that seems very bad.
Or their partners/secretaries do that already.
I see no particular reason that the partner of an influential scientist ought to be particularly knowledgeable about health. And do academics even have personal secretaries anymore? I haven't observed any such people in my limited experience in academia so far.
I am also wondering whom you would name as top 5 or so "important scientists"?
Dunno. This is out of my domain.
Replies from: Zaine
↑ comment by RolfAndreassen · 2013-05-02T21:52:42.621Z · LW(p) · GW(p)
A more straightforward approach: give a prize to every leading scientist who reaches 70, 80, and 90 years of age. It is counter-intuitive, but it seems that monetary incentives do actually influence people's mortality. Source: I remember reading this somewhere, so it must be true.
↑ comment by maia · 2013-05-03T01:11:59.234Z · LW(p) · GW(p)
Isn't there a known phenomenon where, for example, Nobel prize winners get significantly less productive after they win their prizes? Is it really true that the marginal benefit of keeping old scientists alive longer would be that great?
Replies from: None, Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-05-03T06:47:46.442Z · LW(p) · GW(p)
Maybe. Feynman talks about scientists getting less productive once they move to the IAS. But 10 years of a less productive von Neumann still beats 10 years of a dead one, I think. (Edit: It's less clear whether 10 years of a productive von Neumann and then 10 years of a dead von Neumann beats 20 years of a less productive von Neumann, I guess.)
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-02T06:21:38.675Z · LW(p) · GW(p)
It's an interesting coincidence that JvN had both eidetic memory and extraordinary powers of mental computation. Given Hans Bethe: "I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man", does anyone think maybe von Neumann had some kind of unusual hardware-level brain mutation that simultaneously made him super smart and super-good at remembering things? (Any interesting implications for the basis of human intelligence differences and thus the intelligence explosion?) Or was it the combination of extreme memory powers and computational powers that allowed JvN to achieve such fame in the first place?
Also, how hard would it be to harvest genetic material from von Neumann's grave and create a zombie von Neumann? Edit: wait, looks like he might have had some worrisome views on nukes. Though is that just hindsight bias on my part?
↑ comment by FiftyTwo · 2013-05-02T14:55:13.801Z · LW(p) · GW(p)
This seems to be effectively what universities and research groups do. Providing administrative assistance, psychological support etc. to specialist researchers. (While they don't normally provide medical care themselves they often pay for health insurance.)
What would your proposed organisation do that they don't?
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-05-02T17:57:50.827Z · LW(p) · GW(p)
It would be aggressively personalized, e.g. I don't think even universities and research groups will just straight up do your taxes or plan your meals.
Replies from: jooyous
↑ comment by jooyous · 2013-05-02T21:04:06.139Z · LW(p) · GW(p)
Would important scientists still do science at the same level of quality if all their stuff was aggressively personalized? I can think of a couple of mechanisms that might kick in. They might work harder because they feel like they have to match the help they're receiving in scientific output. But they might also take the assistance as a sign that they're great and valuable and start slacking off, like ... divas?
Also, from what I've seen/read, I think Japanese culture has this type of system for elders/experts in various fields. Maybe it applies to scientists?
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2013-05-02T23:25:13.075Z · LW(p) · GW(p)
Another risk is that what the helpers think is good for the scientist actually interferes with the scientists' work.
Replies from: jooyous
↑ comment by jooyous · 2013-05-02T23:57:29.276Z · LW(p) · GW(p)
Like if the scientists get their best thinking done while chopping carrots or something?
I was about to write about how it might feel weird to have someone else do tasks that you're perfectly capable of doing. Or maybe scientists might feel used (objectified?) that society only values them for their output if there's assistants constantly yanking away any non-science and saying, "Sir, please get back to your work!" But then I realized that this could be overcome by having the scientists decide on exactly which chores need to be done. However, that leads to the overhead of explaining to someone how you want something done, which is sometimes more annoying than just doing it yourself.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2013-05-03T02:07:55.753Z · LW(p) · GW(p)
It could be anything. I know a mathematician who took advice from a very emphatic writer about not being perfectionistic about editing. This is not bad advice for commercial writers, though I don't think it necessarily applies to all of them. The problem is that being extremely picky is part of the mathematician's process for writing papers. IIRC, the result was two years without him finishing any papers.
Or there's the story about Erdos, who ran on low doses of amphetamines. A friend of his asked him to go a month without the amphetamine, and he did, but didn't get any math done during that month.
It's possible that the net effect of some sort of adviser could be good, whether for a particular scientist or for scientists in general, but it's not guaranteed.
↑ comment by Yuyuko · 2013-05-03T02:32:37.739Z · LW(p) · GW(p)
Oh, but some of them are such excellent company! Feynman was such a charming raconteur when he came to visit in 1989...
Replies from: Leonhart
comment by NancyLebovitz · 2013-05-03T20:26:12.921Z · LW(p) · GW(p)
To make the device seem more trustworthy he made the handle heavier.
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-05-03T21:20:12.646Z · LW(p) · GW(p)
From the same article:
A sufficiently advanced technology is indistinguishable from a rigged demonstration.
comment by gothgirl420666 · 2013-05-02T02:56:30.124Z · LW(p) · GW(p)
I was wondering to what extent you guys agree with the following theory:
All humans have at least two important algorithms left over from the tribal days: one which instantly evaluates the tribal status of those we come across, and another that constantly holds a tribal status value for ourselves (let's call it self-esteem). The human brain actually operates very differently at different self-esteem levels. Low-status individuals don't need to access the parts of the brain that contain the "be a tribal leader" code, so this part of the brain is closed off to everyone except those with high self-esteem. Meanwhile, those with low self-esteem are running off of an algorithm for low-status people that mostly says "Do what you're told". This is part of the reason why we can sense who is high status so easily - those who are high status are plainly executing the "do this if you're high-status" algorithms, and those who are low status aren't. This is also the reason why socially awkward people report experiencing rare "good nights" where they feel like they are completely confident and in control (their self-esteem was temporarily elevated, giving them access to the high-status algorithms), and why in awkward situations they feel like their "personality disappears" and they literally cannot think of anything to say (their self-esteem is temporarily lowered and they are running off of a "shut up and do what you're told" low-status algorithm). This suggests that to succeed socially, one must trick one's brain into believing that one is high-status, and then one will suddenly find oneself taking advantage of charisma one didn't know one had.
Translated out of LessWrong-speak, this equates to "A boost or drop in confidence can make you think very differently. Take advantage of confidence spirals in order to achieve social success."
Replies from: Qiaochu_Yuan, latanius, None, army1987, Unnamed, Adele_L, RomeoStevens, wedrifid, gwern, DaFranker, Manfred, lucidian
↑ comment by Qiaochu_Yuan · 2013-05-02T19:16:03.962Z · LW(p) · GW(p)
Yep. As I understand it, this is part of standard PUA advice.
↑ comment by latanius · 2013-05-02T03:48:06.386Z · LW(p) · GW(p)
Your "running different code" approach is nice... especially paired up with the notion of "how the algorithm feels from the inside", seems to explain lots of things. You can read books about what that code does, but the best you can get is some low quality software emulation... meanwhile, if you're running it, you don't even pay attention to that stuff as this is what you are.
↑ comment by A1987dM (army1987) · 2013-05-02T18:54:32.400Z · LW(p) · GW(p)
Yes, IME that's very close to the truth. I think that's the “less strong version” of this comment that people were talking of.
The Blueprint Decoded puts it as ‘when you [feel low-status], you don't give yourself permission to [do high-status stuff]’.
(I also seem to recall phonetician John C. Wells claiming that it's not like working-class people don't know what upper-class people speak like, it's just that they don't want to speak like that because it'd sound too posh for them.)
↑ comment by Unnamed · 2013-05-02T07:54:05.090Z · LW(p) · GW(p)
Related research: Mark Leary's sociometer theory and Amy Cuddy on power posing.
↑ comment by RomeoStevens · 2013-05-02T19:16:39.016Z · LW(p) · GW(p)
A possible reason rejection therapy has positive spillover effects. When, contra your expectations, people agree to all sorts of weird requests from you, it signals to you that you are high status.
↑ comment by wedrifid · 2013-05-03T04:09:27.055Z · LW(p) · GW(p)
Translated out of LessWrong-speak, this equates to "A boost or drop in confidence can make you think very differently. Take advantage of confidence spirals in order to achieve social success."
Note that the flip side is that (perception of personal) high status can make you stupid, for analogous reasons to the ones you give here.
↑ comment by gwern · 2013-05-02T16:10:07.752Z · LW(p) · GW(p)
Have you considered looking into the psychology literature? http://lesswrong.com/lw/dtg/notes_on_the_psychology_of_power/
Replies from: gothgirl420666
↑ comment by gothgirl420666 · 2013-05-02T17:36:48.181Z · LW(p) · GW(p)
Yeah, I plan on investigating to see how much support this theory has going for it sometime in the future, but obviously it's easier to sit around in your chair thinking and coming up with theories than it is to actually do research. d: The article you linked to looks like a great starting point though, thank you!
↑ comment by DaFranker · 2013-05-02T14:43:41.323Z · LW(p) · GW(p)
Onwards to find a combination of electrical impulses or chemicals one can pump into the brain to keep it permanently in high-status mode!
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-05-02T19:04:15.711Z · LW(p) · GW(p)
“Dutch courage”? :-)
Replies from: DaFranker
↑ comment by lucidian · 2013-05-05T18:17:43.926Z · LW(p) · GW(p)
I think it's a grave mistake to equate self-esteem with social status. Self-esteem is an internal judgment of self-worth; social status is an external judgment of self-worth. By conflating the two, you surrender all control of your own self-worth to the vagaries of the slavering crowd.
Someone can have high self-esteem without high social status, and vice versa. In fact, I might expect someone with a strong internal sense of self-worth to be less interested in seeking high social status markers (like a fancy car, important career, etc.). When I say "a strong internal sense of self-worth", I guess I mean self-esteem that does not come from comparing oneself with others. It's the difference between saying "I'm proud of myself because I coded this piece of software that works really well" and "I'm proud of myself because I'm a better programmer than Steve is."
From what I can tell, the internal kind of self-worth comes from having values, and sticking to them. So if I value honesty, hard work, ability to cook, etc., then I can be proud of myself for being an honest hard-working person who knows how to cook, regardless of whether anyone else shares these traits. Also, I think internal self-worth comes from completing one's goals, or contributing something useful to the world, both of which explain why someone can be proud of coding a great piece of software.
(Sometimes I wonder whether virtue ethicists have more internal motivation/internal self-worth, while consequentialists have more external motivation/external self-worth.)
(It seems that people of my generation (I'm 23) have less internal self-worth than people have had in the past. If this is true, then I'm inclined to blame consumerist culture and the ubiquity of social media, but I dunno, maybe I'm just a proto-curmudgeon.)
Anyway, your theory about there being a "high self-esteem algorithm" and a "low self-esteem algorithm" seems like a reasonable enough model. And the use of these algorithms may very well correlate with social status. I just don't think the relationship is at all deterministic, and an individual can work to decouple them in his own life by developing an internal sense of self-worth.
I don't think this phenomenon is unique to status or self-esteem though. I suspect that people have different cognitive algorithms for all the roles they play in society. I have a different behavior-algorithm when interacting with a significant other than I do when interacting with my coworkers, for instance. Of course status/social dominance/etc. has a huge impact on which role you'll play, but it's not the only thing influencing it.
I think people are probably most comfortable in social roles which feel "in line" with (one of) their identities.
Last thing: I think that social status should not be equated with a direct dominance relationship between two people. Social status seems like a more pervasive effect across relationships, while direct social dominance might play a bigger role in deciding which algorithm to use. If someone big and threatening gives you an order (like "hand me your wallet"), it might activate the "Do what you're told" algorithm regardless of your general social status.
Social status would seem to correlate with how frequently you are the dominant one in social interactions. But it's not always the case. A personal servant of the king might have very high status in society, but always follow the "Do what you're told" algorithm when he's at work taking orders from the king.
(As a last note, this is why I'm really concerned about the shift from traditional manufacturing jobs to service industry jobs. Both "car mechanic" and "fast food employee" are jobs associated with a lower socioeconomic class, but the car mechanic doesn't spend all day being subservient to customers.)
Replies from: gothgirl420666
↑ comment by gothgirl420666 · 2013-05-05T22:50:43.656Z · LW(p) · GW(p)
I think it's a grave mistake to equate self-esteem with social status. Self-esteem is an internal judgment of self-worth; social status is an external judgment of self-worth. By conflating the two, you surrender all control of your own self-worth to the vagaries of the slavering crowd. Someone can have high self-esteem without high social status, and vice versa. In fact, I might expect someone with a strong internal sense of self-worth to be less interested in seeking high social status markers (like a fancy car, important career, etc.).
Yeah, I was using the term self-esteem in a specific sense to mean "the result of some primitive algorithm in the brain that attempts to compute your tribal status". I tried to find some alternative term to call the result of this algorithm to prevent this exact confusion, but everything I could come up with was awkward. Maybe "status meter"? I agree with you in that I think there's only a moderate correlation between the result of this algorithm and a person's self-worth as it's usually understood.
I just don't think the relationship is at all deterministic, and an individual can work to decouple them in his own life by developing an internal sense of self-worth.
I don't really agree with this, assuming that I'm right in reading you as saying "A low-status person can hack their brain into running off the high-status algorithm by developing a strong sense of self-worth." At least it's not true for me personally. To be completely honest, I think I'm very intelligent and creative, and I do spend a sizeable chunk of every day working on my major life goals, which I enjoy doing. But at the same time, I would definitely say I'm running off of a low-status algorithm in most of my interactions.
And even self-esteem purely in social interactions doesn't really seem to help my "status meter". For example, when I lost my virginity, I thought that it would make talking to girls much easier in the future. But this didn't really happen at all.
Last thing: I think that social status should not be equated with a direct dominance relationship between two people. Social status seems like a more pervasive effect across relationships, while direct social dominance might play a bigger role in deciding which algorithm to use. If someone big and threatening gives you an order (like "hand me your wallet"), it might activate the "Do what you're told" algorithm regardless of your general social status.
Yeah, now that I think about it, this seems like the weakest link in my argument. I imagine most people fluidly switch from low status to high status algorithms on a regular basis depending on who they're interacting with. But maybe there's also a sort of larger meter somewhere in the brain that maintains a more constant level and guides long-term behavior? I don't know.
Thank you for your response, though - this is definitely the most interesting response I've gotten for this comment. :)
comment by OrphanWilde · 2013-05-10T15:30:06.296Z · LW(p) · GW(p)
Incidentally, if anybody is curious why I stopped doing the Politics threads, it's because it seemed like people were -looking- for political things to discuss, rather than discussing the political things they had -wanted- to discuss but couldn't. People were still creating discussion articles which were politically oriented, so it didn't even help isolate existing political discussion.
comment by Jack · 2013-05-01T23:53:13.096Z · LW(p) · GW(p)
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
I have come to adore this sentence. It feels like home. Or a television character's catchphrase.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-02T06:04:21.570Z · LW(p) · GW(p)
That's actually discussed in Thinking Fast and Slow... familiar things that are cognitively easy to process feel nice.
comment by John_Maxwell (John_Maxwell_IV) · 2013-05-01T23:26:57.459Z · LW(p) · GW(p)
Anyone know why Jaan Tallinn is an investor in this? I don't see anything on their site about a friendliness emphasis. Is he following Shane Legg's advice here? Is that also why Good Ventures is involved, or do they just want to make a profit?
comment by falenas108 · 2013-05-02T13:58:22.655Z · LW(p) · GW(p)
99 life hacks around the house: http://siriuslymeg.tumblr.com/post/33738057928/99-life-hacks-to-make-your-life-easier
Replies from: gyokuro
↑ comment by gyokuro · 2013-05-04T23:31:33.328Z · LW(p) · GW(p)
The recent xkcd supports that small hacks have a large time-saving potential.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-08T08:20:32.325Z · LW(p) · GW(p)
Sample size of one here, but I'm pretty sure I looked through all 99 a year ago or something and it was time wasted.
comment by Qiaochu_Yuan · 2013-05-10T04:37:12.625Z · LW(p) · GW(p)
Some people seem to have a strong moral intuition about purity that informs many of their moral decisions, and others don't. One guess for where a purity meme might come from is that it strongly enforces behaviors that prevented disease at the time the meme was created (e.g. avoiding certain foods or STDs). This hypothesis predicts that purity memes would be strongest coming from areas and historical periods where it would be particularly easy to contract diseases, especially diseases that are contagious, and especially diseases that don't cause quick death but cause infertility. Is this in fact the case?
Replies from: fubarobfusco, army1987
↑ comment by fubarobfusco · 2013-05-14T03:28:09.743Z · LW(p) · GW(p)
A contrary hypothesis:
Strong moral intuitions about purity do not carry significant useful knowledge about disease — and indeed can lead people to be resistant to accurate information about disease prevention. Rather, these intuitions stem from practices for maintaining group identity by refusing to share food, accommodations, or sexuality with members of rival groups. These are (memetically) selected-for because groups that do not maintain group identity cease to be groups. (This is not "group selection" — it's not that the members of these groups die out; it's that they blend in with others.)
Thus, we should expect purity memes to be strongest among people whose groups feel economically or politically threatened by foreigners, by different ethnic groups (including the threat of assimilation) or the like — and possibly weakest among world travelers, members of mixed-race or interfaith families, international traders, career diplomats, foreign correspondents, and others who benefit from engaging with foreigners or different ethnic groups.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-05-14T16:36:17.632Z · LW(p) · GW(p)
A contrary hypothesis:
How is it contrary? It seems mostly orthogonal to me: all four quadrants of (high pathogen threat, low pathogen threat) x (high foreigner threat, low foreigner threat) seem possible to me. Probably not exactly orthogonal, but it's not immediately obvious to me what the sign of the correlation coefficient would be.
↑ comment by A1987dM (army1987) · 2013-05-14T16:28:36.826Z · LW(p) · GW(p)
Not exactly the same question, but see here. (Short answer: yes.)
comment by Viliam_Bur · 2013-05-03T12:00:49.530Z · LW(p) · GW(p)
I have some thoughts about extending "humans aren't automatically strategic" to whole societies. I am just not sure how much of that is specific for the place where I live, and how much is universal.
Seems to me that many people believe that improvements happen magically, so you don't have to use any strategy to get them, and actually using a strategy would somehow make things worse -- it wouldn't be "natural", or something. Any data can be explained away using hindsight bias: If we have an example of a strategy bringing a positive change, we can always say that the change happened "naturally" and the strategy was superfluous. On the other hand, about a positive change not happening we can always say the problem wasn't lack of strategy, but that the change simply wasn't meant to happen, so any strategy would have failed, too.
Another argument against strategic changes is that sometimes people use a strategy and screw up. Or use a strategy to achieve an evil goal. (Did you notice it is usually the evil masterminds who use strategy to reach their goals? Or neurotic losers.) Just like trying to change yourself is "unnatural", trying to change the society is "undemocratic". We should only follow the uncoordinated unstrategic moves of millions of unstrategic individuals, and expect all the good things to happen magically (unless they simply weren't meant to happen, of course).
If you start following a strategy, all your imperfections may be reinterpreted as costs of following this strategy. Let's say that you don't have many friends. That's okay; there are many people like this. But let's say that you don't have many friends and you also study Japanese. Well, that means you are a heartless person who sacrificed human relations to a stupid obsession with anime, or something like that. A group of people can be criticized for taking things too seriously and spending too much time pursuing their goals (any value greater than zero can be too much). An even worse sin would be not accepting someone as a member just because that person's actions are contrary to the group's goals.
I am not sure where this all goes; I just have a feeling that if you want to live in a good society, you should not expect magic to happen, but should instead find like-minded people, create a group, and try to make the change you want to see. And you should expect to be attacked completely irrationally from all sides. That includes attacks from inside, because even some of your well-meaning members will accept the anti-epistemology and will try to convince you to take self-destructive actions, and if you refuse they will leave, disappointed.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-05-03T16:01:02.275Z · LW(p) · GW(p)
Given the abstracted tone you seem to be trying to go for here, you might consider modifying the examples in your fourth paragraph to point to more widely separated points in subculture-space, so as to reduce the chance that an uncharitable reader might interpret this as a defensive reaction to how some particular subculture is often treated.
comment by JoshuaZ · 2013-05-08T05:02:03.144Z · LW(p) · GW(p)
The standard problem with using the Drake Equation and similar formulas to estimate how much of the Great Filter is in front of us and how much is behind us is the lack of good estimates for most terms. However, there are other issues also. The original version of the Drake Equation presupposes independence of variables but this may not be the case. For example, it may be that the same things that lead to a star having a lot of planets also contribute to making life more likely (say for example that the more metal rich a star is the more elements that life has a chance to form from or make complicated structures with). What are the most likely dependence issues to come up in this sort of context, or do we know so little now that this question is still essentially hopeless?
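For concreteness, here is a minimal Monte Carlo sketch of why dependence between the terms matters. The two Drake-style factors, their log-normal parameters, and the correlation value are all invented purely for illustration, not estimates of anything:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
rho = 0.8  # assumed (illustrative) correlation between the two factors in log space

z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)

# Two illustrative log-normal Drake-style factors, e.g. the fraction of stars
# with planets and the fraction of planets that develop life.  Parameters are
# made up for illustration only.
m1, s1 = -0.5, 0.6
m2, s2 = -2.0, 1.0

f_planets = np.exp(m1 + s1 * z1)

# Independent case: the second factor gets its own noise.
f_life_indep = np.exp(m2 + s2 * z2)

# Correlated case with the SAME marginal distribution: the second factor partly
# shares the first factor's noise (e.g. both driven by stellar metallicity).
f_life_corr = np.exp(m2 + s2 * (rho * z1 + np.sqrt(1 - rho**2) * z2))

print("mean product, factors independent:", np.mean(f_planets * f_life_indep))
print("mean product, factors correlated: ", np.mean(f_planets * f_life_corr))
print("product of the two means:         ", f_planets.mean() * f_life_indep.mean())
```

With positive correlation the expected product of the factors exceeds the product of their individual expectations (and a negative correlation pushes the other way), so multiplying point estimates term by term, as the independence assumption licenses, can be systematically off even when every marginal estimate is right.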
comment by CAE_Jones · 2013-05-02T09:40:34.972Z · LW(p) · GW(p)
I started typing something, then realized it was based on someone's claim in a forum discussion and I hadn't bothered trying to verify it.
It turns out that the information was exaggerated in such a way that, had I not bothered verifying, I would have updated much more strongly in favor of the efficacy of an organization of which he was a member. I got suspicious when Google turned up nothing interesting, so I checked the web site of said organization, which included a link to a press release regarding the subject.
Based on this and other things I've read, I conclude that this organization tends to have poor epistemic rationality skills overall (I haven't tested large groups of members; I'm comparing the few individual samples I've seen to organization policies and strategies), but the reports that they publish aren't as biased as I would expect if this were hopelessly pervasive.
(On the off chance that said person reads this and suspects that he is the subject, remember that I almost did the exact same thing, and I'm not affiliated with said organization in any way. Is there LW discussion on the tendency to trust most everything people say?)
Replies from: Richard_Kennaway, Document
↑ comment by Richard_Kennaway · 2013-05-02T10:23:44.676Z · LW(p) · GW(p)
Is there LW discussion on the tendency to trust most everything people say?)
This older post is relevant.
↑ comment by Document · 2013-05-02T11:49:43.487Z · LW(p) · GW(p)
It turns out that the information was exaggerated in such a way that, had I not bothered verifying, I would have updated much more strongly in favor of the efficacy of an organization of which he was a member.
I initially misread this as saying you were impressed with his persuasive skill and strongly tempted to update on the organization's effectiveness based on that.
Replies from: CAE_Jones
comment by gwern · 2013-05-03T16:22:50.997Z · LW(p) · GW(p)
Last night I finished writing http://www.gwern.net/Google%20shutdowns
I'd appreciate any comments or fixes before I go around making a Discussion post and everything.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2013-05-04T21:32:03.550Z · LW(p) · GW(p)
Ideally we would have Google hits from the day before a product was officially killed, but the past is, alas, no longer accessible to us
Google kinda, sorta, lets you search the past, under "search tools." I think it filters pages by date of creation, but searches current text, so that "recent posts" type side bars pollute the results. And what is probably worse, it probably doesn't return dead pages.
Replies from: gwern
↑ comment by gwern · 2013-05-04T22:52:53.138Z · LW(p) · GW(p)
It doesn't return dead pages, and the date-filtering is highly error-prone, I've found: while using it in searching for launch and shutdown dates, there were many 'leaks from the future', we could call them. (Articles from 2007 lamenting the shutdown of Google Reader...)
comment by ITakeBets · 2013-05-02T16:42:58.506Z · LW(p) · GW(p)
Request for advice:
I need to decide in the next two weeks which medical school to attend. My two top candidates are both state universities. The relevant factors to consider are cost (medical school is appallingly expensive), program quality (reputation and resources), and location/convenience.
Florida International University
Cost: I have been offered a full tuition scholarship (worth about $125,000 over four years), but this does not cover $8,500/yr in "fees" and the cost of living in Miami is high. The FIU College of Medicine's estimated yearly cost of attendance (including all living expenses) is nearly $69,000; if I multiply that by four years and subtract the value of my scholarship I get about $145,000. However, my husband will continue working during all four years, defraying some of my expenses, so I hope to keep my actual indebtedness at graduation under $100,000 if I attend FIU.
Program Quality: This is difficult to gauge, because the program is very new, having only graduated its first class of MDs this year. Their reputation is necessarily unestablished. All of their graduates successfully matched into residencies this year (a few in prestigious hospitals and competitive specialties), but this is reassuring rather than impressive. They only graduated 33 students although they matriculated 40 in their first year; not sure if that represents a worrying rate of attrition or what became of the other students (though I plan to ask). Another consideration is that although FIU is affiliated with many well-known hospitals in South Florida, they do not have a dedicated teaching hospital.
Location/Convenience: Already mentioned the higher cost of living. Miami is also farther from where we are currently living and working (over 3 hours away vs. under 2 for Gainesville). My husband could probably find work in Miami, but it might be less desirable or pay less than his current job, and we would probably need to live apart during the week until he does. Also, the widely-scattered hospitals through which FIU students rotate, as well as South Florida traffic, make me worry about my quality of life during my third year.

The University of Florida
Cost: I have been offered $7,500 per year in aid. The rest of the $50k/year cost of attendance (including living expenses) would be loans. Again, my husband would continue to work and pay some of my expenses. In all I estimate a $30k-$60k difference in indebtedness at graduation between the two programs (in FIU's favor).
Program Quality: UF is Florida's oldest and best-respected medical school, which is to say good but not elite. UF also has a reputable teaching hospital on campus, and a larger research budget, which would help build my resume if I decide to try for a very competitive specialty. They graduate 95% of their students within 4 years (98% in 5 years), and their residency match list looks a bit nicer than FIU's on average. For what it's worth (probably not much), I have a better feeling about this program's "culture" based on the events I've attended.
Location/Convenience: Gainesville is closer. It might be feasible for my husband to stay at his current workplace for all four years if we find a good place to live around midway between.
Other advice I have received: Jess Whittlestone at 80,000 Hours suggested I'd do best, impact-wise, to consider which school would maximize my earning-to-give potential. This would mostly depend on the specialty I go into-- based on how I feel now, I'm most likely to try for an Internal Medicine subspecialty, which would mean doing a fellowship after residency. A good residency match would position me well for a fellowship in a competitive field. Physicians whom I have asked for advice tell me that people commonly match into even very competitive residencies from lower-ranked US MD schools, but it takes more work (better test scores, stronger evaluations). They also tend to say "OMG take the money" when I say the words "full tuition scholarship".
You can probably tell that I lean towards UF, but I don't want to make a bad call. What am I missing? What should I be asking the schools? Where should I go?
Replies from: Zaine, John_Maxwell_IV, Qiaochu_Yuan
↑ comment by Zaine · 2013-05-09T03:43:48.737Z · LW(p) · GW(p)
Although the FIU is new, its curriculum seems to fit the old Flexner I mold. I cannot tell the state of UF's program from the site.
Research options at FIU appear limited, but if you have an interest in one among those available, this concern does not hold.
What do you want to pursue in a medical career? Research? Patient Care? Whatever earns the most money?
To find the necessary information if the answer is:
Research - Visit the school and investigate the status of its research department. Learn about ongoing studies, the attention ratios of the Principal Investigator to Junior Investigator to students, and the amount of freedom allowed in pursuing research interests.
Patient Care - Ask existing students of all years what their curriculum has been, and how much time they have spent with patients. Flexner I involves two years of study, then two years of practical application; Flexner II (an informal moniker) isn't a set system as individual schools are slowly implementing and trying new and different things, but generally differs from Flexner I - for example, involving patient care as part of the first two years.
Money - There are many avenues to approach this. Naturally the more prestige your school has the better, as that will help determine the quality of your first post; however, with enough research publications you can make your own prestige, and research will always be a value marker. Your alma mater on the other hand matters less and less as time passes and jobs accumulate.
↑ comment by ITakeBets · 2013-05-09T05:30:20.727Z · LW(p) · GW(p)
I plan on a career in patient care. I will almost certainly do research in medical school, but based on past experience I don't expect to find it extremely compelling or to be extraordinarily good at it. Money concerns me if only for philanthropic purposes. The field that interests me most now (infectious disease) does not pay especially well, but I have decided that I really should seriously consider more lucrative paths that might let me donate enough to save twice as many lives in the developing world.
Both schools seem to have pretty solid clinical training and early patient exposure, to hear the students tell it (though they have little basis for comparison). I don't have a strong preference between their curricula, except my worries about driving around between hospitals in Miami.
Replies from: Zaine
↑ comment by Zaine · 2013-05-09T08:08:56.765Z · LW(p) · GW(p)
To me it then appears you have two (clear) paths in line with your preferences. Your emotional preference, what makes you happy, sounds like helping people in person (fuzzies). Your intellectual preference, goal, or ambition, could be paraphrased as, "Benefit to the highest possible positive degree the greatest number of people." Your ideal profession will meet somewhere between the optimal courses for each of these two preferences.
I list these to avoid misunderstanding.
The first course is the one you're pursuing - get an MD, work with patients to be happy, and donate to efficient high-utility charities in order to live with yourself. If the difference in cost really will only come out to US$30-60k, you will be able to live with your husband while attending UF, UF is more prestigious and would cause you less worry, and if matriculating to UF makes you happier - then by all means attend UF! I'd be quite certain about the numbers, though.
The second course isn't unique to medical professionals, but they do have special skills which can be of unique use. Go to a developing country and solve medical problems in highly replicable and efficient manners. This course probably meets your two preferences with the least amount of compromise.
If you're unfamiliar with Paul Farmer, he went (still goes, maybe) to Haiti and tried to solve their medical problems - he had some success, but unfortunately the biggest problem with Haiti was governmental infrastructure, without which impact cannot be sustained.
The second course would involve you using medical expertise to solve medical problems, and acquiring either additional knowledge or a partner with knowledge of how to establish infrastructure sufficient to sustain your solution. The final step involves writing Project Evaluations on your endeavours so that others can replicate them in wide and varied locales - this is how you make an impact.
Not knowing anything about your husband, the above reasoning assumes he doesn't have any impact upon the decision.
Replies from: ITakeBets
↑ comment by ITakeBets · 2013-05-09T16:09:37.248Z · LW(p) · GW(p)
Thanks, your advice more or less coincides with what I was planning, up until Ohio State confused me again. I certainly have not ruled out international medicine and nonprofit work as some part of my career, but I don't see that any of the schools that have accepted me has a clear advantage on that front.
Replies from: Zaine
↑ comment by Zaine · 2013-05-10T04:59:20.908Z · LW(p) · GW(p)
Perhaps one of the schools has someone on the faculty with experience in that area, and could mentor you. If I may inquire, how did Ohio State confuse you?
Replies from: ITakeBets
↑ comment by ITakeBets · 2013-05-10T11:39:45.164Z · LW(p) · GW(p)
On Wednesday they awarded me a scholarship covering full in-state tuition, making them probably my least expensive option (since it's easy to establish residency for tuition purposes in Ohio after a year or two). It's an excellent program, but moving would be hard and Columbus is cold and far from both our families.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-08T08:19:02.958Z · LW(p) · GW(p)
A good residency match would position me well for a fellowship in a competitive field.
Do competitive fields tend to be the highest-paying? I would have assumed that the fields where there were more people going in to them than spots available had relatively low pay due to supply and demand, and the highest pay was to be found by going in to a field that was somehow difficult, boring, or distasteful in a way that discouraged people from entering it.
Replies from: ITakeBets
↑ comment by ITakeBets · 2013-05-08T15:09:23.576Z · LW(p) · GW(p)
Fair question. It seems that compensation is determined largely by what Medicare/insurance companies are willing to pay for procedures etc. I believe unfilled fellowship spots aren't really a problem in any field, but the highest-paying subspecialties attract the most applicants. For example, cardiologists are very well-compensated, and cardiology fellowships are among the most competitive.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-08T20:14:34.805Z · LW(p) · GW(p)
Interesting.
Right now I'd be leaning towards UF if I were you, I think, since my intuition is that $30-60K isn't much debt relative to what physicians typically make. But have you thought about using instacalc.com or some other spreadsheet to actually tally up all the numbers related to fees, cost of living, expected career earnings, time value of money/discounting, etc.?
Congratulations on getting admitted to medical school, btw.
Replies from: ITakeBets
↑ comment by ITakeBets · 2013-05-09T04:00:30.591Z · LW(p) · GW(p)
Thank you!
I had just about settled on UF when I was suddenly struck with SERIOUS FIRST WORLD PROBLEMS as Ohio State, the highest-ranked school that accepted me, offered me a scholarship covering full in-state tuition. Ohio is quite easy to establish residency in, so I'd probably only be out of pocket the difference between in-state and out-of-state tuition for the first year, but of course I'd have to move, and we'd be far from both our families.
I put together a spreadsheet taking into account the cost of moving, transportation costs, estimated change in rent, tuition and fees, and potential lost wages-- and it looks like OSU could actually be the least expensive of the three, depending on whether I manage to establish residency in time to get in-state tuition my second year (I'm told this is the norm). My estimate for the difference between UF and FIU increased slightly to $40k-$70k. I am not sure what to do about estimated career earnings-- lots of variance there, and I'm having a hard time weighing it against the costs, which I can be much more confident about.
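For readers who want to build the same kind of tally, a minimal sketch of the discounting step follows. The per-year figures and the 3% discount rate are placeholder assumptions loosely inspired by the thread (cost of attendance minus scholarships/aid, before the husband's contribution), not anyone's actual numbers:

```python
def present_cost(yearly_out_of_pocket, rate=0.03):
    """Discount a list of per-year costs (index 0 = first year) back to today."""
    return sum(cost / (1 + rate) ** year
               for year, cost in enumerate(yearly_out_of_pocket))

schools = {
    # Placeholder per-year out-of-pocket estimates; replace with real figures.
    "FIU (placeholder figures)": [38_000] * 4,
    "UF (placeholder figures)": [42_500] * 4,
}

for name, costs in schools.items():
    print(f"{name}: about ${present_cost(costs):,.0f} in today's dollars")
```

The same structure extends naturally to moving costs, lost wages, and discounted career-earnings scenarios: each is just another list of per-year cash flows to sum, which is essentially what the spreadsheet suggestion above amounts to.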
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-09T06:49:37.015Z · LW(p) · GW(p)
Congratulations on your first world problems! I don't have any brilliant ideas on estimating career earnings, sorry.
↑ comment by Qiaochu_Yuan · 2013-05-02T19:18:51.894Z · LW(p) · GW(p)
Replies from: ITakeBets
↑ comment by ITakeBets · 2013-05-02T19:25:43.871Z · LW(p) · GW(p)
Do you have one in mind? Or are you just advising against medical school, and if so, why?
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-05-02T19:31:58.194Z · LW(p) · GW(p)
I'm suggesting that you spend some time writing down what your third options are. Seems like a good thing to do in general. I don't know what your third options are or how they compare to medical school, so I can't say anything about that.
Replies from: ITakeBets
↑ comment by ITakeBets · 2013-05-02T19:47:58.025Z · LW(p) · GW(p)
Ok, I agree that's probably good advice in general. I've tried to avoid premature closure throughout the process of making this career change, but I'll explicitly list some third options when I journal tonight. The bulk of my probability mass is in these two schools, though, so I am especially interested in advice that would help me choose between them.
comment by niceguyanon · 2013-05-02T07:08:10.606Z · LW(p) · GW(p)
Lately there seems to be an abundance of anecdotal and research evidence in favor of refraining from masturbation and quitting porn. I am not sure the evidence is conclusive enough for me to believe the validity of the claims. The touted benefits are impressive, while the potential cons seem minimal. I would be interested in some counterarguments and, if it's not too personal, I'd like to know the thoughts of those who have tried quitting masturbation/porn.
Replies from: Qiaochu_Yuan, sixes_and_sevens, MrMind, gwern, Viliam_Bur, hg00, army1987
↑ comment by Qiaochu_Yuan · 2013-05-02T18:48:36.099Z · LW(p) · GW(p)
I quit porn three weeks ago and attempted to quit masturbation but failed. Subjectively I notice that I'm paying more attention to the women around me (and also having better orgasms when I do masturbate). My main reason for doing this was not so much that I found the research convincing as that the fact that people were even thinking about porn in this particular way helped me reorient my attitude towards porn from "it's harmless" to "it's a superstimulus, it may be causing a hedonic treadmill, and I should be wary of it in the same way that I'm now wary of sugar." (There's also a second reason which is personal.)
I like sixes_and_sevens' hypothesis. Here's another one: a smallish number of people really do have a serious porn addiction and really do benefit substantially from quitting cold turkey, but they're atypical. (I don't think I fall into this category, but I still think this is an interesting experiment to run.)
General comment: I think many people on LW have an implicit standard for adopting potential self-improvements that is way too high. When you're asking for conclusive scientific evidence, you're asking for something in the neighborhood of a 90% probability of success or higher. I think you should be willing to take probabilities of success in the neighborhood of 10% or lower in cases where the costs are sufficiently low. If you try out enough self-improvements, one of them may improve your life enough to have been worth all of the other failures (again, in cases where the costs are low). Plus, I think it's useful to make a habit out of changing your habits (think of it as simulated annealing on your life). Otherwise, you may just get better and better at arguing yourself out of changing anything.
In other words, I think people should be less risk-averse with respect to potential self-improvements. Anna thinks something like this is particularly likely to be a failure mode of people with a math background, where the demands for probability of correctness are much higher than in most of life.
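To make the expected-value point concrete, here is a toy calculation in which every number is invented for illustration: a change that costs a few hours to try, has only a 10% chance of sticking, but saves a little time every day if it does.

```python
# Toy expected-value check for a cheap self-improvement experiment.
# All numbers below are invented for illustration.
cost_hours = 5                    # time spent setting up and trying the change
p_success = 0.10                  # chance the change sticks and actually helps
benefit_hours = (15 / 60) * 365   # 15 minutes/day saved for a year if it works

expected_gain = p_success * benefit_hours
print(f"expected benefit ~{expected_gain:.1f} hours vs. cost of {cost_hours} hours")
```

Under these made-up numbers the expected benefit (~9 hours) already exceeds the cost, which is the sense in which low success probabilities can still be worth acting on when the experiment is cheap.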
Replies from: PhilGoetz, Adele_L, army1987, Kaj_Sotala
↑ comment by PhilGoetz · 2013-05-02T21:33:49.937Z · LW(p) · GW(p)
I think many people on LW have an implicit standard for adopting potential self-improvements that is way too high.
People on LW have a habit of treating posts as if LW were a peer-reviewed journal rather than a place to play with ideas.
↑ comment by Adele_L · 2013-05-02T21:04:39.096Z · LW(p) · GW(p)
On a slightly related note, vibrators like the Hitachi Magic Wand are probably a superstimulus for women analogous to porn for men. (of course, anyone can enjoy either type, but that is less common)
Also I agree with your general comment about self improvements, especially since it is hard to find techniques/habits that work for everyone.
↑ comment by A1987dM (army1987) · 2013-05-03T12:06:34.694Z · LW(p) · GW(p)
I think you should be willing to take probabilities of success in the neighborhood of 10% or lower in cases where the costs are sufficiently low.
Yes, but make sure to count all the costs, incl. opportunity costs, in there.
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-05-03T17:38:45.680Z · LW(p) · GW(p)
Agreed. But the opportunity cost of quitting porn, for example, is negative: it's actually saving me time.
↑ comment by Kaj_Sotala · 2013-05-06T19:30:58.083Z · LW(p) · GW(p)
I think many people on LW have an implicit standard for adopting potential self-improvements that is way too high.
I thought that the opposite was true, in that LW regulars tended to be eager to try any suggested self-improvement idea that anybody had spent more than a few sentences offering anecdotal support for. Though that might just be overgeneralizing from my own habits.
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-05-06T22:57:32.643Z · LW(p) · GW(p)
Hmm. My impression is that people here are very willing to try anti-akrasia ideas but not very willing to try other kinds of ideas. I could be mistaken though.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2013-05-07T02:15:57.639Z · LW(p) · GW(p)
This is also my impression. People are willing to discuss anti-akrasia since Eliezer talked about it, but otherwise people have an unfortunate alief that any advice older than a couple of decades is superstition.
↑ comment by sixes_and_sevens · 2013-05-02T15:11:54.207Z · LW(p) · GW(p)
Hypothesis: arbitrary long-term acts of self-control improve personal well-being, regardless of the benefits of the specific act.
Replies from: Alejandro1
↑ comment by Alejandro1 · 2013-05-04T18:11:05.392Z · LW(p) · GW(p)
See also: Lent.
↑ comment by MrMind · 2013-05-02T08:33:35.496Z · LW(p) · GW(p)
I cannot reach the site from where I am now, but try looking at The Last Psychiatrist blog; it has an article about exactly that. Its main point is that there's an underlying problem that causes both porn addiction and difficulties with sexual relationships, so the two are not directly related. I have to say that my experience agrees with that: I don't have any particular problem with my sexuality, and quitting porn for a couple of months did not have any noticeable positive or negative effect.
↑ comment by gwern · 2013-05-02T15:52:42.037Z · LW(p) · GW(p)
I am not sure that the evidence is conclusive enough for me to believe the validity of the claims.
Neither am I. The anecdotal evidence is the usual crap that you'll see for anything, and the research they cite is equivocal, or only distantly related, or worse (someone linked a blog post arguing for this on LW in the past, and I pointed out that most of the points were awful and that one study actually showed the opposite of what they thought it showed, although I can't seem to refind this comment right now).
↑ comment by Viliam_Bur · 2013-05-02T11:23:04.439Z · LW(p) · GW(p)
On skeptics.stackexchange.com, the only answer on this topic is that masturbation is completely harmless.
Replies from: gothgirl420666↑ comment by gothgirl420666 · 2013-05-02T13:09:20.853Z · LW(p) · GW(p)
There's a big difference between the physical act of masturbation, which is probably harmless and good for you in moderate amounts, and the mental act of watching porn, which seems to be what people are advocating refraining from.
Also, r/nofap is weirdly cult-like from what I've seen and probably not a good resource. For example, this is the highest upvoted post that's not a funny picture, and it seems to be making very, very exaggerated claims about the benefits of not jacking off: "If you actually stop jerking off, and I mean STOP - eliminate it as a possibilty from your life (as I and many others have) - your sex starved brain and testicles will literally lead you out into the world and between the legs of a female. It just HAPPENS. Try it, you numbskull. You'll see that I speak the truth."
Replies from: Viliam_Bur, CAE_Jones↑ comment by Viliam_Bur · 2013-05-03T06:40:50.460Z · LW(p) · GW(p)
There's a big difference between the physical act of masturbation, which is probably harmless and good for you in moderate amounts, and the mental act of watching porn, which seems to be what people are advocating refraining from.
Oh. I didn't notice the difference, because I automatically assumed those two acts to be connected.
So, would that mean that masturbation without watching porn is healthy and harmless, but masturbation while watching porn is harmful? Sounds like an easy setup for a scientific experiment.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-05-03T07:34:58.614Z · LW(p) · GW(p)
But what if you're imagining porn?
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-05-03T08:46:43.107Z · LW(p) · GW(p)
Uhhh... perhaps the best solution would be to masturbate while solving algebra problems, just to make sure to avoid the sin of superstimulus. (Unless algebraic equations count as a superstimulus too, in which case I am doomed completely.)
This whole topic feels extremely suspicious to me. We have two crowds shouting their messages ("masturbation is completely safe and healthy, no bad side effects ever", "porn is a dopamine addiction to superstimulus and will destroy your mind"), both of them claim to have science on their side, and imagining the world where both are correct does not make much sense.
To be honest, I suspect that both crowds are exaggerating and filtering the evidence. I also suspect that the actual reasons which created these crowds are something like this -- "Watching porn and masturbation is something that low-status males do, because high-status males get real sex. Let's criticize the low-status thing. Oh wait, women masturbate too; and we can't criticize that, because criticizing women would be sexist! Also, religion criticized masturbation, so we should actually promote it, just to show how open-minded we are. But porn is safe to criticize, because that's mostly a male thing. Therefore masturbation is perfectly okay, especially for a female, but porn is bad, and masturbation with porn is also bad. Other kinds of superstimuli, such as romantic stories for women, don't associate with low status, therefore we should ignore them in our debate about the dangers of superstimuli. Let's focus on criticizing the low-status things."
Replies from: NancyLebovitz, gothgirl420666, bogus↑ comment by NancyLebovitz · 2013-05-03T14:38:09.737Z · LW(p) · GW(p)
Romance novels are low status. They just aren't as low status as porn.
↑ comment by gothgirl420666 · 2013-05-03T12:12:30.416Z · LW(p) · GW(p)
I really don't understand how imagining "porn is a superstimulus because it allows you to instantly watch amazing sex that conforms to your personal taste, and therefore makes real sex seem less enjoyable" and "masturbation is not physically unhealthy, nor will it make real sex seem less enjoyable, and not walking around with blue balls all the time will make you a little happier, and 'practicing' for sex occasionally will make the act easier" leads to a world that doesn't make sense. I think it makes much more sense than your conspiracy theory against low-status males.
And romantic stories for women seem to obviously not be a superstimulus in the same way porn might be? (For one, outside the realm of porn, TV is fairly addictive and literature isn't.) There are diagnosed porn addicts whose addiction is ruining their lives, but I've never heard of any romance novel addicts.
Replies from: Viliam_Bur, OrphanWilde↑ comment by Viliam_Bur · 2013-05-03T15:25:02.478Z · LW(p) · GW(p)
My reasoning is that if porn is seriously harmful and masturbation is absolutely harmless, there should be some aspect present in porn, but absent from masturbation and everyday life, which causes the harm. I have trouble pointing out precisely what that aspect would be.
Too much conforming to my personal taste? That's already true of masturbation. Unlike with real sex, I can decide when, how often, for how long or short a time, etc. But I am supposed to believe that none of this is a superstimulus, and that it cannot make real sex less enjoyable even a bit. I am also supposed to believe that the similarities between masturbation and sex will help with practising and make the act easier, but that the differences are absolutely inconsequential.
Seeing too many sexy ladies I can't have sex with, some of whom could be even more attractive than my partner? Well, I see sexy ladies when I walk down the street. In the summer I will see even more. On the beach, still more. (I am not sure whether a nudist beach is already beyond the limits, or not.) But I am supposed to believe that as long as I don't see their nipples or something, it is completely safe. But if I see a nipple, my brain will release waves of dopamine and my mind will be ruined. (If I understand the definition of porn correctly, seeing a naked sexy lady in a picture is already porn, even if she is not doing anything with anyone, am I right? And even limiting oneself to that kind of porn would already be harmful.)
All of that together? So if I see a sexy lady on the beach, and then I go home and masturbate thinking about her, that's completely harmless. However, if I take a picture of her, and then at home I look at the picture, especially if the picture was taken at the nudist beach, that is harmful; the mere looking is harmful, even if I don't touch myself.
Sorry for the exaggerations, but this is how those theories feel to me when taken together. I can imagine making convincing arguments for each of them separately. I just have trouble imagining a reasonable model which would explain both of them at the same time. Why would a visual superstimulus ruin real sex, while a tactile one is completely harmless?
Compared with that, the hypothesis "it is popular to slander low-status behavior, and the rest is rationalization" seems more likely.
Replies from: gothgirl420666, Qiaochu_Yuan, NancyLebovitz, Jiro, army1987, TheOtherDave↑ comment by gothgirl420666 · 2013-05-03T23:43:06.657Z · LW(p) · GW(p)
Honestly, dude, you seem to be sort of engaging in black-and-white thinking that I wouldn't expect from a LW reader. Yes, a noncentral example of porn use such as "looking at a candid picture of a nude woman and not touching your dick" is almost definitely harmless. A much more central example of porn use, however, is a guy who has been jacking off to porn four times a week since he was about thirteen, and has in that time seen probably hundreds of porn videos, of which he has selected a few that appeal very specifically to his particular tastes, which he watches regularly. There's obviously no boundary where as soon as you do something labeled "watching porn" your brain will "release waves of dopamine and ruin your mind". But it doesn't seem hard to imagine that maybe that guy would be healthier if he changed his habits and started jacking off to his imagination (which he would probably end up doing much less frequently, I imagine), and "don't jack off to anything but your imagination" is a much, much more effective rule to precommit to than "stop watching porn if you get the feeling that you might be falling for a superstimulus", or whatever.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-05-04T08:39:47.710Z · LW(p) · GW(p)
Honestly, dude, you seem to be sort of engaging in black-and-white thinking that I wouldn't expect from a LW reader.
Ironically, I imagined myself as making fun of other people's black-and-white thinking. (Masturbation completely healthy and harmless: in the skeptics discussion I linked. Porn: superstimulus ruining one's mind and life.) I tried to find out what exactly the world would look like for people who believe both of these things, mostly because nobody here tried to contradict either of them. What would be the logical consequences of these beliefs -- because people are often not aware of the logical consequences of the beliefs they already have.
To me, both these beliefs feel like exaggerations, and they also feel contradictory, although technically they are not speaking about exactly the same thing. One kind of superstimulus is perfectly safe, the other kind of superstimulus is addictive -- is this an inconsistent approach to superstimuli, or a claim that these superstimuli are of a different nature?
I am thankful to the two contributors willing to bite the bullet and describe what the world could look like if both beliefs were true. TheOtherDave said that actions controlled by one's own mind (masturbation) could have a smaller effect than actions not controlled by one's own mind (watching a porn movie), just as it is difficult to tickle oneself. Qiaochu_Yuan said that some actions have a natural limit where a human must stop (masturbation), while other actions have no such limit and can be prolonged indefinitely (watching porn), just as you can't eat the whole day, but you can play a computer game the whole day. -- Both of these answers make sense, and I did not realize that before.
And that's essentially all I wanted from this topic. (Unless someone can give me a pointer to a scientific study concerned with the differences between masturbation without porn and masturbation with porn, in terms of addiction and behavioral change.)
↑ comment by Qiaochu_Yuan · 2013-05-03T21:10:55.998Z · LW(p) · GW(p)
I have trouble pointing out precisely what that aspect would be.
You can continuously watch porn in the same way that you can continuously play World of Warcraft. You can't continuously masturbate in the same way that you can't continuously eat pizza.
"Porn" is too vague. Are you talking about a quick 5-minute session or a marathon lasting several hours? If you've never done the latter, consider that some people might. The effects of the two are likely to be quite different, especially if the latter is a frequent occurrence.
Also, it's not at all popular among my friend groups to slander porn. That's seen as sex-negative, which is one reason I never got around to thinking about porn as potentially harmful until quite recently.
↑ comment by NancyLebovitz · 2013-05-03T20:12:10.816Z · LW(p) · GW(p)
It may be that masturbation produces satiation much more than looking at pictures does.
↑ comment by Jiro · 2013-05-03T18:08:41.312Z · LW(p) · GW(p)
Generally, when people claim something is harmless, they don't mean that it's "absolutely harmless". Playing videogames is harmful if you do it to the exclusion of eating, sleeping, and excreting, but one would not normally say that videogames are harmful based on them being harmful under such conditions. It is entirely possible to claim that porn is harmful, and that masturbation under similar circumstances (such as masturbating to mental images of people) is also harmful, while still consistently insisting that masturbation is harmless.
↑ comment by A1987dM (army1987) · 2013-05-03T22:08:58.871Z · LW(p) · GW(p)
I guess that according to such people the problem is not porn per se, but the addiction to porn. Looking at ladies on the beach and going home and masturbating once isn't problematic, but if you do that for 10% of your waking time for years... And ‘don't watch porn’ makes for a better Schelling point than ‘don't watch more than half an hour of porn a week’, for someone who's trying to quit.
↑ comment by TheOtherDave · 2013-05-03T15:50:49.330Z · LW(p) · GW(p)
While I agree with your ultimate conclusion, it's not that implausible that synchronously controlled self-stimulation (which IME most masturbation is, though I suppose it depends on what you're into) is less stimulating than asynchronously controlled self-stimulation (e.g., programming a pattern of changing frequencies on a vibrator, or downloading a bunch of porn and queuing a slideshow on my desktop, or visiting a series of previously selected websites with changing content), for many of the same reasons that I can't tickle myself effectively with my fingers but can easily be tickled by inanimate objects.
If that turns out to be true, I would expect a not-very-rigorous analysis to conclude "masturbation is less stimulating than porn", since asynchronously controlled masturbation is relatively rare, as is synchronously controlled porn.
↑ comment by OrphanWilde · 2013-05-03T13:38:27.092Z · LW(p) · GW(p)
Literature isn't addictive? I think I'm going to have to disagree with you there. (And TV isn't addictive for me, personally, at -all-.)
Additionally, a Google search on "romance novel addiction" suggests there are such addicts.
↑ comment by bogus · 2013-05-03T10:20:46.305Z · LW(p) · GW(p)
We have two crowds shouting their messages ("masturbation is completely safe and healthy, no bad side effects ever", "porn is a dopamine addiction to superstimulus and will destroy your mind"), both of them claim to have science on their side, and imagining the world where both are correct does not make much sense.
Really? I can imagine a world where plenty of things that might be considered addictive are quite safe and healthy, as long as you do them in moderation - and what counts as "moderation" may well be different for different people. E.g. some people might be highly sensitive to addiction, so that their only alternative is quitting the habit entirely.
↑ comment by CAE_Jones · 2013-05-02T19:29:07.036Z · LW(p) · GW(p)
When you say "eliminate it as a possibility from your life", I get quite confused as to how this is managed. Take on as many roommates as possible and keep bathrooms on strict timers to minimize opportunities to do it in private? I've heard of people using sorts of cages to make doing it absurdly difficult and/or painful, but the one set of anecdotes I came across didn't make it sound particularly effective.
It just sounds like you're saying it's within most people's abilities to make masturbation practically impossible, which I find a much more difficult claim to believe than the assertion about the results.
Replies from: gothgirl420666, army1987↑ comment by gothgirl420666 · 2013-05-02T23:51:22.643Z · LW(p) · GW(p)
First of all, I can't tell if you realize that it's not me saying it, it's a quote that I selected for its absurdity.
But more importantly, I think he just means that the thought of jacking off no longer occurs to him or seems like something he has any reason to do, just like the idea of going to the store and buying cigarettes doesn't really seem like a possibility to non-smokers. I don't think he's talking about wearing a chastity belt or anything.
Replies from: CAE_Jones↑ comment by CAE_Jones · 2013-05-03T10:20:38.600Z · LW(p) · GW(p)
Oh! I read that with a screen reader with punctuation turned off, so completely failed to notice that the last chunk of it was in quotes! Though I probably should have noticed something was up, if I'd compared it more carefully to the previous sentence, which makes two incredibly obvious posts I got completely wrong yesterday. :( Thanks for clarifying!
↑ comment by A1987dM (army1987) · 2013-05-03T22:23:10.078Z · LW(p) · GW(p)
Meh. I just use picoeconomics for that.
↑ comment by A1987dM (army1987) · 2013-05-03T22:20:13.803Z · LW(p) · GW(p)
I haven't watched porn on a regular basis for about a decade, so I won't comment on that. As for masturbation, IME there's an optimum: too much of it (more than a couple of times a week for me -- YMMV) seems to cause apathy and increase my need for sleep, but too little (less than once a week) makes it harder for me to think in a focused way about anything other than women, and to fall asleep; and after ten days or so I can feel physical discomfort in my testicles (which takes hours to go away even after I eventually masturbate).
Replies from: Prismattic↑ comment by Prismattic · 2013-05-04T03:20:11.382Z · LW(p) · GW(p)
Pretty sure there's quite a bit of variation in the optimum. I hit the "can't concentrate on anything else" point at between 48 and 72 hours, and I don't experience either apathy or greater sleep need.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-05-06T18:52:36.547Z · LW(p) · GW(p)
Yeah, the optimum used to be shorter for me, too. It's like I get habituated to [whatever happens when I don't masturbate for a while] so that I need more to get the same positive effects, the way I do with (say) caffeine.
comment by FiftyTwo · 2013-05-13T11:51:56.594Z · LW(p) · GW(p)
People who are currently in jobs you like, how did you get them?
Replies from: fubarobfusco, ModusPonies↑ comment by fubarobfusco · 2013-05-14T00:36:44.384Z · LW(p) · GW(p)
My partner (who actually had a résumé posted online, whereas I did not) got calls from two recruiters for the same company; and redirected one of them to me. We wanted to relocate to a warmer climate; we both interviewed and got offers.
In other words, I had sufficient skill ... but also I got lucky big-time.
(A harder question is whether I actually like my job. I've been doing it for 7+ years, but I'm also actively looking for alternatives.)
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-05-16T09:15:25.627Z · LW(p) · GW(p)
A harder question is whether I actually like my job. I've been doing it for 7+ years, but I'm also actively looking for alternatives.
Imagine that after the next 7 years of looking for alternatives, your current job will still seem like the best choice for you.
Did this sentence make you feel happy or sad?
↑ comment by ModusPonies · 2013-05-15T19:16:56.685Z · LW(p) · GW(p)
Comically large amounts of networking. The connection that landed me a programming job was my mom's dance instructor's husband.
comment by [deleted] · 2013-05-08T21:43:49.375Z · LW(p) · GW(p)
Hello,
I am a young person who recently discovered Less Wrong, HP:MOR, Yudkowsky, and all of that. My whole life I've been taught reason and science but I'd never encountered people so dedicated to rationality.
I quite like much of what I've found. I'm delighted to have been exposed to this new way of thinking, but I'm not entirely sure how much to embrace it. I don't love everything I've read although some of it is indeed brilliant. I've always been taught to be skeptical, but as I discovered this site my elders warned me to be skeptical of skepticism as well.
My problem is that I'd like an alternate viewpoint. New ideas are always refreshing, and it's certainly not healthy to constantly hear a single viewpoint, no matter how right your colleagues think they are. (It becomes even worse if you start thinking about a cult.)
Clearly, the Less Wrong community generally (unanimously?) agrees about a lot of major things. For example, religion. The vast majority of "rationalists" (in the [avoid unnecessary Yudkowsky jab] LW-based sense of the term) and all of the "top" contributors, as far as I can tell, are atheists.
Here I need to be careful to stay on topic. I was raised religious, and still am, and I'm not planning to quit anytime soon. I don't want to get into defending religion or even defending those who defend religion. My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking? If you say there aren't any, I won't believe you. I sincerely hope that you aren't afraid to expose your young ones to alternate viewpoints, as some parents and religions are. The optimal situation for you is that you've heard intelligent, thoughtful, rational criticism but your position remains strong.
In other words, one way to demonstrate an argument's strength is by successfully defending it against able criticism. I sometimes see refutations of pro-religious arguments on this site, but no refutations of good arguments.
Can you help? I don't necessarily expect you to go to all this trouble to help along one young soul, but most religious leaders are more than happy to. In any case, I think that an honest summary of your own weak points would go a long way toward convincing me that you guys are any better than my ministers.
Sincerely, and hoping not to be bitten, a thoughtful but impressionable youth
Replies from: shminux, wedrifid, Intrism, None, gwern, Bugmaster, JoshuaZ, Desrtopa, TheOtherDave, Qiaochu_Yuan, metatroll↑ comment by Shmi (shminux) · 2013-05-09T17:11:47.292Z · LW(p) · GW(p)
I have been vocally anti-atheist here and elsewhere, though I was brought up as a "kitchen atheist" ("Obviously there is no God, the idea is just silly. But watch for that black cat crossing the road, it's bad luck"). My current view is Laplacian agnosticism ("I had no need of that hypothesis"). Going through the simulation arguments further convinced me that atheism is privileging one number (zero) out of infinitely many possible choices. It's not quite as silly as picking any particular anthropomorphization of the matrix lords, be it a talking bush, a man on a stick, a dude with a hammer, a universal spirit, or what have you, but still an unnecessarily strong belief.
If you are interested in anti-atheist arguments based on moral realism made by a current LWer, consider Unequally Yoked. It's as close to "intelligent, thoughtful, rational criticism" as I can think of.
There is an occasional thread here about how Mormonism or Islam is the one true religion, but the arguments for either are rarely rational.
Replies from: None↑ comment by [deleted] · 2013-05-09T17:30:18.707Z · LW(p) · GW(p)
That's a really good way of looking at things, thanks. From now on I'm an "anti-atheist" if nothing else...and I'll take a look at that blog.
Could you bring yourself to believe in one particular anthropomorphization, if you had good reason to (a vision? or something lesser? how much lesser?)
Replies from: shminux↑ comment by Shmi (shminux) · 2013-05-09T17:38:53.174Z · LW(p) · GW(p)
Could you bring yourself to believe in one particular anthropomorphization, if you had good reason to (a vision? or something lesser? how much lesser?)
I find it unlikely, as I would probably attribute it to a brain glitch. I highly recommend looking at this rational approach to hypnosis by another LW contributor. It made me painfully aware how buggy the wetware our minds run on is, and how easy it is to make it fail if you know what you are doing. Thus my prior when seeing something apparently supernatural is to attribute it to known bugs, not to anything external.
Replies from: None↑ comment by [deleted] · 2013-05-09T17:43:14.940Z · LW(p) · GW(p)
The brain glitch is always available as a backup explanation, and such glitches certainly do happen (especially in schizophrenics, etc.). But if I had an angel come down to talk to me, I would probably believe it.
Replies from: shminux, TheOtherDave, JoshuaZ↑ comment by Shmi (shminux) · 2013-05-09T18:51:19.608Z · LW(p) · GW(p)
How would you tell the difference? Also see this classic by another LWer.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-09T19:34:45.712Z · LW(p) · GW(p)
Personally, I think this one is more relevant. The biggest problem with the argument from visions and miracles, barring some much more complicated discussions of neurology than are really necessary, is that it proves too much, namely multiple contradictory religions.
Replies from: shminux, None↑ comment by Shmi (shminux) · 2013-05-09T20:26:25.281Z · LW(p) · GW(p)
It's a good post, but overly logical and technically involved for a non-LWer. Even if you agree with the logic, I can hardly imagine a religious person alieving that their favorite doctrine proves too much.
↑ comment by [deleted] · 2013-05-10T10:32:09.185Z · LW(p) · GW(p)
It's a very interesting post. You're right that we can't accept all visions, because they will contradict each other, but in fact I think that many don't. It's entirely plausible in my mind that God really did appear to Mohammed as well as Joseph Smith, for instance, and they don't have to invalidate each other. But of course if you take every single claim that's ever been made, it becomes ridiculous.
Does it prove too much, then, to say that some visions are real and some are mental glitches? I'm not suggesting any way of actually telling the difference.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T13:42:26.021Z · LW(p) · GW(p)
Well, it's certainly not a very parsimonious explanation. This conversation has branched in a lot of places, so I'm not sure where that comment is right now, but as someone else has already pointed out, what about the explanation that most lightning bolts are merely electromagnetic events, but some are thrown by Thor?
Proposing a second mechanism which accounts for some cases of a phenomenon, when the first mechanism accounts for others, is more complex (and thus in the absence of evidence less likely to be correct) than the supposition that the first mechanism accounts for all cases of the phenomenon. If there's no way to tell them apart, then observations of miracles and visions don't count as evidence favoring the explanation of visions-plus-brain-glitches over the explanation of brain glitches alone.
It's possible, but that doesn't mean we have any reason to suppose it's true. And when we have no reason to suppose something is true, it generally isn't.
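The likelihood-ratio version of this point can be made concrete with a toy calculation; the prior odds below are an arbitrary illustration, not a figure anyone in the thread has claimed:

```python
# Toy Bayes-in-odds-form illustration: if "brain glitches only" and
# "brain glitches plus some genuine visions" predict the reported visions
# equally well, the reports cannot shift the odds between the two hypotheses.

def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior_odds = 0.01        # assumed prior odds favoring the more complex hypothesis
likelihood_ratio = 1.0   # both hypotheses explain the reports equally well
print(posterior_odds(prior_odds, likelihood_ratio))  # 0.01 -- unchanged by the reports
```

Any advantage one hypothesis has over the other therefore has to come from the prior, which is the parsimony point being made above.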
↑ comment by TheOtherDave · 2013-05-09T19:04:25.879Z · LW(p) · GW(p)
FWIW, I've had the experience of a Presence manifesting itself to talk to me. The most likely explanation of that experience is a brain glitch. I'm not sure why I ought to consider that a "backup" explanation.
Replies from: None↑ comment by [deleted] · 2013-05-09T19:56:38.186Z · LW(p) · GW(p)
Right, obviously it's a problem. There are lots of people who think they've been manifested to, and some of them are schizophrenic, and some of them are not, and it's a whole lot easier to just assume they're all deluded (even if not lying). But even Richard Dawkins has admitted that he could believe in God if he had no other choice. (I have a source if you want.)
Certainly, if you're completely determined not to believe no matter what—if you would refuse God even if He appeared to you himself—then you never will. But if there is absolutely nothing that would convince you, then you're giving it a chance of 0.
Since you are rationalists, you can't have it actually be 0. So what is that 0.0001 that would convince you?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-09T21:16:48.249Z · LW(p) · GW(p)
There's a big difference between "no matter what" and "if He appeared to you himself," especially if by the latter you mean appearing to my senses. I mean, the immediate anecdotal evidence of my senses is far from being the most convincing form of evidence in my world; there are many things I'm confident exist without having directly perceived them, and some things I've directly perceived I'm confident don't exist.
For example, a being possessing the powers attributed to YHWH in the Old Testament, or to Jesus in the New Testament, could simply grant me faith directly -- that is, directly raising my confidence in that being's existence. If YHWH or Jesus (or some other powerful entity) appeared to me that way, I would believe in them.
I'm assuming you're not counting that as convincing me, though I'm not sure why not.
But if there is absolutely nothing that would convince you, then you're giving it a chance of 0.
Actually, that isn't true. It might well be that I assign a positive probability to X, but that I still can't rationally reach a state of >50% confidence in X, because the kind of evidence that would motivate such a confidence-shift simply isn't available to me. I am a limited mortal being with bounded cognition; not all truths are available to me just because they're true.
But it may be that with respect to the specific belief you're asking about, the situation isn't even that bad. I don't know, because I'm not really sure what specific belief you're asking about. What is it, exactly, that you want to know how to convince me of?
That is... are you asking what would convince me of the existence of YHWH, Creator of the Universe, the God of my fathers and my forefathers, who lifted them up from bondage in Egypt with a mighty hand and an outstretched arm, and through his prophet Moses led them to Sinai where he bequeathed to them his Law?
Or what would convince me of the existence of Jesus Christ, the only begotten Son of God, who was born a man and died for our sins, that those who believe in Him would not die but have eternal life?
Or what would convince me of the existence of Loki, son of the All-Father Odin who dwells in highest Asgard, and will one day bring about Ragnarok and the death of the Gods?
Or... well, what, exactly?
With respect to those in particular, I can't think of any experience off-hand which would raise my confidence in any of them high enough to be worth considering (EDIT: that's hyperbole; I really mean "to convince me"; see below), though that's not to say that such experiences don't exist or aren't possible... I just don't know what they are.
With respect to other things, I might be able to.
Replies from: JoshuaZ, None↑ comment by JoshuaZ · 2013-05-09T21:25:33.884Z · LW(p) · GW(p)
With respect to those in particular, I can't think of any experience off-hand which would raise my confidence in any of them high enough to be worth considering, though that's not to say that such experiences don't exist or aren't possible... I just don't know what they are.
Huh. That's interesting. For at least the first two I can think of a few things that would convince me, and for the third I suspect that my not being easily convinced is connected more to my lack of knowledge about the religion in question. In the most obvious case, for YHVH: if everyone everywhere started hearing a loud shofar blowing, and then the dead rose, and then an extremely educated fellow claiming to be Elijah showed up and started answering every halachic question in ways that resolved all the apparent problems, I think I'd be paying close attention to the hypothesis.
Similar remarks apply for Jesus. They do seem to depend strongly on making much more blatant interventions in the world then the deities generally seem to (outside their holy texts).
Replies from: Eliezer_Yudkowsky, Desrtopa, TheOtherDave↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-14T08:32:13.564Z · LW(p) · GW(p)
Technically the shofar-blowing thing should not be enough sensory evidence to overcome the prior improbability of this being the God - probability of alien teenagers, etcetera - but since you weren't expecting that to happen and other people were, good rationalist procedure would be to listen very carefully to what they had to say about how your priors might've been mistaken. It could still be alien teenagers, but you really ought to give somebody a chance to explain to you how it's not. On the other hand, we can't execute this sort of super-update until we actually see the evidence, so meanwhile the prior probability remains astronomically low.
↑ comment by Desrtopa · 2013-05-09T21:37:20.657Z · LW(p) · GW(p)
and then an extremely educated fellow claiming to be Elijah showed up
In this context I think it makes sense to ask "showed up where?" but if the answer were "everywhere on earth at once," I'd call that pretty damn compelling.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-09T22:06:33.346Z · LW(p) · GW(p)
Not to mention crowded.
↑ comment by TheOtherDave · 2013-05-09T22:03:37.097Z · LW(p) · GW(p)
Yeah, you're right, "to be worth considering" is hyperbole. On balance I'd still lean towards "powerful entity whom I have no reason to believe created the universe, probably didn't lift my forefathers up from bondage in Egypt, might have bequeathed them his Law, and for reasons of its own is adopting the trappings of YHWH" but I would, as you say, be paying close attention to alternative hypotheses.
Fixed.
↑ comment by [deleted] · 2013-05-10T10:13:18.622Z · LW(p) · GW(p)
You're right, I'm assuming that God doesn't just tweak anyone's mind to force them to believe, because the God of the Abrahamic religions won't ever do that—our ultimate agency to believe or not is very important to Him. What would be the point of seven billion mindless minions? (OK, it might be fun for a while, but I bet sentient children would be more interesting over the course of, say, eternity.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T14:32:41.752Z · LW(p) · GW(p)
As I said at the time, it hadn't been clear when I wrote the comment that you meant, specifically, the God of the Abrahamic religions when you talked about God.
I've since read your comments elsewhere about Mormonism, which made it clearer that there's a specific denomination's traditional beliefs about the universe you're looking to defend, and not just beliefs in the existence of a God more generally.
And, sure, given that you're looking for compelling arguments that defend your pre-existing beliefs, including specific claims about God's values as well as God's existence, history, powers, personality, relationships to particular human beings, and so forth, then it makes sense to reject ideas that seem inconsistent with those epistemic pre-commitments.
That's quite a given, though.
Replies from: None↑ comment by [deleted] · 2013-05-10T15:41:59.921Z · LW(p) · GW(p)
If you do assume that God can (and does) just reach in and tweak our minds directly, then being "convinced" takes on a sort of strange meaning. Unless we're assuming that you remain in normal control of your own mind, the concepts of "choice," "opinion," and "me" sort of start to disappear.
I'm trying to talk about a deity in general, but you're right, it often turns into the God we're all familiar with. A radically different deity could uproot every part of the way we think about things, even logic and reason itself.
So in order to stay within our own universe, I think it's OK to assume that any God only intervenes to the extent that we usually hear about, like Old Testament miracles.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T15:59:01.512Z · LW(p) · GW(p)
A radically different deity could uproot every part of the way we think about things, even logic and reason itself. So in order to stay within our own universe, I think it's OK to assume that any God only intervenes to the extent that we usually hear about, like Old Testament miracles.
Wait... you endorse rejecting the lived experience of millions of people whose conception of deity is radically different from yours, on the grounds that to do otherwise could uproot logic, reason, and every part of the way we think about things?
Wow. Um... I genuinely don't mean to be offensive, but I don't know a polite way to say this: if I understood that correctly, I just lost all interest in discussing this subject with you.
You seemed to be arguing a while back that our precommitments to "the way we think about things" were not sufficient grounds to reject uncomfortable or difficult ideas, which is a position I can respect, though I think it's importantly though subtly false.
But now you just seem to be saying that we should not respect such precommitments when they interfere with accepting some beliefs, such as one popular conception of deity, while considering them sufficient grounds to reject others, such as different popular conceptions of deity.
Which seems to bring us all the way back around to the idea that an "atheist" is merely someone who treats my God the way I treat everyone else's God, which is boring.
Have I misunderstood you?
Replies from: None↑ comment by [deleted] · 2013-05-10T16:40:59.448Z · LW(p) · GW(p)
Probably you have, unfortunately. Give me a few minutes to figure it out...this is getting confusing.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T17:15:43.204Z · LW(p) · GW(p)
OK. No worries; no hurries... I'll consider this branch paused pending re-evaluation. Take your time.
Replies from: None↑ comment by [deleted] · 2013-05-10T17:52:37.849Z · LW(p) · GW(p)
So it seems like what we were actually talking about here was how thoroughly God could convince a human of His existence, and you suggested he could just raise your faith level directly.
Here's the problem I have with that: I don't know about Odin, but the YHWH we were raised with doesn't (could, but doesn't) ever do that. I wouldn't really call it faith if you have no choice in the matter.
But I recognize that free agency is a very important tenet of my religion and important to my understanding of the universe given that my religion is correct. (I still don't quite understand free choice, which I'll have to figure out sometime in the next few years, but that's my own issue.) Thus, a radically different deity is at odds with my view of the universe. This probably means that I ought to go looking for radically different deities which will challenge my universe, but for now I don't know of any (except maybe simulation hypotheses, which I like a lot).
But for the purposes of this discussion—which, remember, was only about how spectacular a manifestation it would take to make you believe—I said it would be easier to stick to a God that doesn't intervene to the point of directly tampering with our neurons. You had a problem with this. OK, sorry—let's also think about a fundamentally different God.
I think that an effectively all-powerful being could easily just reach in and rearrange our circuits such that we know it exists. Sure it could happen. As I think I told someone, I don't see why—having seven billion mindless minions would get old after a while—but I have no right to go questioning the motives of a deity, especially one that's radically different from the one I'm told I'm modeled after.
I'm sorry, I never meant to dismiss the possibility of radically different religions. You're right, that would be awfully silly coming from me.
Now then.
You seemed to be arguing a while back that our precommitments to "the way we think about things" were not sufficient grounds to reject uncomfortable or difficult ideas, which is a position I can respect, though I think it's importantly though subtly false.
This sounds very interesting, what do you mean?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T20:11:48.258Z · LW(p) · GW(p)
I recommend you prioritize clarifying your confusions surrounding "free choice" higher than you seem to be doing.
In particular, I observe that our circuits have demonstrably been arranged such that we find certain propositions, sources of value, and courses of action (call them C1) significantly (and in some cases overwhelmingly) more compelling than other propositions, sources of value, and courses of action (C2). For example (and trivially), C1 includes "I have a physical body" and C2 includes "I don't have a physical body".
If we were designed by a deity, it follows that this deity in fact designed us to be predisposed to accept C1 and not accept C2.
A concept of free agency that allows for stacking the deck so overwhelmingly in support of C1 over C2, but does not allow for including in C1 "YHWH as portrayed in the Book of Mormon, other texts included by reference in the Book of Mormon, and subsequent revelations granted to the line of Mormon Prophets by YHWH", seems like an important concept to clarify, if only because it sounds so very contrived on the face of it.
This sounds very interesting, what do you mean?
Well, for example, consider the proposition (Pj) that YHWH as conceived of and worshiped by 20th-century Orthodox Jews of my family's tradition exists.
As a child, I was taught Pj and believed it (which incidentally entailed other things, such as Jesus Christ not being the Messiah). As a teenager I re-evaluated the evidence I had for and against Pj and concluded that my confidence in NOT(Pj) was higher than my confidence in Pj.
Had someone said to me at that time "Dave, I realize that your evaluation of the evidence presented by your experience of the world leads you to high confidence in certain propositions which you consider logically inconsistent with Pj, but I caution you not to become so thoroughly precommitted to the methods by which you perform those evaluations that you cannot seriously consider alternative ways of evaluating evidence," that would intuitively feel like a sensible, rational, balanced position.
The difficulty with it is that in practice, refusing to commit to any epistemic method means giving up on reaching any conclusions at all, however tentative. And since in practice making any choices about what to do next requires arriving at some conclusion, however implicit or unexamined, it similarly precludes an explicit examination of the conclusions underlying my choices. (Which typically entails an unexamined adoption of the epistemic methods my social group implicitly endorses, rather than the adoption of no epistemic methods at all, but that's a whole different conversation.)
I ultimately decided I valued such explicit examinations, and that entailed a willingness to make a commitment to an epistemic methodology; and the epistemic methodology that seemed most compelling to me at that time did in fact lead me to reject Pj, so absent discovering inconsistencies in that methodology that led me to reject it at some later time, I was committed to rejecting Pj, which I did.
(Of course, I wasn't thinking in quite these terms as a 13-year-old Yeshiva student, and it took some years to get fully consistent about that position. Actually, I'm not yet fully consistent about it, and don't anticipate becoming so in my lifetime.)
Replies from: None↑ comment by [deleted] · 2013-05-10T22:24:01.354Z · LW(p) · GW(p)
Interesting. I'll keep thinking about it. But just to clarify, what exactly was it I said that was subtly but importantly wrong?
This is what EY says about "uncomfortable or difficult ideas":
"When you're doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most. Don't rehearse standard objections whose standard counters would make you feel better. Ask yourself what smart people who disagree would say to your first reply, and your second reply. Whenever you catch yourself flinching away from an objection you fleetingly thought of, drag it out into the forefront of your mind."
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T22:42:14.704Z · LW(p) · GW(p)
But just to clarify, what exactly was it I said that was subtly but importantly wrong?
Like I said, I thought you were arguing a while back that our precommitments to "the way we think about things" were not sufficient grounds to reject uncomfortable or difficult ideas, which is an idea I respect (for reasons similar to those articulated in the post you quote) but consider subtly but importantly wrong (for reasons similar to those I articulate in the comment you reply to).
I'll note, also, that an epistemic methodology (a way of thinking about things) isn't the same thing as a belief.
↑ comment by wedrifid · 2013-05-09T04:58:58.023Z · LW(p) · GW(p)
The optimal situation for you is that you've heard intelligent, thoughtful, rational criticism but your position remains strong.
The optimal situation could also be hearing intelligent, thoughtful, rational criticism, learning from it, and having a new 'strong position' incorporating the new information. (See: lightness.)
↑ comment by Intrism · 2013-05-09T05:12:12.214Z · LW(p) · GW(p)
I sometimes see refutations of pro-religious arguments on this site, but no refutations of good arguments.
What good arguments do you think LW hasn't talked about?
My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking?
Religion holds an important social and cultural role that the various attempts at rationalist ritual or culture haven't fully succeeded at filling yet.
↑ comment by [deleted] · 2013-05-08T22:21:41.911Z · LW(p) · GW(p)
Clearly, the Less Wrong community generally (unanimously?) agrees about a lot of major things. For example, religion.
The 2012 survey showed something around 10% non-atheist, non-agnostic.
My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking?
From most plausible to least plausible:
It's possible to formulate something like an argument that religious practice is good for neurotypical humans, in terms of increasing life expectancy, reducing stress, and so on.
Monocultures tend to do better than populations with mixed cultural heritage, and one could argue that some religions do very well at creating monocultures where none previously existed, e.g., the Mormons, or perhaps the Catholic Church circa 1800 in the States.
I've heard some reports that religious affiliation is good for one's dating pool.
↑ comment by [deleted] · 2013-05-09T13:50:42.727Z · LW(p) · GW(p)
See, but these are only arguments that religion is useful. Rationalists on this site say that religion is most definitely false, even if it's useful; are there any rational thinkers out there who actually think that religion could realistically be true? I think that's a much harder question than whether or not it's good for us.
Replies from: None↑ comment by [deleted] · 2013-05-09T13:59:33.503Z · LW(p) · GW(p)
Replies from: None↑ comment by [deleted] · 2013-05-09T14:05:14.151Z · LW(p) · GW(p)
This is great, thanks. I know there must be people out there, but I'm not entirely convinced most atheists ever bother to actually consider a real possibility of God.
Replies from: None↑ comment by [deleted] · 2013-05-09T14:39:36.585Z · LW(p) · GW(p)
I no longer have any idea what evidence would convince you otherwise.
Replies from: None↑ comment by [deleted] · 2013-05-09T14:54:14.853Z · LW(p) · GW(p)
Rationalists who take religion seriously, for instance.
Replies from: Desrtopa, None↑ comment by Desrtopa · 2013-05-09T17:42:02.156Z · LW(p) · GW(p)
Take seriously in what sense?
For instance, I spent about six years seriously studying up on religions and theology, because I figured that if there were any sort of supreme being concerned with the actions of humankind, that would be one of the most important facts I could possibly know. So in that sense, I take religion very seriously. But in the sense of believing that any religion has a non-negligible chance of accurately describing reality, I don't take it seriously at all, because I feel that the weight of evidence is overwhelmingly against that being the case.
What sense of "taking religion seriously" are you looking for examples of?
Replies from: None↑ comment by [deleted] · 2013-05-09T23:14:24.703Z · LW(p) · GW(p)
That's what I mean—a non-negligible chance. If your estimation of the likelihood of God is negligible, then it may as well be zero. I don't think that there is an overwhelming weight of evidence toward either case, and I don't think this is something that science can resolve.
Replies from: JoshuaZ, Desrtopa, Intrism↑ comment by JoshuaZ · 2013-05-10T00:36:26.227Z · LW(p) · GW(p)
If your estimation of the likelihood of God is negligible, then it may as well be zero.
This doesn't follow. For example, if you recite to me a 17 million digit number, my estimate that it is prime is tiny -- well under one in a million -- by the prime number theorem. But if I then find out that the number was in fact 2^57,885,161 - 1, my estimate of its being prime goes up by a lot. So one can assign very small probabilities to things and still update strongly on evidence.
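For anyone who wants the prime-number-theorem step spelled out, here is a rough back-of-the-envelope sketch of the arithmetic (the exact figure depends on what you condition on, but the qualitative point about updating hard from a tiny prior is unchanged):

```python
import math

# Rough prime-number-theorem estimate: the density of primes near N is about
# 1/ln(N), so for a uniformly random number with d decimal digits the chance
# of primality is roughly 1 / (d * ln(10)).

def prime_density(d_digits):
    """Approximate probability that a random d-digit number is prime."""
    return 1 / (d_digits * math.log(10))

print(prime_density(17_000_000))  # ~2.6e-8, i.e. roughly one in forty million
```

Learning that the number is actually 2^57,885,161 - 1 (a known Mersenne prime) moves that estimate to essentially 1, which is the update being described.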
↑ comment by Intrism · 2013-05-10T01:24:40.570Z · LW(p) · GW(p)
So, you're saying that in your view no atheist could possibly take the question of the truth of religion seriously? Or, alternatively, that one could be an atheist but still assign a large probability to God's existence? Both of these seem a bit bizarre...
↑ comment by [deleted] · 2013-05-09T15:11:34.221Z · LW(p) · GW(p)
See my first comment in this thread. There's a 10% minority that takes religion seriously. Presumably some of them consider themselves rationalists, or else they wouldn't bother responding to the survey.
↑ comment by gwern · 2013-05-08T22:10:55.356Z · LW(p) · GW(p)
My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking?
You may find this helpful: http://prosblogion.ektopos.com/archives/2012/02/results-of-the-.html
Replies from: None↑ comment by [deleted] · 2013-05-09T13:58:24.493Z · LW(p) · GW(p)
This is interesting. It shouldn't be surprising coming from philosophers, but it can be instructional anyway. There are as many atheists who have never heard a decent defense of religion as there are religious fundamentalists who have never bothered to think rationally.
Replies from: Intrism↑ comment by Intrism · 2013-05-09T15:43:58.887Z · LW(p) · GW(p)
There are as many atheists who have never heard a decent defense of religion as there are religious fundamentalists who have never bothered to think rationally.
This seems improbable, considering that there are vastly more religious people than atheists.
Replies from: None↑ comment by [deleted] · 2013-05-09T15:48:11.888Z · LW(p) · GW(p)
Props for being technical. You know what I meant.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-09T17:44:27.047Z · LW(p) · GW(p)
Even in the non-technical sense, he's still making a relevant counterpoint, because it's much, much harder for atheists to go without exposure to religious culture and arguments than for a religious person to go without exposure to atheist arguments or culture (insofar as such a thing can be said to exist).
Replies from: None↑ comment by [deleted] · 2013-05-09T20:34:26.102Z · LW(p) · GW(p)
I don't just mean being exposed to religious culture and arguments, I mean good arguments. I know, practically everyone here was raised religious and given really bad reasons to believe. But I think those may become a straw man—what I'm skeptical of is how many people here have heard a religious argument that actually made them think, one that would have a chance in a real debate.
Replies from: None, Desrtopa, JoshuaZ, Zaine↑ comment by [deleted] · 2013-05-09T20:55:33.026Z · LW(p) · GW(p)
one that has a chance in a real debate.
Good arguments don't in general have a chance in a real debate, because debates are not about reasoning. But that's a nitpick.
I've seen a lot of religious people claiming to have access to strong arguments for theism, but have never seen one myself.
As JoshuaZ asks, you must have a strong argument or you wouldn't think this line of discussion was worth anything. What is it?
↑ comment by Desrtopa · 2013-05-09T20:58:53.591Z · LW(p) · GW(p)
I'm going to second JoshuaZ here. There's a lot of disagreement among theists about what the best arguments for theism are. I'd rather not try to represent any particular argument as the best one available for theism, because I can't think of anything that theists would universally agree on as a good argument, and I don't endorse any of the arguments myself.
I would say that most atheists are at least exposed to arguments that apologists of some standing, such as C.S. Lewis or William Lane Craig, actually use.
↑ comment by Zaine · 2013-05-15T21:49:17.168Z · LW(p) · GW(p)
...[W]hat I'm skeptical of is how many people here have heard a religious argument that actually made them think, one that has a chance in a real debate.
Acausal blackmail, once I thought deeply about why it might be scary. It took about an hour to refute it (to my satisfaction). As for whether it would have a chance in a 'real debate': debate length, forum, allotted quiet thinking time, and other confounds make me uncertain of your intended meaning.
↑ comment by Bugmaster · 2013-05-09T04:50:36.449Z · LW(p) · GW(p)
I'm much closer to "below average" than to the "top" as far as LW users go, but I'll give it a shot anyway.
My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking?
I assume that by "way of thinking" you mean "atheism", specifically (if not, what did you mean?).
I don't know how you judge which criticisms are "legitimate", so I can't answer the question directly. Instead, I can say that the most persuasive arguments against atheism that I'd personally seen come in form of studies demonstrating the efficacy of prayer. If prayer does work consistently with the claims of some religion, this is a good indication that at least some claims made by the religion are true.
Note, though, that I said "most persuasive"; another way to put it would be "least unpersuasive". Unfortunately, all such studies that I know of have either found no correlation between prayer and the desired effect whatsoever; or were constructed so poorly that their results are meaningless. Still, at least they tried.
In general, it is more difficult to argue against atheism (of the weak kind) than against theism, since (weak) atheism is simply the null hypothesis. This means that theists must provide positive evidence for the existence of their god(s) in order to convince an atheist, and this is very difficult to do when one's god is undetectable, or works in mysterious ways, or is absent, etc., as most gods tend to be.
Replies from: None↑ comment by [deleted] · 2013-05-09T14:21:42.597Z · LW(p) · GW(p)
Many people would disagree that atheism is the null hypothesis. "All things testify of Christ," as some say, and in those circles people honestly believe they've been personally contacted by God. (I'm talking about Mormons, whose God, from what I've heard, is not remotely undetectable.)
Have most atheists honestly put thought into what it would mean if there actually was a God? Many won't even accept that there is a possibility, and I think this is just as dangerous as blind faith.
Replies from: wedrifid, Desrtopa, JoshuaZ, Bugmaster↑ comment by wedrifid · 2013-05-09T15:52:23.884Z · LW(p) · GW(p)
Have most atheists honestly put thought into what if there actually was a God?
Don't know. Most probably have something better to do. I have thought about what would happen if there was a God. If it turned out that the god of the religion I was brought up in was real, then I would be destined to burn in hell for eternity. If version 1 of the same god (Yahweh) existed, I'd probably also burn in hell for eternity, but I'm a bit less certain about that because the first half of my Bible talked more about punishing people while alive (well, at the start of the stoning they are alive, at least) than about the threat of torment after death. If Allah is real... well, I'm guessing there is going to be more eternal pain involved, since that is just another fork of the same counterfactual omnipotent psychopath. Maybe I'd have more luck with the religions from ancient India---so long as I can convince the gods that LessWrong karma counts.
So yes, I've given some thought to what happens if God exists: I'd be screwed and God would still be a total dick of no moral worth.
Many won't even accept that there is a possibility, and I think this is just as dangerous as blind faith.
Assigning probability 0 or 1 to a hypothesis is an error, but rounding off 0.0001 to 0 is less likely to be systematically destructive to an entire epistemological framework than rounding 0.0001 off to 1.
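A quick sketch of why exact 0 and exact 1 are special here; the likelihood ratio below is an arbitrary stand-in for "very strong evidence", not a number from the thread:

```python
# Toy illustration: a small nonzero prior can recover under strong evidence,
# but a prior of exactly 0 (or exactly 1) can never move, which is why
# assigning those values is an error.

def bayes_update(prior, likelihood_ratio):
    """Posterior probability of H after evidence with likelihood ratio P(E|H)/P(E|~H)."""
    if prior <= 0.0:
        return 0.0   # zero odds stay zero no matter what is multiplied in
    if prior >= 1.0:
        return 1.0   # likewise, certainty cannot be walked back
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

strong_evidence = 1e6  # assumed: evidence a million times likelier under H than under not-H

print(bayes_update(0.0001, strong_evidence))  # ~0.99 -- a small prior recovers
print(bayes_update(0.0,    strong_evidence))  # 0.0   -- rounding down to 0 locks you out
print(bayes_update(1.0,    strong_evidence))  # 1.0   -- rounding up to 1 locks you in
```

(The claimed asymmetry, that rounding down to 0 is less destructive than rounding up to 1, is about the rest of the framework: a false certainty forces everything else to be reinterpreted around it, whereas a dropped long-shot hypothesis mostly just sits unexamined.)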
Replies from: None↑ comment by [deleted] · 2013-05-09T16:35:02.789Z · LW(p) · GW(p)
So, with no evidence either way, would you honestly rate the probability of the existence of God as 0.0001%?
Replies from: wedrifid, JoshuaZ↑ comment by wedrifid · 2013-05-09T16:50:30.741Z · LW(p) · GW(p)
So, with no evidence either way, would you honestly rate the probability of the existence of God as 0.0001%?
That probability is off by a factor of 100 from the one I mentioned.
(And with 'no evidence either way' the probability assigned would be far, far lower than that. It takes rather a lot of evidence to even find your God in hypothesis space.)
Replies from: JoshuaZ, None↑ comment by [deleted] · 2013-05-09T17:12:22.046Z · LW(p) · GW(p)
You're right, I'm sorry. It was 0.0001. That's still pretty small, though. Is that really what you think it is?
It takes rather a lot of evidence to even find your God in hypothesis space
Don't think of my God, then. Any deity at all.
Do we want to be Bayesian about it? Of course we do. Let's imagine two universes. One formed spontaneously, one was created. Which is more likely to occur?
Personally I think that the created one seems more likely. Apparently you think that spontaneity is more believable. But as for the probability that any given universe is created rather than accidental, 0.0001 seems unrealistically low. And if that's not the number you actually believe—it was just an example—what is?
Replies from: JoshuaZ, BerryPick6, Jack↑ comment by JoshuaZ · 2013-05-09T17:17:11.259Z · LW(p) · GW(p)
Do we want to be Bayesian about it? Of course we do. Let's imagine two universes. One formed spontaneously, one was created. Which is more likely to occur?
It isn't obvious that this is at all meaningful, and gets quickly into deep issues of anthropics and observer effects. But aside from that, there's some intuition here that you seem to be using that may not be shared. Moreover, it also has the weird issue that most forms of theism have a deity that is omnipotent and so should exist over all universes.
Note also that the difference isn't just spontaneity v. created. What does it mean for a universe to be created? And what does it mean to call that creating aspect a deity? One of the major problems with first cause arguments and similar notions is that even when one buys into them it is extremely difficult to jump from there to theism. Relevant SMBC.
Replies from: None↑ comment by [deleted] · 2013-05-09T20:14:39.436Z · LW(p) · GW(p)
Certainly this is a tough issue, and words get confusing really quickly. What intuition am I not sharing? Sorry; by "universe" I meant a scenario or existence or something that contains God, when there is one.
What I mean by "deity" and "created" is that either there is a conscious, intelligent mind (I think we all agree what that means) organizing our world/universe/reality, or there isn't. And of course I'm not trying to sell you on my particular religion. I'm just trying to point out that I think there's not any more inherent reason to believe there is no deity than to believe there is one.
Replies from: JoshuaZ, Bugmaster, ArisKatsaris, Intrism↑ comment by JoshuaZ · 2013-05-09T20:42:55.471Z · LW(p) · GW(p)
What I mean by "deity" and "created" is that either there is a conscious, intelligent mind (I think we all agree what that means) organizing our world/universe/reality, or there isn't.
Ok. So in this context, why do you think that one universe is more likely than the other? It may help to state where "conscious" and "intelligent" and "mind" come into this argument.
And of course I'm not trying to sell you on my particular religion.
On the contrary, that shouldn't be an "of course". If you sincerely believe and think you have the evidence for a particular religion, you should present it. If you don't have that evidence, then you should adjust your beliefs.
Even if one thinks one is in a constructed universe, it in no way follows that the constructor is divine or has any other aspects one normally associates with a deity. For example, this universe could be the equivalent of a project for a 12-dimensional grad student in a wildly different universe (ok, that might be a bit much; it might just be by a bright 11-dimensional undergrad).
I'm just trying to point out that I think there's not any more inherent reason to believe there is no deity than to believe there is one.
What do you mean by an "inherent" reason? Are you solely making a claim here about priors, or are you making a claim about what evidence there actually is when we look out at the world? Incidentally, you should be surprised if this is true: for the vast majority of hypotheses, the evidence we have should assign them probabilities far from 50%. Any time one encounters a hypothesis which is controversial in a specific culture, and one concludes that it has a probability close to 1/2, one should be concerned that one is reaching such a conclusion not out of rational inquiry but more out of an attempt to balance competing social and emotional pressures.
Replies from: None, None↑ comment by [deleted] · 2013-05-10T10:49:19.720Z · LW(p) · GW(p)
Even if one thinks one is in a constructed universe, it in no way follows that the constructor is divine or has any other aspects one normally associates with a deity. For example, this universe could be the equivalent of a project for a 12-dimensional grad student in a wildly different universe (ok, that might be a bit much; it might just be by a bright 11-dimensional undergrad).
I'd actually consider that a deity, in the sense of a conscious, intelligent being who created the universe intentionally. As opposed to it happening by cosmic accident. (That is, with no conscious creator.)
Replies from: JoshuaZ↑ comment by JoshuaZ · 2013-05-10T16:03:48.196Z · LW(p) · GW(p)
Would you assign that being any of the traits normally connected to being a deity? For example, if the 11-dimensional undergrad said not to eat shellfish, or to wear special undergarments, would you listen?
Replies from: None↑ comment by [deleted] · 2013-05-10T16:10:42.107Z · LW(p) · GW(p)
Yes, I would listen if I was confident that was where it was coming from. This 11-dimensional undergrad is much more powerful and almost certainly smarter than me, and knowingly rebelling would not be a good idea. If this undergrad just has a really sick sense of humor, then, well, we're all screwed in any case.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2013-05-10T16:12:27.745Z · LW(p) · GW(p)
And if the 11-dimensional undergrad says you should torture a baby?
Replies from: None↑ comment by [deleted] · 2013-05-10T16:27:07.726Z · LW(p) · GW(p)
Clearly, then I need to make awfully sure it's actually God and not a hallucination. I would probably not do it because in that case I know that the undergrad does have a sick sense of humor and I shouldn't listen to him because we're all screwed anyway.
Now, if you're going to bring up Abraham and Isaac or something like that, remember that in this case Abraham was pretty darn sure it was actually God talking.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2013-05-10T16:31:41.967Z · LW(p) · GW(p)
So this sort of response indicates that you are distinguishing between "God" and the 11-dimensional undergrad as distinct ideas. In that case, a generic creator argument isn't very strong evidence since there are a lot of options for entities that created the universe that aren't God.
Replies from: None↑ comment by [deleted] · 2013-05-10T17:09:44.182Z · LW(p) · GW(p)
This is confusing because we're simultaneously talking about a deity in general and my God, the one we're all familiar with.
Of course there are lots of options other than my specific God; the 11-dimensional undergrad is one of those. I'm not using a generic creator argument to convince you of my God, I'm using the generic creator argument to suggest that you take into account the possibility of a generic creator, whether or not it's my God. I'm keeping my God mostly out of this—I think an atheist ought to be able to argue my position while keeping his/her own conclusions.
↑ comment by Bugmaster · 2013-05-09T21:56:44.345Z · LW(p) · GW(p)
And of course I'm not trying to sell you on my particular religion.
As JoshuaZ says, there's no "of course" about it. If some particular religion is right and I am wrong, then I absolutely want to know about it ! So if you have some evidence to present, please do so.
Replies from: None↑ comment by [deleted] · 2013-05-09T23:05:28.954Z · LW(p) · GW(p)
I think that my religion is right and you are misguided. I really do, for reasons of my own. But I don't have any "evidence" to share with you, especially if you are committed to explaining it away as you may not be but many people here are.
Remember that my original question was just to see where this community stood. I don't have all that many grand answers myself. I suppose I could actually say that if you honestly absolutely want to know and are willing to open your mind, then you should try reading this book—I'm serious, but I'm aware how silly that would sound in such a context as this. Really, I don't want to become that guy.
I'm young, and I myself am trying to find good, rational arguments in favor of God. I'm trying to reconcile rationality and religion in my mind, and if I can't find anyone online, I'll figure it out myself and write a blog post about it in twenty years.
But what it seems I've found is that no, most of the people on this site (based on my representative sample of about a dozen, I know) have never been presented with solid arguments in favor of religion. Maybe I'll manage to find some or write them myself, and maybe I'll decide that the population of Less Wrong is as closed-minded as I feared. In any case, thank you for being more open than certain others.
Replies from: JoshuaZ, Intrism, Bugmaster, Bugmaster, Prismattic↑ comment by JoshuaZ · 2013-05-10T00:32:55.238Z · LW(p) · GW(p)
But I don't have any "evidence" to share with you, especially if you are committed to explaining it away as you may not be but many people here are.
So this is a problem. In general, there are types of claims that don't easily have shared evidence (e.g. last night I had a dream that was really cool, but I forgot it almost as soon as I woke up; I love my girlfriend; when I was about 6 years old I got the idea of aliens who could only see invisible things but not visible things; etc.). But most claims, especially claims about what we expect of reality around us, should depend on evidence that can be shared.
I'm young, and I myself am trying to find good, rational arguments in favor of God.
So this is already a serious mistake. One shouldn't try to find rational arguments in favor of one thing or another. One should find the best evidence for and against a claim, and then judge the claim based on that.
have never been presented with solid arguments in favor of religion. Maybe I'll manage to find some or write them myself, and maybe I'll decide that the population of Less Wrong is as closed-minded as I feared.
You may want to seriously consider that the arguments you are looking for don't exist. In the meantime, may I recommend reddit's Debate Religion forum. They are dedicated to discussing a lot of these issues and may be a better forum for some of the things you are interested in. Of course, the vast majority of things related to rationality have very little to do with whether or not there are any deities, and so you are more than welcome to stick around here. There's a lot of interesting stuff going on here.
Replies from: Kawoomba, None↑ comment by [deleted] · 2013-05-10T11:03:59.508Z · LW(p) · GW(p)
Note that my expressed intention in this post was not to start a religious debate, though I have enjoyed that too. I have considered that the arguments I'm looking for don't exist; what I've found is that at least you guys don't have any, which means that from your position this case is entirely one-sided. So generally, your belief that religion is inherently ridiculous from a rationalist standpoint has never actually been challenged at all.
Definitely it's been interesting. Thanks.
Replies from: khafra↑ comment by khafra · 2013-05-14T19:10:53.101Z · LW(p) · GW(p)
If you really want rationalist (more properly, post-rationalist) arguments in favor of God, I recommend looking through Will Newsome's comments from a few years ago; also through his twitter accounts @willnewsome and @willdoingthings.
If you follow my advice, though, may God have mercy on your soul; because Will Newsome will have none on your psychological health.
Replies from: None↑ comment by Intrism · 2013-05-10T01:37:01.635Z · LW(p) · GW(p)
I'm young, and I myself am trying to find good, rational arguments in favor of God. I'm trying to reconcile rationality and religion in my mind, and if I can't find anyone online, I'll figure it out myself and write a blog post about it in twenty years.
Ah, no, haven't you read the How to Actually Change Your Mind sequence? Or at least the Against Rationalization subsequence and The Bottom Line? You can't just decide "I want to prove the existence of God" and then write a rational argument. You can't start with the bottom line. Really, read the sequence, or at least the subsequence I pointed out.
you should try reading this book
I wasn't under the impression that the Book of Mormon was substantially more convincing than any other religious holy book. I have, however, heard that the Mormon church does exceptionally well at building a community. If you'd like to talk about that, I'd be extremely interested.
But what it seems I've found is that no, most of the people on this site (based on my representative sample of about a dozen, I know) have never been presented with solid arguments in favor of religion.
How sure are you that more solid arguments exist? We don't know about them. You apparently don't know about them. If you've got any that you're hiding, remember that if God actually exists we would really like to know about it; we don't want to explain anything away that isn't wrong.
Replies from: None↑ comment by [deleted] · 2013-05-10T10:56:29.852Z · LW(p) · GW(p)
Yes, I have read the sequence. I think that not being one-sided sometimes requires a conscious effort, and is a worthwhile cause.
Of course you won't read the Book of Mormon. I wouldn't expect you to. But if you want "evidence" which has firmly convinced millions of people—here it is. I personally have found it more powerful than the Bible or Qur'an.
You're right, I don't have any solid arguments in favor of religion. My original question of this post was actually just to ask if you had any—and I've gotten an answer. No, you believe there are none.
if God actually exists we would really like to know about it
I've shown you one source that convinces a lot of people; consider yourself to know about it. I would recommend reading it, too, if you're really interested in finding the truth.
Replies from: Desrtopa, Richard_Kennaway↑ comment by Desrtopa · 2013-05-12T14:16:21.380Z · LW(p) · GW(p)
Of course you won't read the Book of Mormon. I wouldn't expect you to. But if you want "evidence" which has firmly convinced millions of people—here it is. I personally have found it more powerful than the Bible or Qur'an.
Have you read the Quran in the original Arabic? It's pretty famously considered to lose a lot in translation.
I haven't, of course, but the only ex-muslim I've spoken to about it agrees that even in the absence of his religious belief, it's a much more powerful and poetic work in Arabic.
Replies from: None↑ comment by Richard_Kennaway · 2013-05-10T12:48:54.975Z · LW(p) · GW(p)
I personally have found [the Book of Mormon] more powerful than the Bible or Qur'an.
Can you expand on that? What is this perception of "power" you get in varying degrees from such books, and what is the relation between that sensation and deciding whether anything in those books is true?
I've read the Bible and the Qur'an, and while I haven't read the Book of Mormon, I have a copy (souvenir of a visit to Salt Lake City). I'll have a look at it if you like, but I'm not expecting much, because of the sort of thing that books like these are. Neither the Bible nor the Qur'an convince me that any of the events recounted in them ever happened, or that any of the supernatural entities they talk about ever existed, or that their various moral prescriptions should be followed simply because they appear there. How could they?
A large part of the Bible is purported history, and to do history right you can't rely on a single collection of old and multiply-translated documents which don't amount to a primary source for much beyond their own existence, especially when archaeology (so I understand) doesn't turn up all that much to substantiate it. And things like the Genesis mythology are just mythology. The world was not created in six days. Proverbs, Wisdom, the "whatsoever things..." passage, and so on, fine: but I read them in the same spirit as reading the rationality quote threads here. Where there be any virtue, indeed.
The Qur'an consists primarily of injunctions to believe and imprecations against unbelievers. I'm not going to swallow that just because of its aggressive manner.
So, that is my approach to religious documents. This "power" that leads many people to convert to a religion, that gives successful missionaries thousands of converts in a single day: I have to admit that I have no idea what experience people are talking about. Why would reading a book or tract open my eyes to the truth? Especially if I have reason to think that the authors were not engaged in any sort of rational inquiry?
That is, BTW, also my approach to non-religious documents, and I find it really odd when I see people saying of things like, say, Richard Dawkins' latest, "this book changed the way I see things!" It's a frequent jibe of religious people against atheists that "atheism is just another religion", but when people within atheism convert so readily from one idea to another just by reading a book, I have to wonder whether "religion" might be just the word for that mental process.
Replies from: Desrtopa, None↑ comment by Desrtopa · 2013-05-12T14:39:12.824Z · LW(p) · GW(p)
That is, BTW, also my approach to non-religious documents, and I find it really odd when I see people saying of things like, say, Richard Dawkins' latest, "this book changed the way I see things!" It's a frequent jibe of religious people against atheists that "atheism is just another religion", but when people within atheism convert so readily from one idea to another just by reading a book, I have to wonder whether "religion" might be just the word for that mental process.
What's strange about converting from one idea to another by reading a book? A book can contain a lot of information. Sometimes it doesn't even take very much to change one's mind. Suppose a person believes that the continents can't be shifting, because there's no room for them to move around on a solid sphere. Then they read about subduction zones and mid-ocean ridges, and see a diagram of plate movement around the world, and think "Oh, I guess it can happen that way, how silly of me not to have thought of that."
I haven't found any religious text convincing, because they tend to be heavy on constructing a thematic message and providing social motivation to believe, light on evidence, but for a lot of people that's a normal way to become convinced of things (indeed, I recently finished reading a book where the author discussed how, among the tribe he studied, convincing people of a proposition was almost entirely a matter of how powerful a claim you were prepared to make and what authority you could muster, rather than what evidence you could present or how probable your claim was.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-12T17:13:26.127Z · LW(p) · GW(p)
among the tribe he studied, convincing people of a proposition was almost entirely a matter of how powerful a claim you were prepared to make and what authority you could muster, rather than what evidence you could present
I suspect this was also true of the tribe I went to high-school with.
↑ comment by [deleted] · 2013-05-10T15:03:17.154Z · LW(p) · GW(p)
a single collection of old and multiply-translated documents which don't amount to a primary source for much beyond their own existence
I know how most atheists feel about the Bible. Really, I do. But if you don't understand what's so powerful about a book, and you want to know, then you really should give it a try—I might say that the last chapter of Moroni especially addresses this.
(I promise I'm not trying to convert you. I don't remotely expect you to have a spiritual experience because of this one chapter.)
I have to wonder whether "religion" might be just the word for that mental process.
Yes, it's easy to compare religion and atheism to each other, as well as to professional sports and a lot of other human behaviors. I'm all for free thought and not being persuaded by powerful words alone. However, just as I try to be able to enjoy ridiculous sports games, I'm glad to understand why people believe what they do.
Replies from: Richard_Kennaway, BerryPick6↑ comment by Richard_Kennaway · 2013-05-12T11:06:10.675Z · LW(p) · GW(p)
But if you don't understand what's so powerful about a book, and you want to know, then you really should give it a try—I might say that the last chapter of Moroni especially addresses this.
Well, I've now read the last chapter of Moroni, which is the last book of the Book of Mormon. The prophet takes his leave of his people, promises that God, the Son, and the Holy Ghost will reveal the truth of these things to those who sincerely pray, enjoins them to practice faith, hope, and charity and avoid despair, and promises to see them in the hereafter.
I don't feel any urge to read this as other than fiction.
Replies from: None↑ comment by BerryPick6 · 2013-05-14T01:17:05.025Z · LW(p) · GW(p)
I know how most atheists feel about the Bible. Really, I do. But if you don't understand what's so powerful about a book, and you want to know, then you really should give it a try—I might say that the last chapter of Moroni especially addresses this.
I grew up on the Bible. I studied the Bible for over a decade. I have read the Old Testament in Hebrew.
It's the most boring thing I've ever laid eyes on.
Replies from: None, Desrtopa↑ comment by Desrtopa · 2013-05-14T05:11:19.972Z · LW(p) · GW(p)
I've always marveled at people's assertions that, even if they don't believe the Bible is the word of God, they still respect it as a great work of literature. I suspect that they really do believe it; humans can invest a whole lot of positive associations with things simply through expectation and social conditioning. But my opinion of it as a literary work is low enough that I have a hard time coming up with any sort of comparison which doesn't make it sound like I'm making a deliberate effort to mock religious people.
↑ comment by Bugmaster · 2013-05-10T00:26:46.799Z · LW(p) · GW(p)
But I don't have any "evidence" to share with you, especially if you are committed to explaining it away ... I'm young, and I myself am trying to find good, rational arguments in favor of God. ... But what it seems I've found is that no, most of the people on this site (based on my representative sample of about a dozen, I know) have never been presented with solid arguments in favor of religion.
I was honest when I said that I'd love to see some convincing evidence for the existence of any god. If you have some, then by all means, please present it. However, if I look at your evidence and find that it is insufficient to convince me, this does not necessarily mean that I'm closed-minded (though I still could be, of course). It could also mean that your reasoning is flawed, or that your observations can be more parsimoniously explained by a cause other than a god.
A big part of being rational is learning to work around your own biases. Consider this: if you can't find any solid arguments for the existence of your particular version of God... is it possible that there simply aren't any ?
Replies from: None↑ comment by [deleted] · 2013-05-10T11:06:51.961Z · LW(p) · GW(p)
Yes, it's possible that there aren't any. That makes your beliefs much, much simpler. But I think that it's much safer and healthier to assume that you just haven't been exposed to any yet. I can't call you closed-minded for not having been exposed, and I'm sure that if some good arguments did pop up you at least would be willing to hear them. I'm sorry that I don't myself have any; I'm going to keep looking for a few years, if you don't mind.
Replies from: drethelin, Bugmaster↑ comment by drethelin · 2013-05-10T18:47:06.138Z · LW(p) · GW(p)
I do mind. If you look for a few years for "rational" arguments for Mormonism you will be wasting your life duplicating the effort of thousands of people before you. Please don't. Even if you remain Mormon, there are far better things you can do than theology.
Replies from: None↑ comment by [deleted] · 2013-05-10T19:11:22.480Z · LW(p) · GW(p)
What should I spend my next few years of rationalism doing then?
It seems that according to you, my options are
a) leave my religion in favor of rationalism. (feel free to tell me this, but if my parents find out about it they'll be worried and start telling me you're a satanic cult. I can handle it.)
b) leave rationalism in favor of religion. (not likely. I could leave Less Wrong if it's not open-minded enough, but I won't renounce rational thinking.)
c) learn to live with the conflict in my mind.
Suggestions?
Replies from: drethelin, Vladimir_Nesov, shminux, TheOtherDave↑ comment by drethelin · 2013-05-10T19:48:46.127Z · LW(p) · GW(p)
In descending order of my preference: a, c, then b.
I think c is the path chosen by most people who are reasonable but want to remain religious.
C is much more feasible if you can happily devote your time to causes other than religion/rationality. Math, science, writing, art: I think all are better for you and society than theology.
Replies from: None↑ comment by [deleted] · 2013-05-10T20:25:07.226Z · LW(p) · GW(p)
C seems likely as a long-term solution, because I don't see a or b as very realistic right now. And even if I don't make it a focused pursuit, I'll still be on the lookout for option d. (I'm not seriously interested in theology, don't worry. I'm quite into math and such things.)
↑ comment by Vladimir_Nesov · 2013-05-15T18:31:13.769Z · LW(p) · GW(p)
These are not "options", but possible outcomes. You shouldn't decide to work on reaching a particular conclusion, that would filter the arguments you encounter. Ignore these whole "religion" and "rationality" abstractions, work on figuring out more specific questions that you can understand reliably.
↑ comment by Shmi (shminux) · 2013-05-13T17:29:52.628Z · LW(p) · GW(p)
leave my religion in favor of rationalism.
That's not either/or. Plenty of participants here are quietly religious (I don't recall what the last survey said), yet they like the site for what it has to offer. It may well happen some day that some of the sequence posts will click in a way that would make you decide to distance yourself from your fellow saints. Or it might not. If you find some discussion topics which interest you more, then just enjoy those. As I mentioned originally, pure logical discourse is rarely the way to change deep-seated opinions and preferences. Those evolve as your subconscious mind integrates new ideas and experiences.
Replies from: None↑ comment by [deleted] · 2013-05-13T17:42:00.423Z · LW(p) · GW(p)
Yes, that's what I think I'll do. But many people here seem to be telling me that's impossible without some sort of cognitive dissonance. I don't think so.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-05-13T18:19:11.326Z · LW(p) · GW(p)
many people here seem to be telling me that's impossible without some sort of cognitive dissonance
"People here" are not perfectly rational and prone to other-optimizing. Including yours truly. Even the fearless leader has a few gaping holes in his rationality, and he's done pretty well. I don't know which of his and others' ideas speak to you the most, but apparently some do, so why not enjoy them. If anything, the spirit of altruism and care for others, so prominent on this forum, seems to fit well with Mormon practice, as far as I know.
Replies from: None↑ comment by [deleted] · 2013-05-13T18:42:20.605Z · LW(p) · GW(p)
I honestly haven't gotten much of a sense of altruism or care for others. (You were serious, right?) I mean, yes, there's the whole optimizing charity thing, but that's often (not always) for personal gratification as much as sincere altruism. I suppose people here think that their own cryonic freezing is actually doing the world a huge favor.
And care for others...that's something Mormons definitely have on you guys.
But I like this environment anyways. Because people here are smart and educated, and some of them are even honest. :)
Replies from: shware, shminux, TimS↑ comment by shware · 2013-05-14T03:20:12.222Z · LW(p) · GW(p)
By signing up for cryonics you help make cryonics more normal and less expensive, encouraging others to save their own lives. I believe there was a post where someone said they signed up for cryonics so that they wouldn't have to answer the "why aren't you signed up then?" crowd when trying to convince other people to do so.
Replies from: TimS↑ comment by TimS · 2013-05-15T19:18:38.741Z · LW(p) · GW(p)
I'm sure that many folks who have signed up for cryonics are happy that their behavior normalizes it for others. But I'm doubtful that any significant number would have made a different decision if normalizing cryonics was not an effect of their actions.
↑ comment by Shmi (shminux) · 2013-05-13T19:11:49.363Z · LW(p) · GW(p)
I suppose people here think that their own cryonic freezing is actually doing the world a huge favor.
I don't believe you really think that. Probably your frustration is talking. But you can probably relate to the standard analogy with a religious approach: if you believe that you have a chance for a happy immortality, it's a lot easier to justify spending some of your mortal toil on helping others to be happy. Even if there is no correlation between how much good you do in this life and how happy you will be in the next, if any.
Replies from: None↑ comment by [deleted] · 2013-05-13T19:35:05.189Z · LW(p) · GW(p)
I don't believe you really think that.
Hmm. Is it really better to assume they're entirely selfish? I could do that. But I know that Harry James P-E-V at least actually believes he's going to save the world. (Maybe not specifically with cryonics.)
(But yes, my tendency for sarcasm is something I need to work on. When I'm on Less Wrong, at least.)
↑ comment by TimS · 2013-05-13T18:53:17.700Z · LW(p) · GW(p)
there's the whole optimizing charity thing, but that's often (not always) for personal gratification as much as sincere altruism.
There are two issues here: (1) the difference between donating because it is useful and donating because it makes one feel good, and (2) many donations that make one feel good are really social status games.
I really do think many people here are sincere altruists (re the second issue).
I suppose people here think that their own cryonic freezing is actually doing the world a huge favor.
I hope they don't. It's an awfully stupid position. I'm not aware of anyone who is signed up for cryonics for anything other than self-oriented (selfish?) desire to live forever.
↑ comment by TheOtherDave · 2013-05-10T22:25:17.143Z · LW(p) · GW(p)
My recommendation is that you commit to/remain committed to basing your confidence in propositions on evaluations of evidence for and against those propositions. If that leads you to conclude that LessWrong is a bad place to spend time, don't spend time here. If that leads you to conclude that your religious instruction has included some falsehoods, stop believing those falsehoods. If it leads you to conclude that your religious instruction was on the whole reliable and accurate, continue believing it. If it leads you to conclude that LessWrong is a good place to spend time, keep spending time here.
↑ comment by Bugmaster · 2013-05-11T16:37:29.455Z · LW(p) · GW(p)
But I think that it's much safer and healthier to assume that you just haven't been exposed to any yet.
At what point do I stop looking, though ? For example, a few days ago I lost my favorite flashlight (true story). I searched my entire apartment for about an hour, but finally gave up; my guess is that I left it somewhere while I was hiking. I am pretty sure that the flashlight is not, in fact, inside my apartment... but should I keep looking until I've turned over every atom ?
Replies from: None↑ comment by Bugmaster · 2013-05-10T00:41:24.926Z · LW(p) · GW(p)
As for the Book of Mormon... try to think of it this way.
Imagine that, tomorrow, you meet aliens from a faraway star system. The aliens look like giant jellyfish, and are in fact aquatic; needless to say, they grew up in a culture radically different from ours. While this alien species does possess science and technology (or else they wouldn't make it all the way to Earth !), they have no concept of "religion". They do, however, have a concept of fiction (as well as non-fiction, of course, or else they wouldn't have developed science).
The aliens have studied our radio transmissions, translated our language, and downloaded a copy of the entire Web; this was easy for them since their computers are much more powerful than ours. So, the aliens have access to all of our literature, movies, and other media; but they have a tough time making sense of some of it. For example, they are pretty sure that the Oracle SQL Manual is non-fiction (they pirated a copy of Oracle, and it worked). They are also pretty sure that Little Red Riding Hood is fiction (they checked, and they're pretty sure that wolves can't talk). But what about a film like Lawrence of Arabia ? Is that fiction ? The aliens aren't sure.
One of the aliens comes to you, waving a copy of The Book of Mormon (or whichever scripture you believe in) in its tentacles (but in a friendly kind of way). It asks you to clarify: is this book fiction, or non-fiction ? If it contains both fictional and non-fictional passages, which are which ? Right now, the alien is leaning toward "fiction" (it checked, and snakes can't talk), but, with us humans, one can never be sure.
What do you tell the alien ?
Replies from: None↑ comment by [deleted] · 2013-05-10T10:59:30.366Z · LW(p) · GW(p)
a) I would tell them it's non-fiction. Are Yudkowsky's posts fiction or non-fiction? What about the ones where he tells clearly made-up instructional stories?
b) No need to bash the Book of Mormon. I'm fully aware how you people feel about it. But—
I absolutely want to know about it !
you did in fact ask.
Replies from: Bugmaster↑ comment by Bugmaster · 2013-05-11T16:34:00.600Z · LW(p) · GW(p)
It was not my intent to bash the Book of Mormon specifically; I just used it as a convenient stand-in for "whichever holy scripture you believe in". Speaking of which:
The alien spreads its tentacles in confusion, then pulls out a stack of books from the storage compartment of its exo-suit. "What about all these other ones ?", it asks. You recognize the Koran, the Bhagavad Gita, Enuma Elish, the King James Bible, and the Nordic Eddas; you can tell by the way the alien's suit is bulging that it's got a bunch more books in there. The alien says (or rather, its translation software says for it),
"We can usually tell the difference between fiction and non-fiction. For example, your fellow human Yudkowsky wrote a lot of non-fictional articles about things like ethics and epistemology, but he also wrote fictional stories such as Three Worlds Collide. In that, he is similar to [unpronounceable], the author on our own world who wrote about imaginary worlds in order to raise awareness his ideas concerning [untranslateable] and [untranslateable], which is now the basis of our FTL drive. Sort of like your own Aesop, in fact.
But these books", -- the alien waves some of its tentacles at the huge stack -- "are confusing our software. Their structure and content contains many elements that are usually found only in fiction; for example, talking animals, magical powers, birds bigger than mountains, some sort of humanoids beings that are said to live in the skies or at the top of tall mountains or perhaps in orbit, shapeshifters, and so on. We checked, and none of those things exist in real life.
But then, we talked to other humans such as yourself, and they told us that some of these books are true in a literal sense. Oddly enough, each human seems to think that one particular book is true, and all the others are fictional or allegorical, but groups of humans passionately disagree about which book is true, as well as about the meaning of individual passages.
Thus, we [unpronounceable]" -- you recognize the word for the alien's own species -- "are thoroughly confused. Are these books fiction, or aren't they ? For example", the alien says as it flips open the Book of Mormon, "do you really believe that snakes can talk ? Or that your Iron Age ancestors could build wooden submarines ? Or that a mustard seed is the smallest thing there is ? Or that there's an invisible person in the sky who watches your every move ?"
The alien takes a pause to breathe (or whatever it is they do), then flips open some of the other books.
"What about these ? Do you believe in a super-powered being called Thor, who creates lightning bolts with his hammer, Mjolnir ? Do you think that some humans can cast magic spells that actually work ? And what about Garuda the mega-bird, is he real ?
If you believe some of these books are truth and others fiction, how do you tell the difference ? Our software can't tell the difference, and neither can we..."
Replies from: None↑ comment by Prismattic · 2013-05-10T00:10:14.817Z · LW(p) · GW(p)
I'm young, and I myself am trying to find good, rational arguments in favor of God. I'm trying to reconcile rationality and religion in my mind, and if I can't find anyone online, I'll figure it out myself and write a blog post about it in twenty years.
You are privileging the hypothesis of (presumably one specific strain of) monotheism. That is not actually a rational approach. The kind of question a rationalist would ask is not "does God exist?" but "what should I think about cosmology?" or "what should I think about ethics?" First you examine the universe around you, and then you come up with hypotheses to see how well they match that. If you don't start from the incorrectly narrow hypothesis space of [your strain of monotheism, secular cosmology according to the best guesses of early 21st century science], you end up with a much lower probability for your religion being true, even if science turns out to be mistaken about the particulars of the cosmology.
Put another way: What probability do you assign to Norse mythology being correct? And how well would you respond if someone told you you were being closed-minded because you'd never heard a solid argument for Thor?
Replies from: None↑ comment by ArisKatsaris · 2013-05-09T21:05:57.775Z · LW(p) · GW(p)
The universe looks very undesigned -- the fine-tuned constants and the like are simply the only ones that allow conscious observers, and so they can be discounted on the basis of the anthropic principle (in a near-infinite set of universes, even undesigned ones, conscious observers would only inhabit those universes whose constants allow their existence -- there's no observer who'd observe constants that didn't permit their existence).
So pretty much all the evidence seems to speak of a lack of any conscious mind directing or designing the universe, neither malicious nor benevolent.
Replies from: None↑ comment by [deleted] · 2013-05-10T10:47:03.688Z · LW(p) · GW(p)
I know many, many people who think that the universe looks designed. I can refer you to Ivy League scientists if you want.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-05-10T11:30:10.984Z · LW(p) · GW(p)
I know many, many people who think that the universe looks designed.
There are 7 billion people in the world. One can find "many, many" people to believe all sorts of things, especially if one's going to places devoted to gathering such people together.
But for the stuff that really is created by conscious minds, there's rarely a need to argue about it. When the remnants of Mycenae were discovered, nobody (AFAIK) had to argue whether they were a natural geological formation or if someone built them. Nobody had to debate whether the Easter Island statues were designed or not.
The universe is either undesigned and undirected, or it's very cleverly designed so as to look undesigned and undirected. And frankly, if the latter is the case, it'd be beyond our ability to manage to outwit such clever designers; in that hypothetical case to believe it was designed would be to coincidentally reach the right conclusion by making all the wrong turns just because a prankster decided to switch all the roadsigns around.
I can refer you to Ivy League scientists if you want.
There are many, many Ivy League scientists. Again beware confirmation bias, the selection of evidence towards a predetermined conclusion. Do you have statistics for the percentage of Ivy League scientists that say "the universe looks designed" vs the ones that say "the universe doesn't look designed" ? That'd be more useful.
Replies from: None↑ comment by [deleted] · 2013-05-10T12:03:05.719Z · LW(p) · GW(p)
Aaaand unfortunately we're getting into personal opinion. It's easy enough to find statistics about belief among top scientists, though.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-05-10T12:52:54.479Z · LW(p) · GW(p)
As an addendum to my above comment -- if you personally feel that the universe looks designed, can you tell me how would it look in the counterfactual where you were observing a blatantly UNdesigned universe?
Here, for example, are elements of a hypothetical blatantly designed world: Continents in the shape of animals or flowers. Mountains that are huge statues. Laws of conservation that don't easily reduce to math (e.g. conservation of energy, momentum, etc.) but rather to human concepts (conservation of hope, conservation of dramatic irony). Clouds that reshape themselves to amuse and entertain the people watching them.
↑ comment by BerryPick6 · 2013-05-12T21:25:53.963Z · LW(p) · GW(p)
Let's imagine two universes. One formed spontaneously, one was created. Which is more likely to occur?
Personally I think that the created one seems more likely.
What evidence makes you think this?
Replies from: None↑ comment by [deleted] · 2013-05-12T21:52:49.420Z · LW(p) · GW(p)
I don't have any evidence. I know, downvote me now. But I suspect some sort of Bayesian analysis might support this, because if there is a deity, it is likely to create universes, whereas if there is no deity, universes have to form spontaneously, which requires a lot of things to fall into place perfectly.
Replies from: BerryPick6↑ comment by BerryPick6 · 2013-05-12T21:57:23.758Z · LW(p) · GW(p)
But I suspect some sort of Bayesian analysis might support this, because if there is a deity, it is likely to create universes,
Okay, so what makes you think this is true? I'm wondering how on earth we would even figure out how to answer this question, let alone be sure of the answer.
whereas if there is no deity, universes have to form spontaneously, which requires a lot of things to fall into place perfectly.
What has to fall into place for this to occur? Exactly how unlikely is it?
Replies from: None↑ comment by [deleted] · 2013-05-13T15:25:11.068Z · LW(p) · GW(p)
Look, let's just admit that this line of reasoning is entirely speculative anyway...
Replies from: BerryPick6↑ comment by BerryPick6 · 2013-05-13T16:45:57.125Z · LW(p) · GW(p)
Um, why cut off the conversation at this point rather than your original one, in that case?
Replies from: None↑ comment by [deleted] · 2013-05-13T16:51:46.997Z · LW(p) · GW(p)
All I'm saying is that if you need numbers and evidence to continue, we're not going to get any further.
Replies from: BerryPick6↑ comment by Jack · 2013-05-12T21:18:14.077Z · LW(p) · GW(p)
What would be your prior probability for God existing before updating on your own existence?
Replies from: None↑ comment by [deleted] · 2013-05-12T22:33:35.862Z · LW(p) · GW(p)
I have absolutely no idea. Good question. What would be yours?
Replies from: Jack↑ comment by Jack · 2013-05-13T01:18:05.436Z · LW(p) · GW(p)
It's not a well-defined enough hypothesis to assign a number to, but the main thing is that it's going to be very low. In particular, it is going to be lower than a reasonable prior for a universe coming into existence without a creator. The reason existence seems like evidence of a creator, to us, is that we're used to attributing functioning complexity to an agent-like designer. This is the famous Watchmaker analogy that I am sure you are familiar with. But everything we know about agents designing things tells us that the agents doing the designing are always far more complex than the objects they've created. The most complicated manufactured items in the world require armies of designers and factory workers, and they're usually based on centuries of previous design work. Even then, there are probably no manufactured objects in the world that are more complex than human beings.
So if the universe were designed, the designer is almost certainly far more complex than the universe. And as I'm sure you know, complex hypotheses get low initial priors. In other words: a spontaneous Watchmaker is far more unlikely than a spontaneous watch. Now: an apologist might argue that God is different. That God is in fact simple. Actually, they have argued this and such attempts constitute what I would call the best arguments for the existence of God. But there are two problems with these attempts. First, the way they argue that God is simple is based on imprecise, anthropocentric vocabulary that hides complexity. An "omnipotent, omnipresent, omniscient and omnibenevolent creator" sounds pretty simple. But if you actually break down each component into what it would actually have to be computationally it would be incredibly complex. The only way it's simple is with hand-waving magic.
Second, a simple agent is totally contrary to our actual experience with agents and their designs. But that experience is the only thing leading us to conclude that existence is evidence for a designer in the first place. We don't have any evidence that a complex design can come from a simple creator.
This is a more complex and (I think) theoretically sophisticated way of making the same point the rhetorical question "Who created the creator?" makes. The long and short of it is that while existence perhaps is very good evidence for a creator, the creator hypothesis involves so much complexity that the prior for His spontaneous existence is necessarily lower than the prior for the universe's spontaneous existence.
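(One standard way to make "complex hypotheses get low initial priors" quantitative, offered as a sketch only since Jack doesn't name a specific formalism: under a minimum-description-length or Solomonoff-style prior, a hypothesis that takes L(H) bits to specify gets weight

$$P(H) \propto 2^{-L(H)},$$

so if specifying the designer takes k more bits than specifying the universe the designer is supposed to explain, the designer hypothesis starts out roughly 2^k times less probable before any evidence is considered.)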
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-05-16T09:10:44.518Z · LW(p) · GW(p)
An "omnipotent, omnipresent, omniscient and omnibenevolent creator" sounds pretty simple. But if you actually break down each component into what it would actually have to be computationally it would be incredibly complex.
I agree that the "omnibenevolent" part would be incredibly complex (FAI-complete).
But "omnipotent", "omnipresent" and "omniscient" seem much easier. For example, it could be a computer which simulates this world -- it has all the data, all the data are on its hard disk, and it could change any of these data.
Replies from: Jack↑ comment by JoshuaZ · 2013-05-09T16:40:03.602Z · LW(p) · GW(p)
There's quite a bit of evidence against. Absence of expected evidence is evidence of absence.
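(The slogan follows from one line of Bayes; this is the standard derivation, not anything specific to this thread. If a hypothesis H makes evidence E more likely than its negation does, so that P(E | H) > P(E | ¬H), then P(¬E | H) < P(¬E), and for 0 < P(H) < 1

$$P(H \mid \neg E) = \frac{P(\neg E \mid H)}{P(\neg E)}\,P(H) < P(H).$$

Failing to observe the evidence the hypothesis predicts must lower its probability.)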
Replies from: None↑ comment by [deleted] · 2013-05-09T20:15:46.654Z · LW(p) · GW(p)
There's also quite a bit of evidence for, if you bother to listen to sincere believers. Which I do.
Replies from: Intrism, JoshuaZ↑ comment by Intrism · 2013-05-09T20:38:30.807Z · LW(p) · GW(p)
The problem is that "quite a bit" is far, far too little. Though religious people often make claims of religious experience, these claims tend to be quite flimsy and better explained by myriad other mechanisms, including random chance, mental illness, and confirmation bias. Scientists have studied these claims, and thus far well-constructed studies have found them to be baseless.
↑ comment by JoshuaZ · 2013-05-09T20:23:36.245Z · LW(p) · GW(p)
There's also quite a bit of evidence for, if you bother to listen to sincere believers. Which I do.
You may be forgetting here that a lot of people here (including myself) grew up in pretty religious circumstances. I'm familiar with all sorts of claims, ranging from teleological arguments, to ontological arguments, to claims of revelation, to claims of mass tradition, etc. etc. So what do you think is "quite a bit of evidence" in this sort of context? Is there anything remotely resembling the Old Testament miracles for example that happens now?
Replies from: None↑ comment by [deleted] · 2013-05-10T11:12:03.486Z · LW(p) · GW(p)
Yes. They don't casually share them with every skeptic who asks, because miracles are personal, but there is an amazing number of modern miracle stories (among Mormons if not others.) And not just lucky coincidences with easy explanations—real miracles that leave people quite convinced that God is there.
And don't be too hasty to dismiss millions of personal experiences as mental illness.
Replies from: TheOtherDave, JoshuaZ↑ comment by TheOtherDave · 2013-05-10T15:15:09.535Z · LW(p) · GW(p)
I suspect that you and JoshuaZ are unpacking the phrase "Old Testament miracles" differently. Specifically, I suspect they are thinking of events on the order of dividing the Red Sea to allow refugees to pass and then drowning their pursuers behind them.
Such events, when they occur, are not personal experiences that must be shared, but rather world-shaking events that by their nature are shared.
And don't be too hasty to dismiss millions of personal experiences as mental illness.
First of all, Joshua didn't bring up mental illness here. But since you do: how hasty is "too" hasty? To say that differently: in a community of a billion people, roughly how many hallucinations ought I expect that community to experience in a year?
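(Purely to give a sense of scale, with a rate that is an illustrative assumption rather than a measured figure: if only one person in a thousand has a single vivid hallucinatory or misattributed experience per year, then a community of a billion people generates about

$$10^{9} \times 10^{-3} = 10^{6}$$

such experiences every year, i.e. on the order of a million "personal miracles" annually even under that deliberately conservative assumption.)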
↑ comment by JoshuaZ · 2013-05-10T16:02:09.492Z · LW(p) · GW(p)
Yes. They don't casually share them with every skeptic who asks, because miracles are personal, but there is an amazing number of modern miracle stories (among Mormons if not others.) And not just lucky coincidences with easy explanations—real miracles that leave people quite convinced that God is there.
Curiously, nearly identical claims are made by other religions also. For example, you see similar statements in the chassidic branches of Judaism.
But it isn't at all clear why in this sort of context miracles should be at all "personal" and even then, it doesn't really work. The scale of claimed miracles is tiny compared to those of the Bible. One has things like the splitting of the Red Sea, the collapse of the walls of Jericho, the sun standing still for Joshua, the fires on Mount Carmel, etc. That's the scale of classical miracles, and even the most extreme claims of personal miracles don't match up to that.
And don't be too hasty to dismiss millions of personal experiences as mental illness.
They aren't all mental illness. Some of them are seeing coincidences as signs when they aren't, and remembering things happening in a more extreme way than they did. Eyewitnesses are extremely unreliable. And moreover, should I then take all the claims by devout members of other faiths as evidence too? If so, this seems like a deity that is oddly willing to confuse people. What's the simplest explanation?
↑ comment by Desrtopa · 2013-05-09T16:18:18.633Z · LW(p) · GW(p)
I would venture a guess that atheists who haven't put thought into the possibility of there being a god are significantly in the minority. Although there are some who dismiss the notion as an impossibility, or such a severe improbability as to be functionally the same thing, in my experience this is usually a conclusion rather than a premise, and it's not necessarily an indictment of a belief system that a conclusion be strongly held.
Some Christians say that "all things testify of Christ." Similarly, Avicenna was charged with heresy for espousing a philosophy which failed to affirm the self-evidence of Muslim doctrine. But cultures have not been known to adopt Christianity, Islam, or any other particular religion which has been developed elsewhere, independent of contact with carriers of that religion.
If cultures around the world adopted the same religion, independently of each other, that would be a very strong argument in favor of that religion, but this does not appear to occur.
Replies from: None, Eugine_Nier↑ comment by [deleted] · 2013-05-10T10:42:37.587Z · LW(p) · GW(p)
Although there are some who dismiss the notion as an impossibility, or such a severe improbability as to be functionally the same thing, in my experience this is usually a conclusion rather than a premise
OK, that works. But what evidence do we have that unambiguously determines that there is no deity? I'd love to hear it. Not just evidence against one particular religion. Active evidence that there is no God, which, rationally taken into account, gives a chance of ~0 that some deity exists.
Replies from: Intrism, Desrtopa↑ comment by Intrism · 2013-05-10T16:16:15.038Z · LW(p) · GW(p)
What evidence of no deity could you possibly expect to see? If there were no God, I wouldn't expect there to be any evidence of the fact. In fact, if I were to find the words "There is no God, stop looking" engraved on an atom, my conclusion would not be "There is no God," but rather (ignoring the possibility of hallucination) "There is a God or some entity of similar power, and he's a really terrible liar." Eliezer covers this sort of thing in his sequence entry You're Entitled to Arguments But Not That Particular Proof.
If you really want to make this argument, describe a piece of evidence that you would affirmatively expect to see if there were no God.
Replies from: None↑ comment by [deleted] · 2013-05-10T16:39:45.701Z · LW(p) · GW(p)
Right, I don't see how there could be any evidence to convince a person to the point of a 0.0001 chance of God. And so when all of these people say that they've concluded that the chance of God is negligible, I think that they're subject to a strong cognitive bias, worsened by the fact that they're supposed to be immune to such biases.
Replies from: Prismattic, Intrism↑ comment by Prismattic · 2013-05-10T17:35:52.794Z · LW(p) · GW(p)
Two things that your perspective appears to be missing here:
1) Lots of people here were raised in religious families; they didn't start out privileging atheism. (Or they aren't atheists per se; I'm agnostic between atheism and deism; it's just the anthropomorphic interventionist deity I reject.)
2) You aren't the first believer to come here and present the case you are trying to make. See, for example, the rather epic conversation with Aspiringknitter here. You aren't even the first Mormon to make the case here. Calcsam has been quite explicit about it.
Note that both of those examples are people who've accumulated quite a bit of karma on LessWrong. People give them a fair hearing. They just don't agree that their arguments are compelling.
Replies from: None↑ comment by [deleted] · 2013-05-10T18:18:00.485Z · LW(p) · GW(p)
Thank you for pointing out perceived fundamental flaws. It's so much more helpful than disputing technical details.
1) I know that. However, I would guess that most people here have fully privileged atheism since the time they started considering themselves rationalists, and this is a big difference.
2) I was aware of that too; however, thanks for the specific links. I certainly got on here loudly proclaiming that I was religious; however, my original stated purpose was not to start an argument. That said, I really was asking for it, and when people argued, I argued back. Where I live it's so hard to find people willing to have an intellectual debate about this sort of thing. So if I did something "taboo," I apologize. But the reaction I've gotten suggests that people are interested in what I've said, and so my thoughts were worth something at least.
I suppose that when this thread resolves itself I'll make a grand post on the welcome page just like AspiringKnitter did.
Replies from: Prismattic↑ comment by Prismattic · 2013-05-10T18:36:51.910Z · LW(p) · GW(p)
Let me see if I can explain my objection to (1) a different way. Rationalists do not privilege atheism. They privilege parsimony. This is basically a tautology. The only way to subscribe to both rationality and theistic religion is compartmentalization. Saying you want to be rational and a theist is equivalent to saying you want to make a special exception to the principles you follow in every other situation when the subject of God comes up. That's going to take a particular kind of strong argument.
Replies from: None↑ comment by [deleted] · 2013-05-10T18:45:21.627Z · LW(p) · GW(p)
Rationalists do not privilege atheism
You're telling me that it's essentially impossible to be a theist and fully rational. You're saying that not only do rationalists privilege atheism, but in fact they have to follow it by definition, unless they manage to deceive themselves.
I disagree with your objection and I believe that it is possible to reconcile rationality and religion.
Replies from: Prismattic↑ comment by Prismattic · 2013-05-10T18:54:07.070Z · LW(p) · GW(p)
That is not the case. Observing something for which one can provide no natural explanation is going to cause a rationalist to increase their probability estimate for the supernatural. It's not going to increase it to near certainty, because the mysteriousness of the universe is a fact about the limits of our own understanding, not about the universe, so it's still possible that something we can't explain has natural causes we don't yet have the ability to measure or explain. But it will cause the estimate to rise. And if inexplicable things keep happening, their estimate will keep rising.
The question, though, is whether there is anything that could ever cause you to lower your estimate of the probability that your religion is correct. If the answer is no, then you're not being rational right off the bat, and your quest is doomed.
Replies from: None↑ comment by [deleted] · 2013-05-10T19:04:56.051Z · LW(p) · GW(p)
The only way to subscribe to both rationality and theistic religion is compartmentalization
What do you mean by compartmentalization, then, if it's not a bad thing? Sounds to me like it's sacrificing internal consistency.
The question, though, is whether there is anything that could ever cause you to lower your estimate of the probability that your religion is correct. If the answer is no, then you're not being rational right off the bat, and your quest is doomed.
That's true. I actively go looking for things that might challenge my faith, and come out stronger because of it. That's partly why I'm here.
Replies from: drethelin↑ comment by drethelin · 2013-05-15T04:40:49.096Z · LW(p) · GW(p)
compartmentalization IS a bad thing if you care about internal consistency and absolute truth. It's a great thing if you want to hold multiple useful beliefs that contradict each other. You might be happier and more productive, as I'm sure many are, believing that we should expect the world to work based on evidence except insofar as it conflicts with your religion, where it should work on faith.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-05-16T02:16:12.560Z · LW(p) · GW(p)
Also, premature decompartmentalizing can be dangerous. There are many sets of (at least mostly) true ideas where it's a lot harder to reconcile them than to understand either individually.
↑ comment by Intrism · 2013-05-10T18:11:53.592Z · LW(p) · GW(p)
The problem is that you're not being consistent in your handling of unfalsifiable theories. A lot of what's been brought to the table are Russell's Teapot-type problems and other gods, but I think I can find one that's a bit more directly comparable. I'll present a theory that's entirely unfalsifiable, and has a fair amount of evidence supporting it. This theory is that your friends, family, and everyone you know are government agents sent to trick you for some unclear reason. It's a theory that would touch every aspect of your life, unlike a Russell's Teapot. There's no way to falsify this theory, yet I assume you're assigning it a negligible probability, likely .0001 or even less. To remain consistent with your position on religion, you must either accept that there's a significant chance you're trapped in some kind of evil simulation run by shadowy G-Men, or accept that the impossibility of counterevidence isn't actually a good argument in favor of something. (Which still wouldn't mean that you'd have to turn atheist - as you've mentioned, there is some evidence for religion, even if the rest of us think it's really terrible evidence.)
Replies from: None↑ comment by [deleted] · 2013-05-10T19:01:11.901Z · LW(p) · GW(p)
First of all, in an intellectual debate, you don't go around telling someone that they're cornered. That ought to raise all sorts of red flags as to your logic, but in fact I'm perfectly happy to accept both of those propositions.
I would quite agree that there's a chance worth considering that I'm the center of a government conspiracy. (It's got a name.) I don't have any idea how that chance actually ranks in my mind, and any figure I did give would be a Potemkin (a complete guess). But it's entirely possible.
the impossibility of counterevidence isn't actually a good argument in favor of something
The impossibility (according to some) of counterevidence against atheism (i.e. evidence for God) does not provide any evidence whatsoever in favor of atheism, even though I keep being told that absence of evidence is evidence of absence, and the impossibility of evidence certainly implies an absence of evidence. The impossibility of counterevidence against God (i.e. evidence for atheism) does not mean that God exists. Granted. I've never tried to use that argument, even if some theists do.
However, the fact that it isn't an argument in favor of religion surely doesn't mean that it's an argument in favor of atheism. Jeez.
And thank you for admitting that there is at least a tiny bit of evidence for religion. It would be really silly not to.
Replies from: Intrism↑ comment by Intrism · 2013-05-10T19:13:43.435Z · LW(p) · GW(p)
First of all, in an intellectual debate, you don't go around telling someone that they're cornered.
No, my understanding is that it's a fairly typical tactic.
I would quite agree that there's a chance worth considering that I'm the center of a government conspiracy. (It's got a name.) I don't have any idea how that chance actually ranks in my mind, and any figure I did give would be a Potemkin (a complete guess). But it's entirely possible.
Yes, I was indeed thinking of the Truman Show Delusion. My point, though, is that it shouldn't be any less credible than religion to you, meaning that you should be acting on that theory to a similar degree as you act on religion.
The impossibility (according to some) of counterevidence against atheism (i.e. evidence for God) does not provide any evidence whatsoever in favor of atheism
Counterevidence for atheism is not impossible at all, as people have been saying up and down the thread. If the skies were to open up, and angels were to pour down out of the breach as the voice of God boomed over the landscape... that would most certainly be counterevidence for atheism. (Not conclusive counterevidence, mind. I might be insane, or it could be the work of hyperintelligent alien teenagers. But it would be more than enough evidence for me to convert.) And, in less dramatic terms, a simple well-designed and peer-reviewed study demonstrating the efficacy of prayer would be extremely helpful. There are even those miracles you've been talking about, although (again) most of us consider it poor evidence.
Replies from: None↑ comment by [deleted] · 2013-05-10T19:23:39.174Z · LW(p) · GW(p)
No, my understanding is that it's a fairly typical tactic.
Sure, cornering your opponent in her arguments is a very common tactic, but it seems a bit silly to go telling me you've succeeded in it. In any case, I sure don't feel cornered. :)
you should be acting on the theory to a similar degree as you act on religion.
See, I've got evidence for religion. What's my evidence for the Truman Show?
Counterevidence for atheism is not impossible
Not conclusive counterevidence, mind.
most of us consider it poor evidence.
QED. Counterevidence, yes, but not any conclusive or good or rational counterevidence.
Replies from: Intrism, Prismattic↑ comment by Intrism · 2013-05-10T19:27:08.673Z · LW(p) · GW(p)
What's my evidence for the Truman Show?
If you actually believed in the Truman Show hypothesis? Confirmation bias would provide a whole pile of evidence. Every time someone you know stutters, or someone stares at you from across the lunchroom, or the whole room goes quiet as you enter. Whenever there's been a car following you for more than three blocks, especially if it's a black SUV. Certain small things will happen by chance to support any theory. We'd argue that the same bias is likely responsible for most reports of miracles, by the way.
QED. Counterevidence, yes, but not any conclusive or good or rational counterevidence.
By "conclusive," I mean "assigning it probability of 1, not rounded or anything, just 1, there must be a god, case closed." But, rationalists don't believe that about any evidence, about anything. And we shouldn't, as you've been saying all this time about probability 0. The evidence I posited would, on the other hand, be extremely good rational evidence and I don't want to diminish that at all.
↑ comment by Prismattic · 2013-05-10T19:33:58.836Z · LW(p) · GW(p)
Downvoted for paraphrasing Intrism in a way that does not reflect what he actually said in your third quote.
See, I've got evidence for religion. What's my evidence for the Truman Show?
What's your evidence for religion? It's one thing for you to claim that your own estimate for the truth of your religion is high based on supposedly strong evidence that you refuse to share. It's quite another to expect anyone else to move their estimate.
Replies from: None, None↑ comment by [deleted] · 2013-05-10T20:32:37.435Z · LW(p) · GW(p)
What's your evidence for religion? It's one thing for you to claim that your own estimate for the truth of your religion is high based on supposedly strong evidence that you refuse to share. It's quite another to expect anyone else to move their estimate.
I'm not expecting to convince you to move your estimate using my evidence—some of it is personal, and the rest would likely be rejected out of hand. No, that's just why I believe in religion rather than the Truman Show.
As for you, I think it's totally fine for you to rank the Truman Show as high as religion, given your rejection of practically all the evidence in favor of either. As long as you keep a real possibility for both.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-05-10T20:43:18.895Z · LW(p) · GW(p)
I hope you do not feel bad because of some overzealous atheists here ganging up on you. This specific facet of epistemic rationality is only a small part of the site. And kudos for being instrumentally rational and not letting yourself be bullied into discussing your specific evidence. This would certainly not be useful to anyone. Most people are good at compartmentalizing, and we don't have to be uniformly rational to benefit from bits and pieces here and there.
Replies from: None↑ comment by [deleted] · 2013-05-10T21:11:37.081Z · LW(p) · GW(p)
No, don't worry about my feelings. I wouldn't have "come out" immediately, or probably posted anything in the first place, if I wasn't sure I could survive it. I mean, yes, of course I feel like everyone's ganging up on me, but I could hardly expect them to do otherwise given the way I've been acting.
Thanks...I'm trying to be rational, I certainly am. And I'm delighted to find other people who are willing to think this way. You could never have this discussion where I'm from, except with someone who either is on this site or ought to be.
↑ comment by Desrtopa · 2013-05-10T14:09:41.407Z · LW(p) · GW(p)
Well, as I linked previously, absence of evidence is evidence of absence. If God were a proposition which did not have low probability in the absence of evidence, then it would be unique in that respect.
I'm prepared to argue in favor of the propositions that we do not have evidence favoring God over no God, and that we have no reason to believe that god has uniquely high probability in absence of evidence. Would that satisfy you?
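(A minimal sketch of the Bayesian point behind that slogan, with the symbols H and E introduced purely for illustration and not taken from the comments above: if a hypothesis H makes an observation E more likely than not-H does, then failing to observe E must lower the odds on H, though only slightly when P(E|H) is small to begin with.)

$$
P(E \mid H) > P(E \mid \lnot H)
\;\Rightarrow\;
P(\lnot E \mid H) < P(\lnot E \mid \lnot H)
\;\Rightarrow\;
\frac{P(H \mid \lnot E)}{P(\lnot H \mid \lnot E)}
= \frac{P(\lnot E \mid H)}{P(\lnot E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}
< \frac{P(H)}{P(\lnot H)}.
$$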
Replies from: None↑ comment by [deleted] · 2013-05-10T14:39:43.419Z · LW(p) · GW(p)
This "in the absence of evidence" theme is popping up all over but doesn't seem to be getting anywhere new or useful. I'm going to let it be.
And I'm not momentarily interested in a full-blown argument about the nature of the evidence for and against God. I believe there is evidence of God; you believe there is none, which is practically as good as evidence that there is no God. We can talk over each other about that for hours with no one the wiser. I shouldn't be surprised that any debate about this boils down to the evidence—but the nature of the evidence (remember, we've been over this) means that it's really impossible to firmly establish one side or the other.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T14:55:31.976Z · LW(p) · GW(p)
And I'm not momentarily interested in a full-blown argument about the nature of the evidence for and against God. I believe there is evidence of God; you believe there is none, which is practically as good as evidence that there is no God. We can talk over each other about that for hours with no one the wiser. I shouldn't be surprised that any debate about this boils down to the evidence—but the nature of the evidence (remember, we've been over this) means that it's really impossible to firmly establish one side or the other.
Why is that?
If god were really communicating and otherwise acting upon people, as you suggest, there's no reason to suppose this should be indistinguishable from brain glitches, misunderstandings, and exaggerations. I think that the world looks much more like we should anticipate if these things are going on in the absence of any real god than we should expect it to look like if there were a real god. You could ask why I think that. A difference of anticipation is a meaningful disagreement to follow up on.
You might want to check out this post. The idea that we can't acquire evidence that would promote the probability of religious claims is certainly not one we can take for granted.
Replies from: None↑ comment by Eugine_Nier · 2013-05-15T01:23:57.614Z · LW(p) · GW(p)
But cultures have not been known to adopt Christianity, Islam, or any other particular religion which has been developed elsewhere, independent of contact with carriers of that religion.
The same is true of science.
Replies from: drethelin↑ comment by drethelin · 2013-05-15T04:38:10.257Z · LW(p) · GW(p)
if you define "science" as carrying on in the tradition of Bacon, sure. But that didn't stop the Greeks from making the Antikythera device long before he existed. Astronomy has been independently discovered by druids, Mesoamerican cultures, the Far East, and countless others where "independent" is more vague. If you consider "science" as a process of invention as well as research and discovery, there are also tons of examples, e.g. in http://en.wikipedia.org/wiki/History_of_science_and_technology_in_China#Magnetism_and_metallurgy and so on, of inventions that were achieved in vastly different places seemingly independently at different times. Movable type is still movable type whether invented in China or by Gutenberg. On the other hand, Loki is not Coyote.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-05-16T02:09:10.891Z · LW(p) · GW(p)
On the other hand, Loki is not Coyote.
A lot of actual pagans may disagree with you. True, there are some differences between the cults involved, but there are also differences between Babylonian and Chinese mathematics. (As for your example of Greek science, much of it is on the same causal path that led to Bacon.)
↑ comment by JoshuaZ · 2013-05-09T15:59:31.263Z · LW(p) · GW(p)
Have most atheists honestly put thought into what if there actually was a God?
Many people here grew up in religious settings. Eliezer, for example, comes from an Orthodox Jewish family. So yes, a fair number have given thought to this.
people honestly believe they've been personally contacted by God.
Curiously many different people believe that they've been contacted by God, but they disagree radically on what this contact means. Moreover, when they claim to have been contacted by God but have something that doesn't fit a standard paradigm, or when they claim to have been contacted by something other than God, we frequently diagnose them as schizophrenic. What's the simplest explanation for what is going on here?
Replies from: None↑ comment by [deleted] · 2013-05-09T20:08:38.412Z · LW(p) · GW(p)
Simple explanations are good, but not necessarily correct. It's awfully easy to say they're all nutcases, but it's still easy and a bit more fair to say that they're mostly nutcases but maybe some of them are correct. Maybe. I think it's best to give it a chance at least.
Replies from: ArisKatsaris, Bugmaster, JoshuaZ↑ comment by ArisKatsaris · 2013-05-09T20:53:00.608Z · LW(p) · GW(p)
It's awfully easy to say they're all nutcases, but it's still easy and a bit more fair to say that they're mostly nutcases but maybe some of them are correct. Maybe. I think it's best to give it a chance at least.
Openmindedness in these respects has always seemed to me highly selective -- how openminded are you to the concept that most thunderbolts may be mere electromagnetic phenomena but maybe some thunderbolts are thrown down by Thor? Do you give that possibility a chance? Should we?
Or is it only the words that current society treats seriously e.g. "God" and "Jesus", that we should keep an open mind about, and not the names that past societies treated seriously?
Replies from: None↑ comment by [deleted] · 2013-05-10T10:45:45.253Z · LW(p) · GW(p)
how openminded are you to the concept that most thunderbolts may be mere electromagnetic phenomena but maybe some thunderbolts are thrown down by Thor? Do you give that possibility a chance? Should we?
If billions of people think so, then yes, we should.
It's not just that our society treats Jesus seriously, it's that millions of people have overwhelming personal evidence of Him. And most of them are not rationalists, but they're not mentally insane either.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T15:02:03.633Z · LW(p) · GW(p)
Is the number of people really all that relevant?
I mean, there are over a billion people in the world who identify as believers of Islam, many of whom report personal experiences which they consider overwhelming evidence that there is no God but Allah, and Mahomet is His Prophet. But I don't accept that there is no God but Allah. (And, I'm guessing, neither do you, so it seems likely that we agree that the beliefs of a billion people are at least sometimes not sufficient evidence to compel confidence in an assertion.)
Going the other way, there was a time when only a million people reported personal evidence of Jesus Christ as Lord.
There was a time when only a hundred thousand people had.
There was a time when only a thousand people had.
Etc.
And yet, if Jesus Christ really is Lord, a rationalist wants to believe that even in 13 A.D., when very few people claim to. And if he is not, a rationalist wants to believe that even in 2013 A.D. when billions of people claim to.
I conclude that the number of people just isn't that relevant.
Replies from: None↑ comment by [deleted] · 2013-05-10T15:12:05.772Z · LW(p) · GW(p)
I think that if in 13 A.D. you had asked a rationalist whether some random Nazarene kid was our savior, "almost certainly not" would have been the correct response given the evidence. But twenty years later, after a whole lot of strong evidence came out, that rationalist would have adjusted his probabilities significantly. The number of people who were brought up in something doesn't matter, but given that there are millions if not billions of personal witnesses, I think God is a proposition to which we ought to give a fair chance.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T15:27:12.295Z · LW(p) · GW(p)
given that there are millions if not billions of personal witnesses, I think God is a proposition to which we ought to give a fair chance.
And by "God" here you specifically mean God as presented in the Church of Jesus Christ of Latter-Day Saints' traditional understanding of the Book of Mormon, and our collective traditional understandings of the New Testament insofar as they don't contradict each other or that understanding of the Book of Mormon, and our traditional understandings of the Old Testament insofar as they don't contradict each other or any of the above.
Yes?
But you don't mean God as presented in, for example, the Sufis' traditional understanding of the Koran, and our collective traditional understandings of the New Testament insofar as they don't contradict each other or that understanding of the Koran, and our traditional understandings of the Old Testament insofar as they don't contradict each other or any of the above.
Yes?
Is this because there are insufficient numbers of personal witnesses to the latter to justify such a fair chance?
Replies from: None↑ comment by [deleted] · 2013-05-10T15:46:25.573Z · LW(p) · GW(p)
I mean deity or God in general. Because although they don't agree on the details, these billions of people agree that there is some sort of conscious higher Power. And they don't have to contradict each other in that.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T16:05:53.039Z · LW(p) · GW(p)
Well... hm.
Is there sufficient evidence, on your account, to conclude (or at least take very seriously the hypothesis) that Thomas Monson communicates directly with a conscious higher Power in a way that you do not?
Is there sufficient evidence, on your account, to conclude (or at least take very seriously the hypothesis) that Sun Myung Moon communicated directly with a conscious higher Power in a way that you do not?
↑ comment by [deleted] · 2013-05-10T16:20:23.143Z · LW(p) · GW(p)
I think it's too difficult to take this reasoning into specific cases. That is, with the general reasoning I've been talking about, I'm going to conclude that it's best to take the general possibility of deity seriously.
Given that, and given my upbringing and personal experience and everything else, I think that it's best to take Thomas Monson very seriously. I hardly know anything about Sun Myung Moon so I can't say anything about him.
I can't possibly ask you to do that second part, but I think that the possibility of deity in general is a cause I will fight for. (edit: clarified)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T17:09:16.845Z · LW(p) · GW(p)
I see.
So on your account, if I've understood it, I have sufficient evidence to justify a high confidence in a conscious higher Power consistent with the accounts of all believers in Abrahamic religions, though not necessarily identical to that described in any of those accounts, and the fact that I lack such confidence is merely because I haven't properly evaluated the evidence available to me.
Yes?
Just to avoid confusion, I'm going to label that evidence -- the evidence I have access to on this account -- E1.
Going further: on your account, you have more evidence than E1, given your upbringing and personal experience and everything else, and your evidence (which I'll label E2) is sufficient to further justify a high confidence in additional claims, such as Thomas Monson's exceptional ability to communicate with that Power.
Yes?
And since you lack personal experiences relating to Sun Myung Moon that justify a high confidence in similar claims about him, you lack that confidence, but you don't rule it out either... someone else might have evidence E3 that justifies a high confidence in Sun Myung Moon's exceptional ability to communicate with that Power, and you don't claim otherwise, you simply don't know one way or the other.
Yes?
OK, so far so good.
Now, moving forward, it's worth remembering that personal experience of an event V is not our only, or even our primary, source of evidence with which to calculate our confidence in V. As I said early on in our exchange, there are many events I'm confident occurred which I've never experienced observing, and some events which I've experienced observing which I'm confident never occurred, and I expect this is true of most people.
So, how is that possible? Well, for example, because other people's accounts of an event are evidence that the event occurred, as you suggest with your emphasis on the mystical experiences of millions (or billions) of people as part of E1. Not necessarily compelling evidence, because people do sometimes give accounts of events that didn't occur, but evidence worth evaluating.
Yes?
Of course, not all such accounts are equally useful as evidence. You probably don't know Thomas Monson personally, but you still take seriously the proposition that he is a Prophet of YHWH, primarily on the basis of the accounts of a relatively small number of people whom you trust (due to E2) to be sufficiently reliable evaluators of evidence.
Yes?
(A digression on terminology: around here, we use "rational" as a shorthand which entails reliably evaluating evidence, so we might semi-equivalently say that you trust this group to be rational. I'm avoiding that jargon in this discussion because you're new to the community and "rational" in the broader world has lots of other connotations that might prove distracting. OTOH, "sufficiently reliable evaluator of evidence" is really tedious to type over and over, which is why we don't usually say that, so I'm going to adopt "SREoE" as shorthand for it here.)
Moving on: you don't know Sun Myung Moon personally, but you don't take seriously the proposition that he is a Prophet of the higher Power, despite the similar accounts of a relatively small number of people, presumably because you don't trust them to be SREoEs.
Yes?
And similarly, you don't expect me to take seriously the proposition that Thomas Monson is a Prophet of the higher Power, not only because I lack access to E2, but also because you don't expect me to trust you as a SREoE. If I did (for whatever reason, justified or not) trust you to be a SREoE, I would take that proposition seriously.
Yes?
Pausing here to make sure I haven't gone off the rails.
Replies from: None↑ comment by [deleted] · 2013-05-10T18:10:58.477Z · LW(p) · GW(p)
Yes, actually, that's spot on. Good job and thank you for helping me to figure out my own reasoning. Please continue...
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T19:32:28.087Z · LW(p) · GW(p)
OK, good.
So, summarizing your account as I understand it and continuing from there:
Consider five propositions G1-G5 roughly articulable as follows:
G1: "there exists a conscious higher Power consistent with the accounts A1 of all believers in Abrahamic religions, though not necessarily identical to that described in any particular account in A1"
G2: "there exists a conscious higher Power consistent with the accounts A2 of Thomas Monson, where A2 is a subset of A1; any account Antm which is logically inconsistent with A2 is false."
G3: "there exists a conscious higher Power consistent with the accounts A3 of Sun Myung Moon, where A3 may or may not be a subset of A1; any account Ansmm which is logically inconsistent with A3 is false."
G4: "there exists a conscious higher Power consistent with the accounts A4 of all believers in any existing religion, Abrahamic or otherwise, though not necessarily identical to that described in any particular account in A4"
G5: "there exists a conscious higher Power consistent with the accounts A5 of some particular religious tradition R, where A5 is logically inconsistent with A1 and A2."2: On your account there exists evidence, E1, such that a SREoE would, upon evaluating E1, arrive at high confidence in G1. Further, I have access to E1, so if I were an SREoE I would be confident in G1, and if I lack confidence in G1 I am not an SREoE.
3: On your account there exists evidence E2 that similarly justifies high confidence in G2, and you have access to E2, though I lack such access.
4: If there are two agents X and Y, such that X has confidence that Y is an SREoE and that Y has arrived at high confidence of a proposition based on some evidence, X should also have high confidence in that proposition even without access to that evidence.
Yes? (I'm not trying to pull a fast one here; if the above is significantly mis-stating any of what you meant to agree to, pull the brake cord now.)
And you approached this community seeking evidence that we were SREoEs -- specifically, seeking evidence that we had engaged with E1 in a sufficiently open-minded way, which an SREoE would -- and you have concluded that no, we haven't, and we aren't.
Yes?
And because of that conclusion, you don't reduce your confidence in G1 based on our interactions, because the fact that we haven't concluded G1 from E1 is not compelling evidence that #2 above is false, which it would be if we were SREoEs.
Yes?
So, given all of that, and accepting for the sake of argument that I wish to become an SREoE, how would you recommend I proceed?
And is that procedure one you would endorse following if, instead of engaging with you, I were instead engaging with someone who claimed (2b) "There exists evidence, E5, such that a SREoE would, upon evaluating E5, arrive at high confidence in G5. Further, Dave has access to E5, so if Dave were an SREoE he would be confident in G5, and if Dave lacks confidence in G5 he is not an SREoE."?
Replies from: None↑ comment by [deleted] · 2013-05-11T19:18:48.304Z · LW(p) · GW(p)
I don't think I can claim that your rejection of E1 means you are not a SREoE—this community is by far more SR in EE, the way we're talking about it at least, than those who believe G1. I'm not going to go around calling anyone irrational as long as their conclusions do come from a proper evaluation of the evidence.
I can't really claim E2 is that much stronger than E1—many people have access to E2 but don't believe G2.
What I'm trying to figure out is if this community thinks that any SREoE must necessarily reject G1 (based largely on the inconsistency of E1). I'm not claiming that a SREoE must accept G1 upon being exposed to E1.
But assuming I did claim that I was a SREoE and you all weren't...no, I don't know. Because being a SREoE equates almost completely in my mind with being a rationalist in the ideal sense that this community strives for. That doesn't mean everyone here is a SREoE, but most of them appear to be doing their best.
I'm curious, though, where else could this logic lead?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-11T20:13:16.076Z · LW(p) · GW(p)
What I'm trying to figure out is if this community thinks that any SREoE must necessarily reject G1 (based largely on the inconsistency of E1). I'm not claiming that a SREoE must accept G1 upon being exposed to E1.
I get that you're trying to be polite and all, and that's nice of you.
Politeness is important, and the social constraints of politeness are a big reason I steered this discussion away from emotionally loaded terms like "rational," "irrational," "God," "faith," etc. in the first place; it's a lot easier to discuss what confidence a SREoE places in G1 given E1 without getting offended or apologetic or defensive than to discuss whether belief in God is rational or irrational, because the latter formulation carries so much additional cultural and psychological weight.
But politeness aside, I don't see how what you're saying can possibly be the case given what you've already agreed to. If E1 entails high confidence in G1, then an SREoE given E1 concludes that G1 is much more likely than NOT(G1), and an agent that does not conclude this is not an SREoE. That's just what it means for evidence to entail a given level of confidence in a conclusion, be it a low level or a high level.
Which means that if you're right that I have evidence that entails reasonably high confidence in the existence of God, then my vanishingly low confidence in the existence of God means I'm not being rational on the subject. Maybe that's rude to say, but rude or not that's just what it means for me to have evidence that entails reasonably high confidence in the existence of God.
And I get that you're looking for the same kind of politeness in return... that we can believe or not believe whatever we want, but as long as we don't insist it's irrational to conclude from available evidence that God exists, we can all get along.
And in general, we're willing to be polite in that way... most of us have stuff in our lives we don't choose to be SREoEs about, and going around harassing each other about it is a silly way to spend our time. There are theists of various stripes on LW, but we don't spend much time arguing about it.
But if you insist on framing the discussion in terms of epistemic rationality then, again, politeness aside, that doesn't really work. If E1 entails low confidence in G1, then an SREoE given E1 concludes that G1 is much less likely than NOT(G1), and an agent that does not conclude this is not an SREoE. That's just what it means for evidence to entail a given level of confidence in a conclusion, be it a low level or a high level.
Or, expressed in the more weighted way: either we have shared evidence that entails high confidence in the existence of God and I'm not evaluating that evidence as reliably as you are, or we have shared evidence that entails low confidence in the existence of God and you're not evaluating that evidence as reliably as I am.
All the politeness in the world doesn't change that.
All of that said, there's no obligation here to be an SREoE in any particular domain, which is why I started this whole conversation by talking about pragmatic reasons to continue practicing your religion in the first place. If you insist on placing the discussion in the sphere of epistemic rationality, I don't see how you avoid the conclusion, but there's no obligation to do that.
Replies from: None↑ comment by [deleted] · 2013-05-11T21:02:28.754Z · LW(p) · GW(p)
I'm not trying to be nice. Do not interpret the fact that I won't admit to attacking you to mean that I'm trying to be nice—perhaps I'm really not attacking you. I honestly believe that your position is fully self-justified, and I respect it.
Neither am I asking for politeness. I didn't come on here expecting you to be nice, only rational and reasonable, which most people have been. I'd be happy for you all to tell me that it's irrational to conclude that God exists. One of my biggest questions was whether you all thought this was the case. Some of you don't, but if you all did, and undiplomatically told me so, I wouldn't be offended. I might come away disappointed that this community wasn't as open-minded as I had hoped (no accusations intended), but I wouldn't be offended. If you think it's the case, please tell me so, and I will respectfully disagree.
If E1 entails high confidence in G1, then an SREoE given E1 concludes that G1 is much more likely than NOT(G1), and an agent that does not conclude this is not an SREoE. That's just what it means for evidence to entail a given level of confidence in a conclusion, be it a low level or a high level.
I think the biggest problem here is that, as I wrote in the other post, I don't believe there's only one conclusion a rational person (SREoE) can draw from the evidence. I don't believe that there is only one correct "methodology," and so I don't believe that evidence necessarily entails one thing or the other.
Replies from: TheOtherDave, TimS↑ comment by TheOtherDave · 2013-05-11T22:34:55.842Z · LW(p) · GW(p)
I don't believe that there is only one correct "methodology," and so I don't believe that evidence necessarily entails one thing or the other.
I see. I apologize; I missed this the first time you said it.
So, on your view, what does it mean to evaluate evidence reliably, if not that sufficiently reliable evaluations of given evidence will converge on the same confidence in given propositions? What does it mean for a methodology to be correct, if not that it leads a system that implements it to a given confidence in given propositions given evidence?
Or, to put it differently... well, let's back up a step. Why should anyone care about evaluating evidence reliably? Why not evaluate it unreliably instead, or not bother evaluating it at all?
Replies from: None↑ comment by [deleted] · 2013-05-12T20:11:43.627Z · LW(p) · GW(p)
Yeah, I don't really know. It just depends on your paradigm—according to rationalists like yourself, it seems, a cold rational analysis is most "correct" and reliable. For some others, the process involves fasting and prayer. I'm not going to say either is infallible. Certainly logic is a wonderful thing which has its place in our lives. But taken too far it's not always helpful or accurate, especially in us subjective humans.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-12T21:19:56.935Z · LW(p) · GW(p)
Well, I certainly agree about fallibility. Humans don't have access to infallible epistemologies.
That said, if fasting and prayer reliably gets me the most useful confidence levels in propositions for achieving my goals, then I should engage in fasting and prayer because that's part of the most reliable process for evaluating evidence.
If it doesn't, then that's not a reason for me to engage in fasting and prayer, though I may choose to do so for other reasons.
Either one of those things is true, or the other is. And I may not know enough about the world to decide with confidence which it is (though I sure do seem to), but even if I don't, my ignorance doesn't somehow make it the case that they are both true.
Replies from: None↑ comment by [deleted] · 2013-05-12T22:14:24.395Z · LW(p) · GW(p)
Is there no possibility of partly true?
reliably gets me the most useful
These words seem subjective or at the very least unmeasurable. There is no way of determining absolutely whether something is "reliable" or "useful" without ridiculously technical definitions, which ruin the point anyway.
(sorry if I don't respond right away. I've been retributively downvoted to -15 and so LW is giving me a hassle about commenting. The forum programming meant well...)
Replies from: TheOtherDave, wedrifid↑ comment by TheOtherDave · 2013-05-13T02:15:08.671Z · LW(p) · GW(p)
sorry if I don't respond right away. I've been [...] downvoted to -15
That's OK. If we no longer have any way of agreeing on whether propositions are useful, reliable, or true, or agreeing on what it means for propositions to be any of these things, then I don't anticipate the discussion going anywhere from here that's worth my time. We can let it drop here.
↑ comment by wedrifid · 2013-05-13T00:56:33.227Z · LW(p) · GW(p)
(sorry if I don't respond right away. I've been retributively downvoted to -15 and so LW is giving me a hassle about commenting. The forum programming meant well...)
Working as intended. Evangelism of terrible thinking is not welcome here. For most intents and purposes you are a troll. It's time for you to go and time for me to start downvoting anyone who feeds you. Farewell Ibidem (if you, the user behind the handle, ever happen to gain an actual sincere interest in rationality, I recommend creating a new account and making a fresh start.)
↑ comment by TimS · 2013-05-11T22:06:52.537Z · LW(p) · GW(p)
I don't believe there's only one conclusion a rational person (SREoE) can draw from the evidence.
There is one direction a SREoE updates on evidence - towards the evidence.
If I have strong reasons (a high prior probability) for thinking that a coin has heads on both sides, I'm making a mistake by becoming more confident after I flip the coin and it comes up tails.
Likewise, if I have strong reasons for thinking that another coin is biased towards heads, so it turns up heads 60% of the time instead of 50%, I'm committing the same error if I become more confident after seeing the coinflip turn up tails.
So learning E1 should make any SREoE become more confident of G1 unless that person's priors are already very heavily weighed towards G1. In the real world, there just aren't that many SREoE's with high priors on G1 before being exposed to E1.
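(A minimal numerical sketch of the second coin case, in Python; the specific numbers are assumed for illustration rather than taken from the comment above. Even a strong prior that the coin is heads-biased has to drop a little after seeing a tail, because a tail is less likely under that hypothesis than under the fair-coin alternative.)

```python
# Minimal Bayes update for the biased-coin example (illustrative numbers only).
prior_biased = 0.9          # strong prior that the coin is biased 60% toward heads
p_tail_if_biased = 0.4      # P(tails | biased coin)
p_tail_if_fair = 0.5        # P(tails | fair coin)

# Posterior after observing one tail, by Bayes' rule:
posterior_biased = (p_tail_if_biased * prior_biased) / (
    p_tail_if_biased * prior_biased + p_tail_if_fair * (1 - prior_biased)
)
print(posterior_biased)     # ~0.878, down from 0.9: the update moves toward the evidence
```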
Replies from: None↑ comment by [deleted] · 2013-05-11T22:29:52.590Z · LW(p) · GW(p)
In the real world, there just aren't that many SREoE's with high priors on G1 before being exposed to E1.
First of all, note that you effectively just said that nearly all religious people are irrational. I won't hold it against you, just realize that that's the position you're expressing.
If I have strong reasons (high prior probability) of thinking that a coin has heads on both sides, I'm making a mistake by becoming more confident after I flip the coin and it comes up tails.
Obviously. If there is clear evidence against your beliefs, you should decrease your confidence in your beliefs. But the problem is that this situation is not so simple as heads and tails.
What I'm trying to say is that two SREoEs can properly examine E1 and come up with different conclusions. I'm sorry if I agreed too fully to Dave's first set of propositions—the devil's in the details, as we irrational people who believe in a Devil say sometimes.
So on your account, if I've understood it, I have sufficient evidence to justify a high confidence in a conscious higher Power consistent with the accounts of all believers in Abrahamic religions, though not necessarily identical to that described in any of those accounts, and the fact that I lack such confidence is merely because I haven't properly evaluated the evidence available to me. Yes?
The key is "if I haven't properly evaluated the evidence." I took "properly" to mean "in a certain way," while Dave intended it as "in the one correct way." When this became clear, I tried to clarify my position.
I'm going to reiterate, because you don't seem to be getting it: I believe that it's possible for two equally reliable evaluators of evidence to evaluate the same evidence and come up with different conclusions. Thus exposure to E1 does not necessarily entail any confidence-shifting at all, even in a SREoE.
Replies from: Desrtopa, TimS↑ comment by Desrtopa · 2013-05-12T13:41:20.810Z · LW(p) · GW(p)
First of all, note that you effectively just said that nearly all religious people are irrational. I won't hold it against you, just realize that that's the position you're expressing.
I'll pop in here and note that the general point of view here is that everyone is irrational, and even the best of us frequently err. That's why we tend to use the term "aspiring rationalist," since nobody has reached the point of being able to claim to be an ideal rationalist.
The highest standard we can realistically hold people to is to make a genuine effort to be rational, to the best of their abilities, using the information available to them.
Replies from: None↑ comment by [deleted] · 2013-05-12T19:49:21.080Z · LW(p) · GW(p)
That's true. It's not actually "rational" vs. "irrational," even if that would make the situation so much easier to understand.
I hope you'd agree, though, that there are many people in this world (think: evangelicals) who don't make any sort of effort to be rational in the sense you mean it, and even some who honestly think logical inference is a tool of the devil. How sad...but probably no need to worry about them in this thread.
↑ comment by TimS · 2013-05-12T01:59:34.035Z · LW(p) · GW(p)
I believe that it's possible for two [SREoEs] to evaluate the same evidence and come up with different conclusions.
That is possible if and only if the two SREoEs started with different beliefs (priors) before receiving the same evidence. Aumann's Agreement Theorem says that SREoEs who start with the same beliefs and see the same evidence cannot disagree without doing something wrong.
In the real world, there just aren't that many SREoE's with high priors on G1 before being exposed to E1.
I didn't write this clearly. I meant that most human SREoEs who haven't been exposed to E1 don't assign high probability to G1. Theoretically, an SREoE who hadn't been exposed to E1 could have such high confidence in G1 that exposure to E1 should reduce confidence in G1. In practice, I'm not sure any adult human hasn't been exposed to E1 already, and I'm doubtful that most children are SREoEs - thus, I'm not sure whether the set (human&non-E1&SREoE) has any elements in existence.
First of all, note that you effectively just said that nearly all religious people are irrational. I won't hold it against you, just realize that that's the position you're expressing.
I'm saying that people who assign high probability to G1 after exposure to E1 either (a) had very different priors about G1 than I before exposure to E1, or (b) are not SREoEs. Alternatively, I either (a) am not an SREoE, or (b) have not been exposed to the evidence we have referred to as E1.
To put it slightly differently, I can identify evidence that would make me increase the probability I assign to G1. Can you identify evidence that would make you decrease the probability you assign G1?
Replies from: None↑ comment by [deleted] · 2013-05-12T20:06:57.367Z · LW(p) · GW(p)
Aumann's Agreement Theorem says that SREoEs who start with the same beliefs and see the same evidence cannot disagree without doing something wrong.
Perhaps, then, I don't fully agree with Aumann's Agreement Theorem. I'll leave it to you to decide whether that means I'm not a "genuine" Bayesian. I wouldn't have a problem with being unable to fully adopt a single method of thinking about the universe.
In practice, I'm not sure any adult human hasn't been exposed to E1 already, and I'm doubtful that most children are SREoEs
Is it fair to say that most current SREoEs became that way during a sort of rationalist awakening? (I know it's not as simple as being a SREoE or not, and so this process actually takes years, but let's pretend for a moment.) Imagine a child who grows up being fed very high priors about G1. This child (not a SREoE) is exposed to E1 and has a high confidence in G1. When he (/she) grows up and eventually becomes a SREoE, he first of all consciously throws out all his priors (rebellion against parents), then re-evaluates E1 (re-exposure?) and decides that in fact it entails ~G1.
Whether or not this describes you, does it make sense?
I'm saying that people who assign high probability to G1 after exposure to E1 either (a) had very different priors about G1 than I before exposure to E1, or (b) are not SREoEs. Alternatively, I either (a) am not an SREoE, or (b) have not been exposed to the evidence we have referred to as E1.
How about this: since both of you have been exposed to the same evidence and don't agree, then either (a) you had very different priors (which is likely), or (b) you evaluate evidence differently. I'm going to avoid saying either of you is "better" or "more rational" at evaluating evidence.
Replies from: Qiaochu_Yuan, TimS↑ comment by Qiaochu_Yuan · 2013-05-12T20:24:07.579Z · LW(p) · GW(p)
Perhaps, then, I don't fully agree with Aumann's Agreement Theorem.
Whoa there. Aumann's agreement theorem is a theorem. It is true, full stop. Whatever that term "SREoE" means (I keep going up and keep not seeing an explanation), either it doesn't map onto the hypotheses of Aumann's agreement theorem or you are attempting to disagree with a mathematical fact.
Replies from: TimS, BerryPick6, None↑ comment by TimS · 2013-05-13T17:24:45.633Z · LW(p) · GW(p)
Whatever that term "SREoE" means
I believe it was "Sufficiently reasonable evaluator of evidence" - which I was using roughly equivalently to Bayesian empiricist. I'm beginning to doubt that is what ibidem means by it.
TheOtherDave defined it way back in the thread to try to taboo "rationalist," since that word has such a multitude of denotations and connotations (including the LW intended meanings). Edit: terminology mostly defined here and here.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-13T18:26:04.429Z · LW(p) · GW(p)
Sufficiently reliable, but otherwise yes.
That said, we've since established that ibidem and I don't have a shared understanding of "reliable" or "evidence," either, so I'd have to call it a failed/incomplete attempt at tabooing.
↑ comment by BerryPick6 · 2013-05-12T20:38:56.778Z · LW(p) · GW(p)
Whatever that term "SREoE" means (I keep going up and keep not seeing an explanation)
They're using it to mean "sufficiently reliable evaluator of evidence".
↑ comment by [deleted] · 2013-05-12T22:04:18.576Z · LW(p) · GW(p)
mathematical fact
For it to be a mathematical fact, it needs a mathematical proof. Go ahead...!
Like it or not, rationality is not mathematics—it is full of estimations, assumptions, subjective decisions, and wishful thinking. Thus, a "theorem" in evidence evaluation is not a mathematical theorem, obtained using unambiguous formal logic.
If what you mean to say is that Aumann's Agreement "Theorem" is a fundamental building block of your particular flavor of rational thinking, then what this means is simply that I don't fully subscribe to your particular flavor of rational thinking. Nothing (mathematics nearly excepted) is "true, full stop." Remember? 1 is not a probability. That one's even more "true, full stop" than Aumann's ideas about rational disagreement.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-05-13T01:21:43.932Z · LW(p) · GW(p)
For it to be a mathematical fact, it needs a mathematical proof.
Like it or not, rationality is not mathematics
When did I claim that rationality was mathematics?
If what you mean to say is that Aumann's Agreement "Theorem" is a fundamental building block of your particular flavor of rational thinking
When did I say this?
Replies from: None↑ comment by [deleted] · 2013-05-13T15:48:27.761Z · LW(p) · GW(p)
When did I claim that rationality was mathematics?
Right here:
you are attempting to disagree with a mathematical fact.
it needs a mathematical proof.
Here you go.
Maybe not "rationality" exactly but Aumann's work, whatever it is you call what we're doing here. Rational decision-making.
So yes, Aumann's theorem can be proven using a certain system of formalization, taking a certain set of definitions and assumptions. What I'm saying is not that I disagree with the derivation I gave, but that I don't fully agree with its premises.
If what you mean to say is that Aumann's Agreement "Theorem" is a fundamental building block of your particular flavor of rational thinking
When did I say this?
You didn't yet, and I didn't say you did. I'm guessing that that's what you actually mean though, because very, very few things if any are "true, full stop." Something like this theorem can be fully true according to Bayesian statistics or some other system of thought, full stop. If this is the case, then it means I don't fully accept that system of thought. Is disagreement not allowed?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-05-13T16:03:37.807Z · LW(p) · GW(p)
Maybe not "rationality" exactly but Aumann's work, whatever it is you call what we're doing here. Rational decision-making.
How does what I said there mean "rationality is mathematics"? All I'm saying is that Aumann's agreement theorem is mathematics, and if you're attempting to disagree with it, then you're attempting to disagree with mathematics.
What I'm saying is not that I disagree with the derivation I gave, but that I don't fully agree with its premises.
I agree that this is what you should've said, but that isn't what you said. Disagreeing with an implication "if P, then Q" doesn't mean disagreeing with P.
I'm guessing that that's what you actually mean though
No, it's not. I just mean that mathematical facts are mathematical facts and questioning their relevance to real life is not the same as questioning their truth.
Replies from: None↑ comment by [deleted] · 2013-05-13T17:02:03.333Z · LW(p) · GW(p)
Now this just depends on what we mean by "disagree." Of course I can't dispute a formal logical derivation. The math, of course, is sound.
Disagreeing with an implication "if P, then Q" doesn't mean disagreeing with P.
All I disagree with is X, which means either that I don't agree that Q implies X, or I don't accept P.
I'm not questioning mathematical truth. All I'm questioning is what TimS said. But if we agree it was just a misunderstanding, can we move on? Or not. This also doesn't seem to be going anywhere, especially if we've decided we fundamentally disagree. (Which in and of itself is not grounds for a downvote, may I remind you all.)
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-05-13T17:06:37.847Z · LW(p) · GW(p)
I didn't downvote you because we disagree, I downvoted you because you conflated disagreeing with the applicability of a mathematical fact to a situation with disagreeing with a mathematical fact. Previously I downvoted you because you tried to argue against two positions I never claimed to hold.
Replies from: None↑ comment by TimS · 2013-05-13T17:39:50.454Z · LW(p) · GW(p)
Imagine a child who grows up being fed very high priors about G1. This child (not a SREoE) is exposed to E1 and has a high confidence in G1. When he (/she) grows up and eventually becomes a SREoE, he first of all consciously throws out all his priors (rebellion against parents), then re-evaluates E1 (re-exposure?) and decides that in fact it entails ~G1.
This was not my experience. I was raised in a practicing religious family, and the existence of the holy texts, the well-being of the members of the religious community, and the existence of the religious community were all strong evidence for G1.
I reduced the probability I assigned to G1 because I realized I was underweighing other evidence. Things I would expect to be true if G1 were true turned out to be false. I think I knew those facts were false, but did not consider the implications, and so didn't adjust my belief in G1.
Once I considered the implications, it became clear to me that E1 was outweighed by the falsification of other implications of G1. Given that balance, I assign G1 very very low probability of being accurate. But I still don't deny that E1 is evidence of G1. If I didn't know E1, learning it would adjust upward my belief in G1.
Also, if we are going to talk coherently about priors, we can't really describe anything humans do as "throwing out their priors." If we really assign probability zero to any proposition, we have no way of changing our minds again. And if we assign some other probability, justifying that is weird.
In practice, what people seem to mean is best described technically as changing what sorts of things count as evidence. I changed my beliefs about G1 because I started taking the state of the world and the prevalence of human suffering as a fact about G1.
Replies from: None↑ comment by [deleted] · 2013-05-13T17:52:03.104Z · LW(p) · GW(p)
Also, if we are going to talk coherently about priors, we can't really describe anything humans do as "throwing out their priors." If we really assign probability zero to any proposition, we have no way of changing our minds again. And if we assign some other probability, justifying that is weird.
Certainly you can't simply will your aliefs to change, but it does seem to be a conscious and deliberate effort around here. The belief in G1 usually happens without any knowledge about Bayesian statistics, technical rationality, or priors, so this "awakening" may be the first time a person ever thought of E1 as "evidence" in this technical sense.
the prevalence of human suffering
By the way, I think the best response to this argument is that yes, there is evil, but God allows it because it is better for us in the long run—in other words, if there is an afterlife which is partly defined by our existence here, then our temporary comfort isn't the only thing to consider. If we all lived in the Garden of Eden, we would never learn or progress. But I don't want a whole new argument on my hands.
↑ comment by Bugmaster · 2013-05-09T21:50:02.680Z · LW(p) · GW(p)
Maybe. I think it's best to give it a chance at least.
I agree. As soon as a theist can demonstrate some evidence for his deity's existence... well, I may not convert on the spot, given the plethora of simpler explanations (human hoaxers, super-powered alien teenagers, stuff like that), but at least I'd take his religion much more seriously. This is why I mentioned the prayer studies in my original comment.
Unfortunately, so far, no one has managed to provide this level of evidence. For example, a Mormon friend of mine claimed that their Prophet can see the future. I told him that if the Prophet could predict the next 1000 rolls of a fair six-sided die, he could launch a hitherto unprecedented wave of atheist conversions to Mormonism. I know that I personally would probably hop on board (once alien teenagers and whatnot were taken out of the equation somehow). That's all it would take -- roll a die 1000 times, save a million souls in one fell swoop.
I'm still waiting for the Prophet to get back to me...
Replies from: None↑ comment by [deleted] · 2013-05-09T23:11:35.451Z · LW(p) · GW(p)
This one is a classic Sunday School answer. The God I was raised with doesn't do that sort of thing very often because it defeats the purpose of faith, and knowledge of God is not the one simple requirement for many versions of heaven. It is necessary, they say, to learn to believe on your own. Those who are convinced by a manifestation alone will not remain faithful very long. There's always another explanation. So yes, you're right, God (assuming Mormonism is true for a moment, as your friend does) could do that, but it wouldn't do the world much good in the end.
Replies from: JoshuaZ, Bugmaster↑ comment by JoshuaZ · 2013-05-10T01:09:58.411Z · LW(p) · GW(p)
The God I was raised with doesn't do that sort of thing very often because it defeats the purpose of faith,
The primary problem with this sort of thing is that apparently God was willing to do full-scale massive miracles in ancient times. So why the change?
↑ comment by Bugmaster · 2013-05-10T00:05:11.372Z · LW(p) · GW(p)
The God I was raised with doesn't do that sort of thing very often because it defeats the purpose of faith...
Right, but hopefully this explains one of the reasons why I'm still an atheist. From my perspective, gods are no more real than 18th-level Wizards or Orcs or unicorns; I don't say this to be insulting, but merely to bring things into perspective. There's nothing special in my mind that separates a god (of any kind) from any other type of fictional character, and, so far, theists have not supplied me with any reason to think otherwise.
In general, any god who a priori precludes any possibility of evidence for its existence is a very hard (in fact, nearly impossible) sell for me. If I were magically transported from our current world, where such a god exists, into a parallel world where the god does not exist, how would I tell the difference? And if I can't tell the difference, why should I care?
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T00:48:18.055Z · LW(p) · GW(p)
And if I can't tell the difference, why should I care?
Well, if in one world, your disbelief results in you going to hell and being tormented eternally, I think that would be pretty relevant. Although I suppose you could say in that case you can tell the difference, but not until it's too late.
Replies from: Bugmaster↑ comment by Bugmaster · 2013-05-10T00:53:31.497Z · LW(p) · GW(p)
Although I suppose you could say in that case you can tell the difference, but not until it's too late.
Indeed. I have only one of me available, so I can't afford to waste this single resource on figuring things out by irrevocably dying.
↑ comment by JoshuaZ · 2013-05-09T20:35:50.242Z · LW(p) · GW(p)
Simple explanations are good, but not necessarily correct.
Right, simpler explanations start with a higher probability of being correct. And if two explanations for the same data exist, you should assign a higher probability to the one that is simpler.
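One standard way to make "simpler gets a higher prior" precise is a minimum-description-length-style sketch (this is a common formalization, not something established in this thread):

\[
P(H) \;\propto\; 2^{-L(H)}
\]

where L(H) is the length in bits of the shortest description of H. Under that convention, a hypothesis that takes 20 bits to specify starts with 2^{10} = 1024 times less prior probability than one that takes 10 bits, and if both predict the data equally well, that factor survives into the posterior.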
It's awfully easy to say they're all nutcases, but it's still easy and a bit more fair to say that they're mostly nutcases but maybe some of them are correct. Maybe. I think it's best to give it a chance at least.
Why should one give "it a chance," and what does that mean? Note also that "nutcase" is an overly strong conclusion. Human reasoning and senses are deeply flawed, and it is very easy for them to have problems. That doesn't require nutcases. For example, I personally get sleep paralysis. When that occurs, I get to encounter all sorts of terrible things: demons, ghosts, aliens, the Borg, and occasionally strange tentacled things that would make Lovecraft's monsters look tame. None of those things exist; I have a minor sensory problem. The point of using something like schizophrenia as an example is that it is one of the most well-known explanations for the more extreme experiences or belief sets. But the general hypothesis that's relevant here isn't "nutcase" so much as "brain had a sensory or reasoning error, as brains are wont to do."
↑ comment by Bugmaster · 2013-05-09T21:37:55.888Z · LW(p) · GW(p)
Many people would disagree that atheism is the null hypothesis... and in those circles people honestly believe they've been personally contacted by God.
In this case, "there are no gods" is still the null hypothesis, but (from the perspective of those people) it has been falsified by overwhelming evidence. Some kind of overwhelming evidence coming directly from a deity would convince me, as well; but, so far, I haven't see any (which is why I haven't mentioned it in my post, above).
Many won't even accept that there is a possibility, and I think this is just as dangerous as blind faith.
I can't speak for other atheists, but I personally think that it is entirely possible that certain gods exist. For example, I see no reason why the Trimurti (Brahma/Vishnu/Shiva) could not exist in some way. Of course, the probability of their existence is so vanishingly small that it's not worth thinking about, but still, it's possible.
Replies from: None↑ comment by [deleted] · 2013-05-09T22:31:40.958Z · LW(p) · GW(p)
I appreciate that you try to keep the possibility open, but I think it's kind of silly to say that there is a possibility, just a vanishingly small one. Mathematically, there's no sense in saying that an infinitesimal is actually any greater than 0 except for technical reasons—so perhaps you technically believe that the Trimurti could exist, but for all intents and purposes the probability is 0.
Replies from: drethelin↑ comment by drethelin · 2013-05-09T22:50:49.854Z · LW(p) · GW(p)
If you're ruling out infinitesimals, then yes, I don't think there's any chance the gods worshipped by humans exist.
Replies from: None↑ comment by [deleted] · 2013-05-09T23:13:50.292Z · LW(p) · GW(p)
A chance of 0 or effectively 0 is not conducive to a rational analysis of the situation. And I don't think there's enough evidence out there for a probability that small.
Replies from: Bugmaster, drethelin↑ comment by Bugmaster · 2013-05-10T00:11:56.284Z · LW(p) · GW(p)
A chance of 0 or effectively 0 is not conducive to a rational analysis of the situation.
Why not ? What probability would you put on the proposition that the following things exist ?
- Tolkien-style Elves
- Keebler Elves
- Vishnu, the Preserver
- Warhammer-style Orcs
- Thor, the Thunderer
- Chernobog/Bielobog, the Slavic gods of fortune (bad/good respectively)
- Unicorns
I honestly do believe that all of these things could, potentially, exist.
Replies from: None↑ comment by [deleted] · 2013-05-10T10:38:29.389Z · LW(p) · GW(p)
If I really thought about it, I would have to say that there's quite a good chance that somewhere through all the universes there's some creature resembling a Keebler elf.
Replies from: Bugmaster↑ comment by Bugmaster · 2013-05-11T16:40:15.941Z · LW(p) · GW(p)
All right, so does this mean that living your life as though Keebler Elves did not exist at all would be irrational? After all, there's a small probability that they do exist...
Replies from: None↑ comment by [deleted] · 2013-05-11T20:48:20.973Z · LW(p) · GW(p)
I never called anyone irrational for not believing in elves. I only said that a perfectly rational person would keep the possibility open.
Please stop exaggerating my arguments (and those of, for instance, the Book of Mormon) in order to make them easier to dismiss. It's an elementary logical fallacy which I'm finding quite a lot of here.
Replies from: Bugmaster↑ comment by Bugmaster · 2013-05-13T23:27:21.449Z · LW(p) · GW(p)
I never called anyone irrational for not believing in elves.
You kinda did:
A chance of 0 or effectively 0 is not conducive to a rational analysis of the situation.
In my own personal assessment, the probability of Keebler Elves existing is about the same as the probability of any major deities existing -- which is why I don't spend a lot of time worrying about it. My assessment is not dogmatic, though; if I met a Keebler Elf in person, or saw some reputable photographic evidence of one, or something like that, then I'd adjust the probability upward.
Replies from: Prismattic↑ comment by Prismattic · 2013-05-13T23:58:55.477Z · LW(p) · GW(p)
I'd assign a higher probability to Keebler Elves than to an interventionist deity. Keebler Elves don't have issues with theodicy.
Replies from: Bugmaster↑ comment by JoshuaZ · 2013-05-09T04:13:41.572Z · LW(p) · GW(p)
My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking? If you say there aren't any, I won't believe you. I sincerely hope that you aren't afraid to expose your young ones to alternate viewpoints, as some parents and religions are. The optimal situation for you is that you've heard intelligent, thoughtful, rational criticism but your position remains strong.
Do you mean to ask this about the religion issue specifically, or about things in general? Keep in mind that while policy debates should not be one-sided, that's because reality is complicated and doesn't make any effort to make things easy for us. But hypotheses don't function that way: the correct hypotheses really should look extremely one-sided, because they reflect what a correct description of reality is.
So the best arguments for an incorrect hypothesis are by nature going to be weak. But if I were to put on my contrarian arguer hat for a few minutes and give my own personal response, I'd say that first cause arguments are possibly the strongest argument for some sort of deity.
Replies from: None↑ comment by [deleted] · 2013-05-09T14:12:24.533Z · LW(p) · GW(p)
It's a good point. Of course, hundreds of years ago, the argument was also pretty one-sided, but that doesn't mean anyone was correct. I also don't think that the argument really is one-sided today, I just think that the two sides manage to ignore each other quite thoroughly.
I'm not expecting this site to house a debate on the possibility of God's existence. Clearly this site is for atheists. I'm asking, is that actually necessary? I suppose you're saying that yes, it is impossible for rationality and religion to coexist, and that's why there are very few theistic rationalists. I'm still not convinced of that.
First cause arguments are a strange existential puzzle, depending on the nature of your God. Any thought system that portrays God as a sort of person will run into the same problem of how God came into existence.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2013-05-09T16:07:22.346Z · LW(p) · GW(p)
I'm asking, is that actually necessary? I suppose you're saying that yes, it is impossible for rationality and religion to coexist, and that's why there are very few theistic rationalists.
A rationalist should strive to have a given belief if and only if that belief is true. I want to be a theist if and only if theism is correct.
Note also that getting the right answers to these sorts of questions matters far more than some would estimate. If Jack Chick is correct, then most people here (and most of the world) are going to burn in hell unless they are saved. And this sort of remark applies to a great many religious positions (less so for some Muslims, most Jews and some Christians, but the basic point is true for a great many faiths). In the other direction, if there isn't any protective, intervening deity, then we need to take serious threats to humanity's existence, like epidemics, asteroids, gamma ray bursts, nuclear war, bad AI, nanotech, etc., a lot more seriously, because no one is going to pick up the pieces if we mess up.
To a large extent, most LWians see the basics of these questions as well-established. Theism isn't the only thing we take that attitude about. You also won't see here almost any discussion of continental philosophy for example.
Replies from: None↑ comment by [deleted] · 2013-05-09T17:02:25.622Z · LW(p) · GW(p)
So is LW for people who think highly rationally, or for atheists who think highly rationally? Are those necessarily the same? If not, where are the rational theists?
A rationalist should strive to have a given belief if and only if that belief is true. I want to be a theist if and only if theism is correct.
You're assuming that "no God" is the null hypothesis. Is there a good, rational reason for this? One could just as easily argue that you should be an atheist if and only if it's clear that atheism is correct. Without any empirical evidence either way, is it more likely that there is some sort of Deity or that there isn't?
Replies from: None, TheOtherDave, Desrtopa, JoshuaZ↑ comment by [deleted] · 2013-05-09T20:33:17.901Z · LW(p) · GW(p)
You're assuming that "no God" is the null hypothesis. Is there a good, rational reason for this? One could just as easily argue that you should be an atheist if and only if it's clear that atheism is correct. Without any empirical evidence either way, is it more likely that there is some sort of Deity or that there isn't?
IMO there's no such thing as a null hypothesis; epistemology doesn't work like that. The more coherent approach is bayesian inference, where we have a prior distribution and update that distribution on seeing evidence in a particular way.
If there were no empirical evidence either way, I'd lean towards there being an anthropomorphic god (I say this as a descriptive statement about the human prior, not normative).
The trouble is that once you start actually looking at evidence, nearly all anthropomorphic gods get eliminated very quickly, and in fact the whole anthropomorphism thing starts to look really questionable. The universe simply doesn't look like it's been touched by intelligence, and where it does, we can see that it was either us, or a stupid natural process that happens to optimize quite strongly (evolution).
So while "some sort of god" was initially quite likely, most particular gods get eliminated, and the remaining gods are just as specific and unlikely as they were at first. So while the "gods" subdistribution is getting smashed, naturalistic occamian induction is not getting smashed nearly as hard, and comes to dominate.
The only gods remaining compatible with the evidence are things like "someone ran all possible computer programs", which is functionally equivalent to metaphysical "naturalism", and gods of very specific forms with lots of complexity in the hypothesis that explains why they constructed the world to look exactly natural, and then aren't intervening yet.
Those complex specific gods only got a tiny slice of the god-exists pie at the beginning and cannot collect more evidence than the corresponding naturalistic explanation (because they predict the same), so they are pretty unlikely.
And then when you go to make predictions, what these gods might do gets sliced up even further such that the only useful predictive framework is the occamian naturalism thing.
There is of course the chance that there exists things "outside" the universe, and the major implication from that is that we might some day be able to break out and take over the metauniverse as well.
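To make the arithmetic of that argument concrete, here is a toy sketch in Python. The hypothesis count, the 50/50 prior split, and the likelihoods are all invented for illustration, not estimates of anything:

```python
# Toy Bayesian update: a large family of specific god-hypotheses loses almost
# all of its probability mass once the evidence rules most of its members out,
# while the naturalistic hypothesis keeps its share. All numbers are made up.

n_gods = 1000            # assumed number of specific god-hypotheses
prior_gods_total = 0.5   # assumed prior mass on "some sort of god"
prior_naturalism = 0.5   # assumed prior mass on naturalism

prior_each_god = prior_gods_total / n_gods

# Assume the evidence (a universe that looks untouched by design) has
# likelihood 0 under 999 of the god-hypotheses, likelihood 1 under naturalism,
# and likelihood 1 under the one god-hypothesis contrived to predict exactly
# a natural-looking world.
unnormalized = {
    "naturalism": 1.0 * prior_naturalism,
    "contrived god": 1.0 * prior_each_god,
    # the other 999 god-hypotheses contribute 0 and drop out
}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # naturalism ~0.999, contrived god ~0.001
```

The contrived hypothesis can never pull ahead: it predicts the same observations as naturalism, so it collects no likelihood advantage and its posterior stays pinned near its small prior.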
↑ comment by TheOtherDave · 2013-05-09T19:21:06.201Z · LW(p) · GW(p)
So is LW for people who think highly rationally, or for atheists who think highly rationally?
Neither, really. It's for people who are interested in epistemic and instrumental rationality.
There are a number of such folks here who identify as theists, though the majority don't.
Without any empirical evidence either way, is it more likely that there is some sort of Deity or that there isn't?
Can you clarify what you mean by "some sort of Deity"? It's difficult to have a coherent conversation about evidence for X without a shared understanding of what X is.
↑ comment by Desrtopa · 2013-05-09T17:14:21.894Z · LW(p) · GW(p)
You're assuming that "no God" is the null hypothesis. Is there a good, rational reason for this?
In general, it's not rational to posit that anything exists without evidence. Out of the set of all things that could be posited, most do not exist.
"Evidence" need not be direct observation. If you have a model which has shown good predictive power, which predicts a phenomenon you haven't observed yet, the model provides evidence for that phenomenon. But in general, people here would agree that if there isn't any evidence for a proposition, it probably isn't true.
ETA: see also Absence of evidence is evidence of absence.
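The linked point has a compact Bayesian form (a generic sketch, not tied to any particular claim). If E would be evidence for H, i.e. P(E | H) > P(E | ¬H), then failing to observe E is evidence against H:

\[
P(H \mid \neg E) \;=\; \frac{P(\neg E \mid H)}{P(\neg E)}\,P(H) \;<\; P(H)
\]

assuming 0 < P(H) < 1: here P(¬E | H) = 1 - P(E | H) is smaller than P(¬E | ¬H), and P(¬E) is a weighted average of the two, so the fraction is less than one. How much the probability drops depends on how strongly H predicted E, which is why a weak prediction gives only weak evidence of absence.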
Replies from: None↑ comment by [deleted] · 2013-05-09T17:38:06.075Z · LW(p) · GW(p)
Certainly. But why is "God" the proposition, and not "no God?"
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-09T18:19:12.244Z · LW(p) · GW(p)
Because nearly all things that could exist, don't. When you're in a state where you have no evidence for an entity's existence, then odds are that it doesn't exist.
Suppose that instead of asking about God, we ask "does the planet Hoth, as portrayed in the Star Wars movies, exist?" Absent any evidence that there really is such a planet, the answer is "almost certainly not."
If we reverse this, and ask "Does the planet Hoth, as portrayed in the Star Wars movies, not exist?" the answer is "almost certainly."
It doesn't matter how you specify the question, the informational content of the default answer stays the same.
Replies from: None↑ comment by [deleted] · 2013-05-10T11:13:40.482Z · LW(p) · GW(p)
I don't think that the Hoth argument applies here, because what we're looking for is not just some teapot in some random corner of the universe—it's a God actively involved in our universe. In other words, if God does exist, He's a very big part of our existence, unlike your teapot or Hoth.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T13:47:34.153Z · LW(p) · GW(p)
That's a salient difference if his involvement is providing us with evidence, but not if it isn't.
Suppose we posit that gravitational attraction is caused by invisible gravity elves, which pull masses towards each other. They'd be inextricably tied up in every part of our existence. But absent any evidence favoring the hypothesis, why should we suspect they're causing the phenomenon we observe as gravity? In order for it to make sense for us to suspect gravity elves, we need evidence to favor gravity elves over everything else that could be causing gravity.
Replies from: None, None↑ comment by [deleted] · 2013-05-10T14:28:58.391Z · LW(p) · GW(p)
That's a salient difference if his involvement is providing us with evidence, but not if it isn't.
I suppose it's fair to say that if our universe was created by a clockmaker God who didn't interfere with our world, then it wouldn't matter to us whether or not He existed. But since there's a lot of reason to think that God does interact with us humans (like, transcripts of His conversations with them), then it does matter.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T14:41:50.139Z · LW(p) · GW(p)
Well, I'm willing to discuss the evidence for and against that proposition. Naturally, I would not be an atheist if I thought the weight of evidence was in favor of an interventionist god existing.
Replies from: None↑ comment by [deleted] · 2013-05-10T15:18:20.593Z · LW(p) · GW(p)
Naturally. But there have been a lot of debates about which way the evidence points, and none of them seem to have convinced anyone.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T15:22:28.069Z · LW(p) · GW(p)
Some of them have certainly convinced people. I've convinced a number of people myself, and I've known plenty of other people who were convinced by debates with other people (or even more often, by observing debates between other people, since it's easier to change your mind when you're not locked in an adversarial debate mindset. This is why it's important not to fall into the trap of thinking of your debate partner as an opponent.)
A lot of religious debates are not productive, people tend to go into them very attached to their conclusions, but they're by no means uniformly fruitless.
Replies from: None↑ comment by [deleted] · 2013-05-10T14:25:02.481Z · LW(p) · GW(p)
We don't actually have any idea what causes gravity. Your elves may well be Higgs Bosons or something like that. (God Particles...)
So no, we don't have any evidence that "elves" of some kind cause gravity, or that anything at all does. And so the question is open—we don't suspect anything, but we don't particularly suspect nothing either.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T14:34:44.573Z · LW(p) · GW(p)
It's rather disingenuous to speak of the Higgs Boson as gravity elves though.
With gravity, we're not really in a state of no evidence, because as I said before, if you have an effective predictive model, then you have evidence for the things the model predicts. So we have evidence favoring things that could plausibly fit into our existing models over things that couldn't.
If we're discussing, for instance, what caused the universe to come into existence, and it turns out that there is a first cause, but it has nothing that could be described as thoughts or intentions, then it doesn't save the god hypothesis to say that something was there, because what was there doesn't resemble anything that it's useful to conceive of as god.
↑ comment by JoshuaZ · 2013-05-09T17:09:50.721Z · LW(p) · GW(p)
A rationalist should strive to have a given belief if and only if that belief is true. I want to be a theist if and only if theism is correct.
You're assuming that "no God" is the null hypothesis.
Not really. Bayesian reasoning doesn't have any notion of a null hypothesis. I could just as well have said "I want to be an atheist if and only if atheism is correct".
Without any empirical evidence either way, is it more likely that there is some sort of Deity or that there isn't?
One can talk about the prior probability of a given hypothesis, and that's a distinct issue which quickly gets very messy. In particular, it is extremely difficult to both a) establish what priors should look like and b) not get confused about whether one is taking for granted very basic evidence about the world around us (e.g. its existence). One argument, popular at least here, is that from an Occam's razor standpoint, most deity hypotheses are complicated and only appear simple due to psychological and linguistic issues. I'm not sure how much I buy that sort of argument. But again, it is worth emphasizing that one doesn't need control of the priors except at a very rough level.
It may help if you read more on the difference between Bayesian and frequentist approaches. The general approach of LW is primarily Bayesian, whereas notions like a "null hypothesis" are essentially frequentist.
Replies from: None↑ comment by [deleted] · 2013-05-09T17:20:11.596Z · LW(p) · GW(p)
You're right that prior probability gets very, very messy. It's a bit too abstract to actually be helpful to us.
So, then, all we can do is look at the evidence we do have. You're saying that the argument is one-sided; there is no evidence in favor of theism, at least no good evidence. I agree that there is a lot of bad evidence, and I'm still looking for good evidence. You've said you don't know of any. Thank you. That's what I wanted to know. In general I don't think it's healthy to believe the opposing viewpoint literally has no case.
Replies from: JoshuaZ, khafra↑ comment by JoshuaZ · 2013-05-09T17:28:14.481Z · LW(p) · GW(p)
In general I don't think it's healthy to believe the opposing viewpoint literally has no case.
Do you think that young earth creationists have no substantial case? What about 9/11 truthers? Belief in astrology? Belief that cancer is a fungus (no, I'm not making that one up)? What about anything you'll find here?
The problem is that some hypotheses are wrong, and will be wrong. There are always going to be a lot more wrong hypotheses than right ones. And in many of these cases, there are known cognitive biases which lead to the hypothesis type in question. It may help to again think about the difference between policy issues (shouldn't be one-sided), and factual questions (which once one understands most details, should be).
↑ comment by khafra · 2013-05-14T18:54:26.244Z · LW(p) · GW(p)
You're right that prior probability gets very, very messy. It's a bit too abstract to actually be helpful to us.
You cannot escape the necessity of dealing with priors, however messy they are.
So, then, all we can do is look at the evidence we do have.
The available evidence supports an infinite number of hypotheses. How do you decide which ones to consider? That is your prior, and however messy it may be, you have to live with it.
↑ comment by Desrtopa · 2013-05-09T16:07:56.261Z · LW(p) · GW(p)
My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking? If you say there aren't any, I won't believe you.
How legitimate does "most legitimate" have to be? If I thought there were any criticisms sufficiently legitimate to seriously reconsider my viewpoints, I would have changed them already. To the extent that my religious beliefs are different than they were, say, fifteen years ago, it's because I spent a long time seeking out arguments, and if I found any persuasive, I modified my beliefs accordingly. But I reached a point where I stopped finding novel arguments for theism long before I stopped looking, so if there are any arguments for theism that I would find compelling, they see extremely little circulation.
The arguments for "theism" which I see the least reason to reject are ones which don't account for anything resembling what we conventionally recognize as theism, let alone religion, so I'm not sure those would count according to the criteria you have in mind.
Replies from: None↑ comment by [deleted] · 2013-05-09T20:02:41.656Z · LW(p) · GW(p)
I'd be happy to hear what you've got. I can't just ask you to share all of your life-changing experiences, obviously. Having looked for new arguments and not found any good ones is a great position, I think, because then you can be pretty sure you're right. I don't know if I could ever convince myself there are no new arguments, though.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-09T21:06:12.377Z · LW(p) · GW(p)
I'm certainly not convinced that there are no new arguments, but if there were any good arguments, I would expect them to have more currency.
If you want to explain what good arguments you think there are, I'd certainly be willing to listen. I don't want to foist all the work here onto you, but honestly, having you just cover what you think are the good arguments would be simpler than me covering all the arguments I can think of, none of which I actually endorse, without knowing which, if any, you subscribe to.
Replies from: None↑ comment by [deleted] · 2013-05-10T10:34:33.331Z · LW(p) · GW(p)
I'm sorry, I can't help you with that. I'm sure that you've done much more research on this than I have. I'm looking for decent arguments because I don't believe all these people who say there aren't any.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T14:02:39.524Z · LW(p) · GW(p)
Well, what do you mean by decent? Things I accept as having a significant weight of evidence, or things I can understand how people would see them as convincing, even if I see reasons to reject them myself?
In the latter sense, it makes sense to assume that there must be good arguments, because if there weren't arguments that people found convincing, then so much of the world would most likely not be convinced. But in the former sense, it doesn't make sense to assume that there must be good arguments in general, because for practical purposes it means you'd be assuming the conclusion that a god is real, and it makes even less sense to assume that I specifically would have any, because if I did, I wouldn't disbelieve in the proposition that there is a god.
One of the things that those of us who're seriously trying to be rational share is that we try to conduct ourselves so that when the weight of evidence favors a particular conclusion, we don't just say "well, that's a good point, and I acknowledge it," we adopt that conclusion. Our positions should represent, not defy, the evidence available to us.
Replies from: None↑ comment by [deleted] · 2013-05-10T14:50:28.861Z · LW(p) · GW(p)
This is largely a problem of the nature of each side's evidence. Most of the evidence in favor of God is quickly dismissed by those who think they're more rational than the rest of humanity, and the biggest piece of evidence I'm being given against God is that there is no evidence for Him (at least none that you guys accept). Absence of evidence is at best a passive, weak argument (which common wisdom would generally reject).
And no, I'm not assuming that God is real, I'm simply assuming that there's a non-negligible chance of it. Is that too much to ask?
Replies from: TheOtherDave, Desrtopa↑ comment by TheOtherDave · 2013-05-10T15:44:22.390Z · LW(p) · GW(p)
And the same question arises that has been raised several times: how ought I address the evidence from which many Orthodox Jews conclude that Moses was the last true Prophet of YHWH?
From which many Muslims conclude that Mahomet was the last true Prophet of YHWH?
From which many Christians conclude that Jesus was the last true Prophet of YHWH?
From which millions of followers of non-Abrahamic religions conclude that YHWH is not the most important God out there in the first place?
Is it not reasonable to address the evidence from which Mormons conclude that Lehi, or Kumenohni, or Smith, or Monson, were/are Prophets of YHWH the same way, regardless of what tradition I was raised in?
If skepticism about Mormon religious claims is not justified, then it seems to follow naturally that skepticism about those other religious claims is not justified either.
Replies from: None↑ comment by [deleted] · 2013-05-10T17:00:42.579Z · LW(p) · GW(p)
last true Prophet of YHWH?
It's important to note that in fact, most Muslims and many Christians (I don't know Judaism as well) believe that Moses, Mohammed, and Jesus were all true prophets. They differ in a few details, but the general message is the same.
I think it is definitely reasonable to address all of this evidence. One of Thomas Monson's predecessors expressly stated that he believed God truly did appear to Mohammed.
I never said I was necessarily skeptical of claims by Jews or Muslims. Some of them must have been brain glitches, just as some claims by Mormons probably are too. But I have no problem accepting that Jews, Muslims, and Christians (maybe even atheists) can all receive divine revelation.
As I said before, it's impractical to try to stretch this logic to argue in favor of any one religion. I'm talking about the existence of God in general.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T20:19:09.684Z · LW(p) · GW(p)
FWIW, the form of Judaism I was raised in entails the assertion that Jesus Christ was not the Messiah, so is logically incompatible with most forms of Christianity.
That aside, though, I'm content to restrict our discussion to non-sectarian claims; thanks for clarifying that. I've tried to formalize this a little more in a different thread; probably best to let this thread drop here.
Replies from: None↑ comment by [deleted] · 2013-05-10T20:49:43.705Z · LW(p) · GW(p)
Judaism
You're right, silly me, I honestly should have remembered that. Judaism seems less...open...in that way. But I still think that details of the nature of God aside, the general message of each of these religions, namely "la ilaha ila allah," is the same. ("There is no God but God," that is. It's much more elegant in Arabic.)
This whole mess is certainly in need of some threads being dropped or relocated. Good idea—where is it?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T21:24:45.397Z · LW(p) · GW(p)
I refer to this thread.
Replies from: None↑ comment by Desrtopa · 2013-05-10T15:08:55.768Z · LW(p) · GW(p)
Well, if we're mistaken in dismissing the evidence theists raise in support of the existence of gods, then of course, with the weight of evidence in favor of it, it's reasonable to assign a non-negligible probability to it.
The important question here is whether the people dismissing the purported evidence in favor are actually correct.
Suppose we're discussing the question of how old the earth is. One camp claims the weight of evidence favors the world being about 4.5 billion years old, another claims the weight of evidence favors it being less than 12,000 years old. Each camp has arguments they raise in favor of this point, and the other camp has reasons for rejecting the other camp's claims.
At least one of these camps must be wrong about the weight of evidence favoring their position. There's nothing wrong with rejecting purported evidence which doesn't support what its advocates claim it supports. Scientists do this amongst each other all the time, picking apart whether the evidence of their experiments really supports the authors' conclusions or not. You have to do that sort of thing effectively to get science done.
As far as I've seen, you haven't yet asked why we reject what you consider to be evidence in favor of an interventionist deity. Why not do that? Either we're right in rejecting it or we're not. You can try to find out which.
Replies from: None↑ comment by [deleted] · 2013-05-10T15:53:54.625Z · LW(p) · GW(p)
As long as we're not sure of the truth (you may be, but our society in general is not), it's silly to go around saying who's "correct" in accepting or rejecting a particular piece of evidence.
I believe I understand why you reject all evidence in favor of God. I know a lot of atheists, and I've read a lot of rationalism. To simplify: the books are all made up and the modern revelation is all brain glitches. And you believe that according to your rationalist way of thinking, this is the only "correct" conclusion to draw.
I think that you're fully justified in rejecting this evidence based on the way you look at the situation. I look at things differently, and I accept some of the evidence. And thus we disagree. What I'm wondering now is whether you think it's necessarily "wrong" to accept such evidence.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T16:31:25.805Z · LW(p) · GW(p)
As long as we're not sure of the truth (you may be, but our society in general is not), it's silly to go around saying who's "correct" in accepting or rejecting a particular piece of evidence.
Suppose a researcher performs an experiment, and from its results, concludes that lemons cure cancer. Another scientist analyzes their procedure, and points out "Your methodology contains several flaws, and when I perform experiments with those same flaws, I can show with the same level of significance that ham, beeswax, sugarpill, and anything else I've tested, also cures cancer. But if I correct those flaws in the methodology, I stop getting results that indicate that any of these things cure cancer."
Do you continue to accept the experiment as evidence that lemons cure cancer?
I think that you're fully justified in rejecting this evidence based on the way you look at the situation. I look at things differently, and I accept some of the evidence. And thus we disagree. What I'm wondering now is whether you think it's necessarily "wrong" to accept such evidence.
It's hard to get around this without seeming arrogant or condescending, but yes, I do.
It's a major oversimplification to say that my position is simply "the books are all made up and the modern revelation is all brain glitches," but I do believe that every standard of evidence I've encountered in support of any religion (and I've encountered a lot) can be re-applied in other situations where the results are easier to check, and be shown to be ineffective in producing right answers.
If a person does science poorly, then the poorness of their research isn't a matter of opinion, it's a fact about how effectively their experiments allow them to draw true conclusions about reality.
Replies from: None↑ comment by [deleted] · 2013-05-10T17:24:41.824Z · LW(p) · GW(p)
Do you continue to accept the experiment as evidence that lemons cure cancer?
No, I don't, and here's why: in the context of clinical trials, there are established agreements about right and wrong methodology.
But if I correct those flaws in the methodology, I stop getting results that indicate that any of these things cure cancer.
What does this correspond to in your analogy? What this part does is show that the scientist questioning the methodology is correct, and the original experimenter is wrong. However I don't see any objective evidence that your "methodology" is better than a methodology that allows for God.
However, if you're trying to mean that your "correct" methodology is science in general, and that accepting evidence of God is inherently unscientific...
yes, I do.
yes, that's what you're saying. OK. That's largely what I was wondering—in your mind, there's no possible way to reconcile religion and rationality. Because the only evidence for God was found using a bad methodology, namely, personal experience.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-10T17:51:17.924Z · LW(p) · GW(p)
What does this correspond to in your analogy? What this part does is show that the scientist questioning the methodology is correct, and the original experimenter is wrong. However I don't see any objective evidence that your "methodology" is better than a methodology that allows for God.
However, if you're trying to mean that your "correct" methodology is science in general, and that accepting evidence of God is inherently unscientific...
If you want to raise specific points of evidence for god, I can explain how the analogy relates, unless you have better evidence which I haven't heard before.
yes, that's what you're saying. OK. That's largely what I was wondering—in your mind, there's no possible way to reconcile religion and rationality. Because the only evidence for God was found using a bad methodology, namely, personal experience.
"Personal experience" as a general term does not describe a set of methodologies which are universally bad. In my experience, the set of methodologies which have been used to produce evidence for god are all bad, but it's not because they're personal experiences. Besides which, not all proposed evidence for god comes in the form of personal experience. I didn't spend years studying religion just so I could brush it all away by shoving it all into a single category I could dismiss out of hand, or so that I could argue persuasively that it wasn't true.
I think it's a mistake of rationality to try to reconcile religion and rationality, in the way that it seems to me that you're doing, because in general you don't want to try to reconcile rationality with any specific conclusion. You just follow the evidence to find what conclusion it supports.
Does the available evidence support the conclusion that the earth is 4.5 billion years old? It either does or it doesn't, and if it doesn't, then the conclusion probably isn't true. Does the available evidence support invisible gravity elves? A link between HIV and AIDS? In each of these cases, the answer is simply yes or no.
Sometimes we make mistakes in our judgment of evidence. We don't expect any human to be perfect at it. We have disagreements here about factual matters, and we acknowledge that this occurs because some or all of us are making mistakes as fallible human beings. But most of us agree on the matter of religion because we think the evidence is clear-cut enough to lead us to the same conclusion.
Replies from: None↑ comment by [deleted] · 2013-05-10T20:55:36.698Z · LW(p) · GW(p)
most of us agree on the matter of religion because we think the evidence is clear-cut enough to lead us to the same conclusion.
Right, OK.
But one thing:
In each of these cases, the answer is simply yes or no.
Science is not nearly so black-and-white. If it were simply a matter of running an experiment with "good methodology," it would be easy. But I know how academia works. It's messy.
For instance, does the available evidence support the conclusion that this new thing causes cancer? Yes or no, please. Because the scientists don't agree, and it's not a simple matter of figuring out which side is being irrational.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-11T01:39:47.582Z · LW(p) · GW(p)
For instance, does the available evidence support the conclusion that this new thing causes cancer? Yes or no, please.
Which new thing?
As I said, humans are fallible, we have disagreements about factual matters. If we were all perfect judges of evidence, then all scientists with access to the same information would agree on how likely it is that some thing causes cancer. Sometimes making judgments of evidence is hard, sometimes it's easier. That doesn't mean that there isn't a right answer in each case.
Replies from: None↑ comment by [deleted] · 2013-05-11T08:11:12.396Z · LW(p) · GW(p)
Which new thing?
Any one of many things whose safety is disputed. The point is that it's not as simple as right and wrong in science.
Sometimes making judgments of evidence is hard
That's what I mean. Even with the same evidence available, scientists don't all come to the same conclusion.
Does the available evidence support the conclusion that the earth is 4.5 billion years old? It either does or it doesn't
And so I think that in the case of the age of the earth it clearly does, but in many cases we just can't tell.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-11T13:09:07.222Z · LW(p) · GW(p)
Right. It's not that there isn't always a yes or no answer, it's just that it's sometimes difficult for us to work out what the correct judgment is.
It's possible that religion is such a case, but most of us here agree that the state of the evidence there is easier to judge than, for instance, the latest carcinogen suspect.
↑ comment by TheOtherDave · 2013-05-08T23:39:46.708Z · LW(p) · GW(p)
My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking?
That's a complicated question in general, because "our own way of thinking" is not a unary thing. We spend a lot of time disagreeing with each other, and we talk about a lot of different things.
But if you specifically mean atheism in its "it is best to reason and behave as though there are no gods, because the alternative hypotheses don't have enough evidence to justify their consideration" formulation, I think the most legitimate objection is that it may turn out to be true that, for some religious traditions -- maybe even for most religious traditions -- being socially and psychologically invested in that tradition gets me more of what I want than not being invested in it, even if the traditions themselves include epistemically unjustifiable states (such as the belief that an entity exists that both created the universe and prefers that I not eat pork) or false claims about the world (as they most likely do, especially if this turns out to be true for religious traditions that disagree with one another about those claims).
I don't know if that's true, but it's plausible, and if it is true it's important. (Not least of which because it demonstrates that those of us who are committed to a non-religious tradition need to do more work at improving the pragmatic value of our social structures.)
Replies from: None↑ comment by [deleted] · 2013-05-09T13:56:15.523Z · LW(p) · GW(p)
As for atheism, I don't mean those that think religion is good for us and we ought to believe it whether or not it's true. I meant rational thinkers who actually believe God realistically could exist. It's definitely interesting to think about trying to convince yourself to believe in God, or just act that way, but is it possible to actually believe with a straight face?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-09T16:21:55.995Z · LW(p) · GW(p)
Well, you asked for the most legitimate criticisms of rejecting religious faith.
Religious faith is not a rational epistemology; we don't arrive at faith by analyzing evidence in an unbiased way.
I can make a pragmatic argument for embracing faith anyway, because rational epistemology isn't the only important thing in the world nor necessarily the most important (although it's what this community is about).
But if you further constrain the request to seeking legitimate arguments for treating religious faith (either in general, or that of one particular denomination) as a rational epistemology, then I can't help you. Analyzing observed evidence in an unbiased way simply doesn't support faith in YHWH as worshiped by 20th-century Jews (which is the religious faith I rejected in my youth), and I know of no legitimate epistemological criticism that would conclude that it does, nor of any other denomination that doesn't have the same difficulty.
Now, if you want to broaden your search to include not only counterarguments against rejecting religious faith of specific denominations, but also counterarguments against rejecting some more amorphous proto-religious belief like "there exist mega-powerful entities in the universe capable of feats I can barely conceive of" (without any specific further claims like "and the greatest one of them all divided the Red Sea to free our ancestors from slavery in Egypt" or "and the greatest one of them all wrote this book so humanity would know how to behave" or even "and they pay attention to and direct human activity") then I'd say the most legitimate counterargument is Copernican: I start out with low confidence that my species is the most powerful entity in the universe, and while the lack of observed evidence of such mega-powerful entities necessarily raises that confidence, it might not legitimately raise it enough to accept.
But we've now wandered pretty far afield from "my way of thinking," as I'm perfectly comfortable positing the existence of mega-powerful entities in the universe capable of feats I can barely conceive of.
Replies from: None↑ comment by [deleted] · 2013-05-09T16:49:42.736Z · LW(p) · GW(p)
if you further constrain the request to seeking legitimate arguments for treating religious faith (either in general, or that of one particular denomination) as a rational epistemology, then I can't help you.
Thank you for answering my question. If I read it right you're saying "No, it's not possible to reconcile religion and rationality, or at least I can't refer you to any sane person who tried."
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-09T18:26:03.394Z · LW(p) · GW(p)
If I understand what you're using "religion" and "rationality" to mean, then I would agree with the first part. (In particular, I understand you to be referring exclusively to epistemic rationality.)
As for the second part, there are no doubt millions of sane people who tried. Hell, I've tried it myself. The difficulty is not in finding one, but rather in finding one who provides you with what you're looking for.
↑ comment by Qiaochu_Yuan · 2013-05-09T04:05:47.178Z · LW(p) · GW(p)
My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking?
What do you mean by "your own way of thinking" here? I can think of the following possible interpretations:
- The way I personally think about things
- The way this community thinks about things
- Atheism and skepticism in general
↑ comment by [deleted] · 2013-05-09T14:16:04.665Z · LW(p) · GW(p)
Any of these, really. It takes incredible strength to recognize flaws in your entire way of thinking, but if anyone can do it, the Rationalists ought to be able to.
What I'd really love is a link to someone smart saying "This is why I think the Less Wrong people are all misled, and here are good reasons why." But that's probably too much to expect, even around here.
Replies from: Qiaochu_Yuan, wedrifid↑ comment by Qiaochu_Yuan · 2013-05-10T05:01:44.666Z · LW(p) · GW(p)
Okay. This may not be the kind of thing you had in mind, but the way I personally think about things:
- is probably not focused enough on emotions. I'm not very good at dealing with emotions, either my own or other people's, and I imagine that someone who was better would have very different thoughts about how to deal with people both on the small scale (e.g. interpersonal relationships) and on the large scale (e.g. politics).
- may overestimate the value of individuals (e.g. in their capacity to affect the world) relative to organizations.
The way this community thinks about things:
- is biased too strongly in directions that Eliezer finds interesting, which I suppose is somewhat unavoidable but unfortunate in a few respects. For example, Eliezer doesn't seem to think that computational complexity is relevant to friendly AI and I think this is a strong claim.
- is biased towards epistemic rationality when I think it should be more focused on instrumental rationality. This is a corollary of the first bullet point: most of the Sequences are about epistemic rationality.
- is biased towards what I'll call "cool ideas," e.g. cryonics or the many-worlds interpretation of quantum mechanics. I've been meaning to write a post about this.
- is hampered by a lack of demographic diversity that is probably bad for cognitive diversity (my impression is that LW is overwhelmingly male, white, 18-24 years old, etc.).
Atheism and skepticism in general:
- is likely to be another form of belief as attire in practice. As in, I think many people who identify very strongly as atheists or skeptics are doing it to signal tribal affiliation more than anything else.
It takes incredible strength to recognize flaws in your entire way of thinking
Eh, does it? I think it just requires a cultural meme about criticism being a good thing. LW has this, maybe too much of this, and my impression is that so does Judaism (based on e.g. avoiding your belief's real weak points). This is some evidence that you are thinking reasonably but it isn't extremely strong evidence.
Replies from: Kawoomba, Nornagest, None↑ comment by Kawoomba · 2013-05-13T19:44:59.521Z · LW(p) · GW(p)
For example, Eliezer doesn't seem to think that computational complexity is relevant to friendly AI
Could you elaborate?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-05-13T22:29:23.219Z · LW(p) · GW(p)
On why Eliezer doesn't seem to think that or why I think that this is a strong claim? We had a brief discussion about this here.
↑ comment by Nornagest · 2013-05-10T07:31:52.671Z · LW(p) · GW(p)
I think it just requires a cultural meme about criticism being a good thing.
That usually gets you a culture of inconsequential criticism, where you can be as loudly contrarian as you want as long as you don't challenge any of the central shibboleths. This is basically what Eliezer was describing in "Real Weak Points", but it shows up in a lot of places; many branches of the modern social sciences work that way, for example. It gets particularly toxic when you mix it up with a cult of personality and the criticism starts being all about how you or others are failing to live up to the Great Founder's sacrosanct ideals.
I'm starting to think it might not be possible to advocate for a coherent culture that's open to changing identity-level facts about itself; you can do it by throwing out self-consistency, but that's a cure that's arguably worse than the proverbial disease. I don't think strength of will is what's missing, though, if anything is.
Replies from: None↑ comment by [deleted] · 2013-05-10T11:34:10.916Z · LW(p) · GW(p)
Yes. And that's what I'm unrealistically looking for—not just disagreement, but fundamental disagreement. And by fundamental I don't mean the nature of the Singularity, as central as that is to some. I mean things like "rational thought is better than irrational thought" or "religion is not consistent with rational thought." Even if they're not spoken, they're important and they're there, which means they ought to be up for debate. I mean "ought to" in the sense that the very best, most intellectually open society imaginable would have already debated these and come to a clear conclusion, but would be willing to debate them again at any time if there was reason to do so.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T15:07:35.811Z · LW(p) · GW(p)
What, on your view, constitutes a reason to debate issues about which a community has come to a conclusion?
Relatedly, on your view, can the question of whether a reason to debate an issue actually exists or not ever actually be settled? That is, shouldn't the very best, most intellectually open society imaginable on your account continue to debate everything, no matter how settled it seems, because just because none of its members can currently think of a reason to do so is insufficient grounds not to?
↑ comment by [deleted] · 2013-05-10T15:33:17.496Z · LW(p) · GW(p)
I think it's safe to end a debate when it's clear to outside observers (these are important) that it's not going anywhere new. An optimal society listens to outsiders as well.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-10T15:46:47.067Z · LW(p) · GW(p)
OK. Thanks for answering my question.
↑ comment by [deleted] · 2013-05-10T11:45:38.710Z · LW(p) · GW(p)
These are good, thank you.
About epistemic vs. instrumental rationality, though: I had never heard those terms, but it seems like a pretty simple distinction about what rationality is to be used for. The way I understand it, Less Wrong is quite instrumentally focused. There are many posts as well as sequences (and all of HPMOR) about how to apply rationality to your everyday life, in addition to those dealing only with technical probabilities (like Pascal's Mugging—not realistic).
Personally I'm more interested in the epistemic side of things and not a fan of assurances that these sequences will substantially improve your relationships or anything like that. But that's just me.
↑ comment by wedrifid · 2013-05-09T14:19:15.145Z · LW(p) · GW(p)
What I'd really love is a link to someone smart saying "This is why I think the Less Wrong people are all misled, and here are good reasons why." But that's probably too much to expect, even around here.
There are people here who say that kind of thing all the time... whether they are smart and the reasons are actually good is somewhat less certain.
Replies from: None↑ comment by [deleted] · 2013-05-09T14:26:53.471Z · LW(p) · GW(p)
Right, that's the problem. There are plenty of sites saying why LW is a cult, just as there are plenty of ignorant religion-bashers. I've found many intelligent atheists, and I'm sure that there are rational intellectuals out there who disagree with LW. But where are they?
Replies from: wedrifid, Intrism↑ comment by wedrifid · 2013-05-09T14:43:31.846Z · LW(p) · GW(p)
I've found many intelligent atheists, and I'm sure that there are rational intellectuals out there who disagree with LW. But where are they?
If you mean rational intellectuals who are theists and disagree with LW I cannot help you. Finding those who disagree with LW on core issues is less difficult. Robin Hanson for example. For an intelligent individual well informed of LW culture who advocates theism you could perhaps consider Will Newsome. Although he has, shall we say, 'become more eccentric than he once was' so I'm not sure if that'll satisfy your interest.
Replies from: None↑ comment by Intrism · 2013-05-09T16:09:40.023Z · LW(p) · GW(p)
I've found many intelligent atheists, and I'm sure that there are rational intellectuals out there who disagree with LW. But where are they?
As far as I know, most criticism of LW focuses on its taking certain strange problems seriously, not on atheism. LW has an unusual focus on Pascal-like problems, on artificial intelligence, on acausal trade, on cryonics and death in general, and on Newcomb's Problem. Many of these focuses result in beliefs that other rationalist communities consider "strange." There is also some criticism of Eliezer's position on quantum mechanics, but I'm not familiar enough with that issue to comment on it.
comment by Pablo (Pablo_Stafforini) · 2013-05-03T16:40:13.189Z · LW(p) · GW(p)
Together with Vallinder, I'm working on a paper on wild animal suffering. We decided to poll some experts on animal perception about their views on the likelihood that various types of animals can suffer. It now occurs to me that it might be interesting to compare their responses with those of the LW community. So, if you'd like to participate, click on one of the links below. The survey consists of only five questions and completing it shouldn't take more than a minute.
Click here if your year of birth is an even number
Click here if your year of birth is an odd number
(The two surveys are identical, except for the order in which the questions are presented. Please only take one of the surveys. Thanks!)
Replies from: fubarobfusco, Pablo_Stafforini, Prismattic, None↑ comment by fubarobfusco · 2013-05-03T20:09:08.309Z · LW(p) · GW(p)
"Foos can suffer" could mean "all foos can suffer", "the prototypical foo can suffer", or "there exists a foo that can suffer".
You might clarify whether "mammals" is meant to include humans and other primates.
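For what it's worth, the ambiguity can be made explicit in quantifier terms (a rough sketch; "Foo" and "CanSuffer" are placeholder predicates, not anything taken from the survey):

$$\forall x\,\big(\mathrm{Foo}(x) \rightarrow \mathrm{CanSuffer}(x)\big) \quad\text{vs.}\quad \exists x\,\big(\mathrm{Foo}(x) \wedge \mathrm{CanSuffer}(x)\big),$$

with the "prototypical foo can suffer" reading being a generic claim about the typical member, which neither quantifier captures cleanly.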
Replies from: Pablo_Stafforini↑ comment by Pablo (Pablo_Stafforini) · 2013-05-03T21:11:23.831Z · LW(p) · GW(p)
Thanks. In the cover email we sent to the researchers, we did make it clear that the survey was about suffering in non-human animals, so the statement about mammals should be read as excluding members of our species (but not other primates). As for the alternative interpretations of 'x can suffer', we thought the natural interpretation was 'At least some species in this group can suffer', but I agree that we could have phrased the sentence less ambiguously.
↑ comment by Pablo (Pablo_Stafforini) · 2013-06-05T22:48:52.197Z · LW(p) · GW(p)
Thanks to everyone who participated. The survey is now closed, and the results are here. There is one tab for LessWrong respondents and one tab for expert respondents.
↑ comment by Prismattic · 2013-05-04T03:32:46.005Z · LW(p) · GW(p)
Nitpicking: The set of mammals includes humans.
comment by jooyous · 2013-05-02T20:50:50.138Z · LW(p) · GW(p)
I have a question about linking sequence posts in comment bodies! I used to think it was a nice, helpful thing to do, such as citing your sources and including a convenient reference. But then it struck me that it might come off as patronizing to people that are really familiar with the sequences. Oops. Any pointers for striking a good balance?
Replies from: orthonormal, DaFranker, PhilGoetz, Kawoomba, TimS, TheOtherDave↑ comment by orthonormal · 2013-05-03T04:22:00.448Z · LW(p) · GW(p)
Linking old posts helps all of the new readers who are following the conversation; this is probably more important than any effects on the person you're directly responding to.
↑ comment by DaFranker · 2013-05-02T21:04:20.852Z · LW(p) · GW(p)
Always err on the side of littering your comment with extra links. IME, that's more practical and helpful, and I've never personally felt irked when reading posts or comments with lots of links to basic Sequence material.
In most cases, I've found that seeing the page again actually helps me remember the key points, and helps most arguments flow more smoothly.
↑ comment by TimS · 2013-05-03T14:55:32.653Z · LW(p) · GW(p)
The only failure mode to avoid is implicitly or explicitly stating "Because you haven't read X, your input is not worth considering."
There was a time when that was a common failure mode on LW ("Go read the Sequences, then we'll talk"). Less so now.
↑ comment by TheOtherDave · 2013-05-03T16:09:38.063Z · LW(p) · GW(p)
I generally take a moment to think about how relevant the Sequence post is. Most of the time, I conclude that <10% of the post is actually relevant to my point, so I don't bother linking, as it seems like it enormously diffuses what I'm trying to express. (I don't link nominally relevant wikipedia articles for similar reasons.)
comment by Qiaochu_Yuan · 2013-05-05T07:56:51.042Z · LW(p) · GW(p)
Anyone here have experience hiring people on sites like Mechanical Turk, oDesk, TaskRabbit, or Fiverr? What kind of stuff did you hire them to do, and how good were they at doing it? It seems like these services could be potentially quite valuable so I'd like to get an idea of what it's possible to do with them.
Replies from: lukeprog, niceguyanon, Benquo, Matt_Simpson↑ comment by niceguyanon · 2013-05-06T18:23:06.553Z · LW(p) · GW(p)
I have used Fiverr to hire a professional voice actor to read short messages. For small scripting jobs or Photoshop work, I have always found reddit's r/forhire subreddit useful.
↑ comment by Benquo · 2015-02-15T13:45:04.698Z · LW(p) · GW(p)
I've hired TaskRabbits for the following tasks, with the following levels of success:
Drive me from DC to Baltimore and back the next day - perfect & cheap
Assemble a Superintelligence owl costume and deliver it to me on the same day, with just a picture and a suggestion for the method - perfect
Pick up laundry from my back porch, have it washed, dried, folded, and return it in boxes - perfect
Make me an Anki flashcard deck for some faces and names from a business's Our Team page - perfect
Data entry - Good, though slow
Find me a good haircut place and style - meh
Find Toastmasters clubs nearby, schedule times for me to sit in on a meeting - okay, did most of it but the calendar invitations they sent me were in the wrong time zone so the times were off.
Find me a Rolfer - tried, but people didn't return their calls. However, I had immediate success when I made calls myself, so I have to wonder how hard they tried.
Assemble furniture, put privacy window film on windows - furniture ok, windows no
Pack and mail a bunch of books - nope. Took books, brought them back. Cost me time.
↑ comment by Matt_Simpson · 2013-05-10T20:17:40.779Z · LW(p) · GW(p)
Experimental economists use Mechanical Turk sometimes. At least, we were encouraged to use it in the experimental economics class I just took.
comment by FiftyTwo · 2013-05-02T14:00:39.980Z · LW(p) · GW(p)
As a stereotypical twenty-something recent graduate I am lacking in any particular career direction. I've been considering taking various psychometric or career aptitude tests, but have found it difficult to find unbiased reports on their usefulness. Does anyone have any experience or evidence on the subject?
Replies from: RomeoStevens↑ comment by RomeoStevens · 2013-05-02T19:13:28.561Z · LW(p) · GW(p)
Imagine your ideal workplace, try to quantify what makes it ideal, and then work backwards. Or just try to make the most money you can since you're young and probably have a high stress tolerance given the lack of stressors elsewhere (children, marriage, housing, health, etc.)
comment by Shmi (shminux) · 2013-05-09T22:25:12.140Z · LW(p) · GW(p)
I have looked through this thread, bravely started by ibidem, and I have noticed what seems like a failure mode by all sides. A religious person does not just believe in God, s/he alieves in God, too, and logical arguments are rarely the best way to get through to the relevant alieving circuit in the brain. Oh, they work eventually, given enough persistence and cooperation, but only indirectly. If the alief remains unacknowledged, we tend to come up with logical counterarguments which are not "true rejections". As long as the alief is there, the logic will bounce off with marginal damage, if any. I wonder if there is a more effective level of discourse.
Just to refresh, here is the definition:
alief is associative, action-generating, affect-laden, arational, automatic, agnostic with respect to its content, shared with animals, and developmentally and conceptually antecedent to other cognitive attitudes
from the original paper, and some examples:
So, for example, subjects are reluctant to drink from a glass of juice in which a completely sterilized dead cockroach has been stirred, hesitant to wear a laundered shirt that has been previously worn by someone they dislike, and loath to eat soup from a brand-new bedpan. They are disinclined to put their mouths on a piece of newly-purchased vomit-shaped rubber (though perfectly willing to do so with sink stopper of similar size and material), averse to eating fudge that has been formed into the shape of dog feces, and far less accurate in throwing darts at pictures of faces of people they like than at neutral faces
I am guessing that part of any religious belief is the alief in a just universe.
Replies from: Intrism, None↑ comment by Intrism · 2013-05-10T19:58:28.795Z · LW(p) · GW(p)
A religious person does not just believe in God, s/he alieves in God, too, and logical arguments are rarely the best way to get through to the relevant alieving circuit in the brain.
If I were talking to a religious person elsewhere, that would make sense. But, this is LessWrong, and the respectful way to have this discussion here is to depend upon logic and rationalism. Anything else, and in my opinion we'd be talking down to him.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-05-10T20:25:49.411Z · LW(p) · GW(p)
Sorry, we don't live in a should-universe, either. If your goal is to influence a religious person's perception of his/her faith, you do what it takes to get through, not complain that the other party is not playing by some real or imaginary rules. But hey, feel free to keep talking about logic, rationalism and respect. That's what two-boxers do.
Replies from: rocurley↑ comment by rocurley · 2013-05-24T22:17:32.228Z · LW(p) · GW(p)
That's what two-boxers do.
Two-boxers don't only do wrong things, and it's not obvious this is actually related to two-boxing.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-05-24T22:50:06.832Z · LW(p) · GW(p)
Two-boxers live in a should-universe, given how they insist on following "logic" over evidence.
↑ comment by [deleted] · 2013-05-10T10:05:43.810Z · LW(p) · GW(p)
Interesting. I'd never heard of alief but it's a good way of explaining things. This is partly why I said (somewhere) that I don't think science will ever be able to fully prove this issue one way or the other—religion or lack thereof is necessarily a matter of alief as well as belief, and it's impossible in practice to look at this issue entirely rationally.
(I'm sure it's much too late now to claim I never intended to start a debate about religion. Now that there are about fifteen people all arguing against me I don't think I can keep it up, but I sure was asking for it.)
Replies from: bartimaeus↑ comment by bartimaeus · 2013-05-10T17:39:45.816Z · LW(p) · GW(p)
Remember, your post has (at the time of this comment at least) a score of 4. Subjects that are "taboo" on LessWrong are taboo because people tend to discuss them badly. You asked some legitimate questions, and some people provided you with good responses.
If you're willing to consider changing your mind, the next step would be to read the sequences. A lot of what you mention is answered there, such as:
- Absence of evidence is evidence of absence
- The Fallacy of Grey (specifically, when you mention that because we don't know the whole truth, we can't objectively evaluate evidence)
- 0 and 1 are not probabilities: this one actually supports what you were saying, where you were entirely right that you can't assign a probability of 0 to the existence of God. But you still don't know if this probability is 0.9, 0.1, 0.01 or 0.0000001. See http://lesswrong.com/lw/ml/but_theres_still_a_chance_right/
Replies from: None↑ comment by [deleted] · 2013-05-10T19:15:44.876Z · LW(p) · GW(p)
I've read several of the sequences, and I'm fairly familiar with this community's way of thinking.
Everyone is referring me to Absence of Evidence; I think that it's a weak argument in the first place, but it also seems to be the only one a lot of people have.
Replies from: Desrtopa, bartimaeus↑ comment by Desrtopa · 2013-05-14T14:55:13.691Z · LW(p) · GW(p)
Everyone is referring me to Absence of Evidence; I think that it's a weak argument in the first place
Do you think it's a weak argument in general, or just a weak argument with respect to religion in particular?
If the former, it would certainly help if you could explain that. If the latter, do you think that religion is a special case with respect to need for evidence, or are you simply arguing that there is evidence available to us? And if the last one, why not discuss that evidence?
Replies from: None↑ comment by [deleted] · 2013-05-14T19:01:40.293Z · LW(p) · GW(p)
I think it's weak when it's essentially the only argument a person has against religion.
Replies from: drnickbone, shminux, Desrtopa↑ comment by drnickbone · 2013-05-14T21:55:17.541Z · LW(p) · GW(p)
Hardly anyone treats it as the only argument against religion, but for many people here it is a fully sufficient argument. You just need to apply the principle of parsimony (Occam's razor) correctly.
Now a very weak way of applying it is as follows: "In the absence of evidence of a deity, a hypothesis of no god is simpler/more parsimonious than the hypothesis that there is a god. So there is no god". If that's what you think we're arguing, I can understand why you think it weak.
However, a much stronger formulation looks like this. "If there were a deity, we would reasonably expect the world to look very different from the way we find it. True, it is possible to hypothesize a deity who intervenes - and fails to intervene - in exactly the right way to create the world that we see, including the various religious beliefs within it. But such a hypothetical being involves so many ad hoc auxiliary hypotheses and wild excuses that it is highly unparsimonious. So we should not believe in such a being".
Here are some examples of the ad hoc hypotheses and excuses needed:
1. A god creates complex living beings, but chooses to create them in precisely the one way (evolution by natural selection) that would also work without an intelligent designer/creator. This happens to be a woefully inefficient form of design and creation; about the least efficient means possible.
2. In case that method might lead to some doubt about its existence and powers, the god then carefully hides all evidence of the method it used, by burying it in ancient rocks and deep inside the creatures' DNA. Further, the god ensures that the creatures cannot even imagine the correct explanation for their existence until all the evidence is eventually dug up and pieced together. Further, that they will fiercely resist the correct explanation when it is finally discovered. Instead they will infer creation by other, directly supernatural means, and hence come to believe in the god by erroneous reasoning.
3. The god is capable of inducing belief directly in its creatures, but doesn't do so because it regards that as a violation of their free will. However it is happy to use other forceful means of inducing belief, such as early childhood indoctrination, constant repetition and ritual, strong cultural expectation and moral pressure, ostracism for disbelief, or even state persecution/coercion for disbelief. These are presumably NOT considered violations of free will.
4. Notwithstanding point 3, the god chooses to reveal itself directly to some of its creatures, but then chooses methods of revelation which are highly inconsistent between subjects, and indistinguishable from various forms of sensory illusion and mental illness. Evidence of these revelations is then arranged to be preserved imperfectly in oral accounts and eventually written up in unprofessionally-authored documents, rather than being preserved via more reliable recording methods.
Is all of that story impossible? No it isn't.
Is it at all plausible? No it isn't.
Does the principle of parsimony make us reject such a story? Yes, it does.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-05-16T00:36:58.095Z · LW(p) · GW(p)
I'll point out here that even in America, many theists accept evolution, but most believe in guided evolution, where the deity set the process in motion and then directed the course of evolution to the desired result. This doesn't offer predictions that deviate nearly as much from our observations as the predictions of creationism, but our observations still contain a suspicious number of evolutionary dead ends, do-overs, and failures to use the best available evolutionary mechanisms (why couldn't our evolutionary guide have given us eyes more like squid eyes?)
↑ comment by Shmi (shminux) · 2013-05-14T19:37:59.368Z · LW(p) · GW(p)
I have looked through a bunch of your recent replies, and they exhibit a number of standard cognitive biases worth addressing before you can profitably carry on any religion-related discussion. Or any rational discussion, for that matter. Learning about the biases and learning to identify them in yourself is an important part of instrumental rationality. After you are at a reasonable discourse level, and have critically examined your epistemology, as most regulars here have done and still do on occasion, you might or might not choose to be a Mormon for religious and/or possibly social reasons. Or you may decide to not open that particular Pandora's box, who knows. But you are not there yet. Your religion-related arguments are on the level of a physics newbie arguing against relativity with the race-car-on-a-train idea, or Draco arguing for blood purity in HPMOR. You cannot even understand the arguments presented to you, and so you reject them out of hand.
Replies from: None↑ comment by [deleted] · 2013-05-14T19:56:51.378Z · LW(p) · GW(p)
a number of standard cognitive biases
Um, I'd be happy to hear all about them. Like, specific biases and examples. It's not much help for me just to be told I'm completely clueless.
Your religion-related arguments are of the level of a physics newbie arguing against relativity with the race car-on-a-train idea
Keep in mind that I never intended to challenge atheism. I'm not trying to convert anybody, because I know how that would appear.
You cannot even understand the arguments presented to you, and so you reject them out of hand.
Obviously I have to disagree. I've heard many arguments here that educated me and expanded my understanding, and a few people have said that they agree with points I have made. But if you insist on fixating upon my newness—what specifically would you recommend I read to improve? I've read most of the sequences, and I've been keeping up with general discussion for a few weeks now.
↑ comment by Desrtopa · 2013-05-14T19:31:23.678Z · LW(p) · GW(p)
That doesn't really answer my question.
Also, keep in mind that you've deliberately been keeping the discussion away from any actual religion, and focused simply on the question of theism. I think nearly everyone here would have more arguments against all existing religions.
↑ comment by bartimaeus · 2013-05-10T19:54:08.109Z · LW(p) · GW(p)
Absence of Evidence is directly tied to having a probabilistic model of reality. There might be an inferential gap when people refer you to it, because on its own the argument doesn't seem strong. But it's a direct consequence of Bayesian reasoning, which IS a strong argument.
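A rough sketch of how this falls out of Bayes' theorem (nothing here is specific to religion; H is any hypothesis and E any evidence it leads you to expect):

$$P(H \mid \neg E) \;=\; \frac{P(\neg E \mid H)\,P(H)}{P(\neg E)}.$$

If H makes E more likely than its negation does, i.e. $P(E \mid H) > P(E \mid \neg H)$, then $P(\neg E \mid H) < P(\neg E)$, and therefore $P(H \mid \neg E) < P(H)$: failing to observe the expected evidence must lower your probability for H, by an amount that depends on how strongly H predicted E.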
(Just to clarify: I didn't mean to accuse you of ignorance, and I sympathize with having everyone spam you with links to the same material, which must be aggravating.)
Replies from: None↑ comment by [deleted] · 2013-05-10T20:02:58.995Z · LW(p) · GW(p)
It's certainly an important point, but I think that atheists tend to overuse it. I can't begin to criticize Bayesian reasoning, especially not here.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2013-05-14T03:41:24.948Z · LW(p) · GW(p)
Bayesian probabilistic reasoning is the unique (up to isomorphism) generalization of Aristotelian (two-valued) logic to reasoning about uncertainty. You can't throw it out without inconsistency.
Replies from: None
comment by gwern · 2013-05-07T18:56:37.205Z · LW(p) · GW(p)
EDIT: I am closing analysis on this poll now. Thanks to the 104 respondents.
This is a poll on a minor historical point which came up on #lesswrong
where we wondered how obscure some useless trivia was; please do not look up anything mentioned here - knowing the answers does not make you a better person, I'm just curious - and if you were reading that part of the chat, likewise please do not answer.
Do you know what a "holystone" is and is used for?
[pollid:462]
In this passage:
"Tu Mu relates a stratagem of Chu-ko Liang, who in 149 BC, when occupying Yang-p'ing and about to be attacked by Ssu-ma I, suddenly struck his colors, stopped the beating of the drums, and flung open the city gates, showing only a few men engaged in sweeping and sprinkling the ground. This unexpected proceeding had the intended effect; for Ssu-ma I, suspecting an ambush, actually drew off his army and retreated."
Do you know why the men are "sprinkling the ground"?
[pollid:463]
If yes, please reply to this comment using rot13 with what you believe they are doing and why.
In this passage:
"Simplicity of life, even the barest, is not a misery, but the very foundation of refinement: a sanded floor and whitewashed walls, and the green trees, and flowery meads, and living waters outside; or a grimy palace amid the smoke with a regiment of housemaids always working to smear the dirt together so that it may be unnoticed; which, think you, is the most refined, the most fit for a gentleman of those 2 dwellings?"
Does "sanded floor" refer to...?
[pollid:464]
(I'm writing a little essay on the topic; if you're curious, respond non-anonymously and in a week or three I'll ping you with a link to it.)
Replies from: Morendil, Nornagest, iconreforged, Emile, Eneasz, insufferablejake, ArisKatsaris, Lumifer, Qiaochu_Yuan↑ comment by Morendil · 2013-05-08T09:51:33.940Z · LW(p) · GW(p)
why the men are "sprinkling the ground"?
Unsure rather than "yes", but: xrrcvat gur qhfg qbja?
Replies from: gwern↑ comment by gwern · 2013-05-08T14:46:08.120Z · LW(p) · GW(p)
Lrf.
Replies from: ygert↑ comment by ygert · 2013-05-09T20:26:20.549Z · LW(p) · GW(p)
FYI, this is a good example of a case where rot13ing doesn't help at all. The instant I glanced at gwern's comment I got what was being said, simply from length considerations. In this case it's more or less OK, as it's not a major spoiler point and one would need to unrot13 Morendil's comment in order to actually get what you were saying "Lrf" about, but had gwern written the comment unrot13ed, I would have gotten exactly the same information from glancing at it.
(But maybe other people would not automatically infer the message from, say, the length? For me, it was something perfectly natural that my brain did automatically, but who knows, that might just be my brain. I am curious: do other people's brains also automatically react like that in situations like this?)
Replies from: Zaine↑ comment by iconreforged · 2013-05-23T13:26:00.596Z · LW(p) · GW(p)
Vs V'z guvaxvat pbeerpgyl, lneqf hfrq gb or ragveryl qveg, fhpu gung vs lbh fcevaxyrq gur lneq jvgu jngre, lbh pbhyq nibvq evfvat qhfg.
Replies from: gwern↑ comment by Emile · 2013-05-10T16:10:46.668Z · LW(p) · GW(p)
I also answered "unsure", and thought it was gb xrrc gur qhfg qbja (ybbxf yvxr V'z gur guveq bar va gung pnfr).
Replies from: gwern↑ comment by gwern · 2013-05-10T16:33:16.306Z · LW(p) · GW(p)
Vagrerfgvat ubj srj crbcyr xabj vg, vfa'g vg, jura nf sne nf V pna gryy vg'f n cresrpgyl beqvanel cneg bs yvsr va znal pbhagevrf naq unf orra sbe zvyyraavn? Ohg ba gur bgure unaq, vg ybbxf yvxr nyzbfg rirel bar vf trggvat gur 'fnaqrq sybbe' dhrfgvba evtug, juvpu fgevxrf zr nf jrveq orpnhfr lbh jbhyq guvax gung crbcyr jbhyq vasre gung vg ersref gb cnvagvat be pbafgehpgvba be fbzrguvat. V'ir fgnegrq gb jbaqre vs V fperjrq hc gur cbyy ol chggvat gur bgure dhrfgvbaf svefg... V guvax V znl arrq gb qb nabgure cbyy, creuncf ba tjrea.arg, jurer V punatr gur beqre bs gur dhrfgvbaf be znlor nfx bayl gur fnaq dhrfgvba be hfr n qvssrerag dhbgr... Uz.
↑ comment by Eneasz · 2013-05-08T22:14:51.245Z · LW(p) · GW(p)
#2 V nafjrerq hafher orpnhfr gur dhrfgvba jnf nzovthbhf. V nffhzr gurl'er fcevaxyvat jngre gb xrrc gur qhfg qbja - wnavgbevny jbex. Jnf gur dhrfgvba nobhg guvf yvgrenyyl, be nfxvat nobhg gur fvtavsvpnapr bs fubjvat n srj zra qbvat wnavgbevny jbex gb gur rarzl? Orpnhfr V qba'g xabj jung gur fvtavsvpnapr bs gung fbeg bs jbex fcrpvsvpnyyl vf.
Replies from: gwern↑ comment by insufferablejake · 2013-05-08T18:00:09.767Z · LW(p) · GW(p)
For #2 Fcevaxyvat jngre ba gur tebhaq gb xrrc vg sebz envfvat qhfg?
Replies from: gwern↑ comment by gwern · 2013-05-08T18:40:59.890Z · LW(p) · GW(p)
Whfg fb.
Replies from: insufferablejake↑ comment by insufferablejake · 2013-05-08T18:53:35.744Z · LW(p) · GW(p)
Vs V nz ubarfg, gura, V zhfg nqzvg gung gur cenpgvpr vf pbzzba va fbhgu Vaqvn, va gur fznyy gbja naq ivyyntrf. Pbzr penpx bs qnja lbh'yy svaq jbzra fjrrcvat naq jngrevat gur ragenaprf gb gur gurve ubzrf :) Ner lbh jevgvat na rffnl nobhg fbhgu Vaqvn? Gur fnaqrq sybbef naq gur juvgrjnfurq jnyyf ner nyfb erzvaqref bs gur fnzr guvat.
Replies from: gwern↑ comment by gwern · 2013-05-08T19:08:48.193Z · LW(p) · GW(p)
Lrf, gung jbhyqa'g fhecevfr zr ng nyy. Zl rffnl vfa'g nobhg fbhgu Vaqvn ohg npghnyyl zber nobhg Ratynaq naq Arj Ratynaq (gung'f jurer gur Tbbtyr Obbxf uvgf pbzr sebz sbe "fnaqrq sybbe", fb gung'f jurer gur rffnl tbrf), naq gurer gbb fnaqrq sybbef ner nffbpvngrq jvgu juvgrjnfurq jnyyf. Ner lbh sebz fbhgu Vaqvn naq pna qvfphff guvf, be qb lbh xabj bs nal hfrshy fbheprf? Na Vaqvna rknzcyr gb tb jvgu gur Puvarfr rknzcyr jbhyq or avpr.
Replies from: insufferablejake, insufferablejake↑ comment by insufferablejake · 2013-05-08T20:07:35.021Z · LW(p) · GW(p)
V'z fbeel V cbfgrq zber naq gura qryrgrq vg, V ernyvmrq gung guvf jnf n choyvp sbehz naq V nz cnenabvq nobhg cevinpl. Cyrnfr rznvy zr ng zl yj unaqyr ng tznvy, V'yy or unccl gb nafjre nal dhrfgvbaf lbh unir.
↑ comment by insufferablejake · 2013-05-08T19:59:46.988Z · LW(p) · GW(p)
V'z fbeel V cbfgrq zber naq gura qryrgrq vg, V ernyvmrq gung guvf jnf n choyvp sbehz naq V nz cnenabvq nobhg cevinpl. Cyrnfr rznvy zr ng zl yj unaqyr ng tznvy, V'yy or unccl gb nafjre nal dhrfgvbaf lbh unir.
Replies from: gwern↑ comment by gwern · 2013-05-08T20:13:35.686Z · LW(p) · GW(p)
V qvqa'g svaq nalguvat rvgure, ohg V qvq qvfpbire fbzrguvat nyzbfg nf tbbq: nccneragyl vg'f fgvyy n yvggyr avpur evghny guvat va Wncna pnyyrq 'hpuvzvmh', naq gurer'f dhvgr n srj cubgbf bs vg bayvar:
- uggcf://frpher.syvpxe.pbz/frnepu/?j=nyy&d=hpuvzvmh&z=grkg
- uggc://jjj.jbeyq-vafvtugf.pbz/hpuvzvmh-fcevaxyr-jngre-ba-gur-ebnq/
- uggc://jjj.qnaalpubb.pbz/cbfg/ra/1015/Hpuvzvmh.ugzy
Ab qvpr ba fnaqrq sybbef gubhtu.
↑ comment by ArisKatsaris · 2013-05-08T10:47:30.684Z · LW(p) · GW(p)
Likewise replied with "unsure" in the 2nd question but my guess was fcevaxyvat jvgu fbzrguvat yvxr fnyg fb gung ab jrrqf jvyy tebj.
Replies from: gwern↑ comment by Lumifer · 2013-06-29T04:27:24.751Z · LW(p) · GW(p)
For question 2, V oryvrir gurl ner jngrevat gur qhfgl tebhaq gb xrrc gur qhfg qbja. Bgurejvfr fjrrcvat whfg envfrf pybhqf bs qhfg vagb gur nve naq vf abg nyy gung hfrshy.
For question 3, zl rkcrpgngvba vf gung fbzr uneq sybbe (cbffvoyl pynl be qveg) vf pbirerq jvgu n yvtug ynlre bs fnaq -- fvzvyne gb ubj fnjqhfg jnf hfrq ba sybbef bs chof naq gnireaf. Gung'f abg na bcgvba va gur cbyy, gubhtu :-)
↑ comment by Qiaochu_Yuan · 2013-06-29T04:09:08.467Z · LW(p) · GW(p)
Regarding question 3, V guvax gur pbeerpg nafjre jnf gbb rnfl gb thrff; va cnegvphyne, V nz cerggl fher V thrffrq vg pbeerpgyl jvgubhg rire univat urneq gung grez orsber (ol nanybtl jvgu fnaqrq jbbq).
Replies from: gwern↑ comment by gwern · 2013-06-29T16:08:46.734Z · LW(p) · GW(p)
V nterr. Gur evtug nafjre vf bofpher naq fubhyq or ng n fvzvyne % nf gur bgure dhrfgvbaf, ohg vg'f jnl uvture; gb zr, guvf fnlf V sbezhyngrq gur dhrfgvba jebat. V'ir orra zrnavat gb eha n frpbaq cbyy guebhtu tjrea.arg gb trg n qvssrerag nhqvrapr bs erfcbaqragf, ohg V unira'g orra noyr gb guvax bs ubj gb nfx gur fnaql-sybbe dhrfgvba pbeerpgyl.
comment by Tenoke · 2013-05-05T11:36:28.866Z · LW(p) · GW(p)
I got a decent smartphone (SGS3) a few days ago and am looking for some good apps for LessWrong-related activities. I am particularly interested in recommendations for lifelogging apps but would look into any other type of recommendations. Also I've rooted the phone.
Replies from: mstevens, Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-05-08T07:02:22.053Z · LW(p) · GW(p)
You mean like "get Chrome so you can browse LW on your phone" or like "get Sleep Cycle and, even if you don't trust its measure of how good your sleep is, you can at least log when you go to sleep and wake up every day"?
comment by sixes_and_sevens · 2013-05-04T14:39:47.851Z · LW(p) · GW(p)
Would learning Latin confer status benefits?
I've recently gotten the idea in my head of taking a twelve-week course in introductory Latin, mostly for nerdy linguistic reasons. It occurs to me that learning an idiosyncratic dead language is archetypal signalling behaviour, and this fits in with my observations. The only people I know with any substantial knowledge of the language either come from privileged backgrounds and private education, or studied Classics at university (which also seems to correlate with a privileged background).
A lot of the bonding that takes place over Latin doesn't even seem to involve being able to actually use it. A shared experience of the horrors of conjugate forms and declension tables seems to be enough. While a twelve-week introductory course isn't going to equip someone with much in the way of usable skills, it will certainly satisfy this criterion.
It seems odd that it's possible to just acquire an elite status marker like this.
Replies from: wedrifid, Qiaochu_Yuan, army1987↑ comment by Qiaochu_Yuan · 2013-05-05T07:58:45.986Z · LW(p) · GW(p)
Taboo "status." Who do you want to impress?
↑ comment by A1987dM (army1987) · 2013-05-04T16:55:00.054Z · LW(p) · GW(p)
Would learning Latin confer status benefits?
It probably depends on where you are, how old you are, and what your social circle is like.
comment by lukeprog · 2013-05-10T20:29:15.512Z · LW(p) · GW(p)
A monthly "Irrational Quotes" thread might be nice. My first pick would be:
Basically, Godel’s theorems prove the Doctrine of Original Sin, the need for the sacrament of penance, and that there is a future eternity.
Samuel Nigro, "Why Evolutionary Theories are Unbelievable."
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-05-11T17:00:18.506Z · LW(p) · GW(p)
Previous threads: Anti-rationality quotes and Arational quotes. There have also been A sense of logic and A Kick in the Rationals, though these were not restricted to quotes.
comment by A1987dM (army1987) · 2013-05-08T16:01:57.328Z · LW(p) · GW(p)
Suppose I have several different points to make in response to a given comment. Do I write all of them in a single comment, or do I write each of them in a separate comment? There doesn't seem to be a universally accepted norm about this -- the former seems to be more common, but there's at least one regular here who customarily does the latter, and I can't remember anyone complaining about that.
Advantages of writing separate comments:
- I can retract each of them individually, in case I change my mind about one of them but still stand by the others (as here).
- Each of them can be upvoted or downvoted separately, so I don't have to guess what people are enjoying or objecting to.
- If each comment gives rise to a discussion, and someone is only interested in one of them, they can collapse the other ones.
Disadvantages of writing separate comments:
- It clutters the Recent Comments sidebar and page, my contribution history, and the mailbox of the author of the parent comment.
- It increases the total number of comments in the thread, making it more likely for it to exceed the maximum number a reader has set and for other comments to get hidden.
- It may come across as something unusual done in an attempt to get more total karma than I could otherwise have.
Should we standardize on one possibility, or decide on a case-by-case basis?
Replies from: TimS, drethelin, TimS↑ comment by TimS · 2013-05-08T16:33:08.255Z · LW(p) · GW(p)
Should we standardize on one possibility, or decide on a case-by-case basis?
As a more serious response, I personally try to make one response, unless the commenter is still actively part of the discussion and the discussion has clearly split into two topics. In practice, that tends to weigh very strongly against splitting.
One major disadvantage of splitting an active conversation is that interesting points may go into only one branch, and end up missed in the other branch. Especially if one's main method of browsing is clicking the recent comments.
↑ comment by TimS · 2013-05-08T16:29:39.936Z · LW(p) · GW(p)
I'm just enjoying that this post is upvoted for asking a question, but the upvoter did not make any suggestion for the answer. My sense of humor is apparently quite degenerate.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-05-08T16:34:17.886Z · LW(p) · GW(p)
Maybe the upvoter wants my comment to be more visible because they are also interested in other people's opinion on this, but didn't have anything to add to what I said themselves.
Replies from: TimS↑ comment by TimS · 2013-05-08T16:37:25.592Z · LW(p) · GW(p)
I think you took my comment more seriously than I intended. Anyway, I don't sort by karma because I find it confusing to follow conversations when comments aren't listed in the order made. But I'm not trained by Reddit (or wherever the sort-by-karma norms are coming from).
comment by khafra · 2013-05-08T13:58:59.418Z · LW(p) · GW(p)
Michael Chwe, a game theorist at UCLA, just wrote a book on Jane Austen. It combines game theory and social signaling, so it looks like it'll be on the LW interest spectrum:
Austen’s clueless people focus on numbers, visual detail, decontextualized literal meaning, and social status. These traits are commonly shared by people on the autistic spectrum; thus Austen suggests an explanation for cluelessness based on individual personality traits. Another of Austen’s explanations for cluelessness is that not having to take another person’s perspective is a mark of social superiority over that person. Thus a superior remains clueless about an inferior to sustain the status difference, even though this prevents him from realizing how the inferior is manipulating him.
A later expansion on that gives a list of biases to avoid, including the typical mind fallacy and a few new ones:
Austen gives five explanations for cluelessness, the conspicuous absence of strategic thinking. First, Austen suggests that cluelessness can result from a lack of natural ability: her clueless people have several personality traits (a fixation with numeracy, visual detail, literality, and social status) often associated with autistic spectrum disorders. Second, if you don’t know much about another person, it is difficult to put yourself into his mind; thus cluelessness can result from social distance, for example between man and woman, married and unmarried, or young and old. Third, cluelessness can result from excessive self-reference, for example thinking that if you do not like something, no one else does either. Fourth, cluelessness can result from status differences: superiors are not supposed to enter into the minds of inferiors, and this is in fact a mark or privilege of higher status. Fifth, sometimes presuming to know another’s mind actually works: if you can make another person desire you, for example, then his prior motivations truly don’t matter.
comment by Adele_L · 2013-05-02T00:21:52.451Z · LW(p) · GW(p)
I had a small thought the other day. Average utilitarianism appeals to me most of the various utilitarianisms I have seen, but has the obvious drawback of allowing utility to be raised simply by destroying beings with less than average utility.
My thought was that maybe this could be solved by making the individual utility functions permanent in some sense, i.e. killing someone with low utility would still cause average utility to decrease if they would have wanted to live. This seems to match my intuitions on morality better than any other utilitarianism I have seen.
One strange thing is that the preferences of our ancestors would still count just as much as those of any other person, but I had already been updating in this direction after reading an essay by gwern called the narrowing moral circle. I wasn't able to think of anything else too weird, but I haven't thought too much about this yet.
Anyway, I was wondering if anyone else has explored this idea already, or if anyone has any thoughts about it.
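As a toy illustration of the difference (purely hypothetical numbers; "utility" here is just a preference-satisfaction score, and the penalty for a thwarted preference to live is an arbitrary modeling choice, not anything from the comment above):

```python
# Toy comparison: naive average utilitarianism vs. the "permanent
# preferences" variant sketched above. All names and numbers are made up.

population = {"A": 5.0, "B": 4.0, "C": 1.0}  # C is below average but wants to live

def naive_average(alive):
    """Average utility over currently existing agents only."""
    return sum(alive.values()) / len(alive)

def permanent_average(alive, dead):
    """Average over everyone who has ever existed; a dead agent who wanted
    to live is scored by how the world looks from their (now frustrated)
    preferences, crudely modeled here as a fixed penalty of 3 points."""
    scores = list(alive.values()) + [u - 3.0 for u in dead.values()]
    return sum(scores) / len(scores)

# Killing C raises the naive average...
print(naive_average(population))            # 3.33...
print(naive_average({"A": 5.0, "B": 4.0}))  # 4.5, which looks like an "improvement"

# ...but lowers the permanent-preferences average, since C's thwarted
# preference to keep living still counts.
print(permanent_average(population, {}))                     # 3.33...
print(permanent_average({"A": 5.0, "B": 4.0}, {"C": 1.0}))   # (5 + 4 - 2) / 3 = 2.33...
```

The fixed penalty is the crude part; the point is only that once dead agents' preferences stay in the denominator and the sum, removing a below-average agent no longer looks like a free improvement.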
Replies from: Luke_A_Somers, Nornagest, Thrasymachus↑ comment by Luke_A_Somers · 2013-05-02T13:46:40.669Z · LW(p) · GW(p)
You don't evaluate the level of contemporary preference at each future time.
You evaluate the current preferences, which are evaluated over the future history of the universe.
The people to be slain will likely object to this plan based on these current preferences.
↑ comment by Nornagest · 2013-05-02T01:08:27.230Z · LW(p) · GW(p)
That's even less tractable a problem than summing over the utility functions of all existing agents, but that's not necessarily a game-changer. There are some other odd features of this idea, though:
- It only seems to work with preference utilitarianism; pleasure/pain utilitarianism would still treat the painless death of an agent with neutral expected utility as neutral. Fair enough; preference utilitarianism seems less broken than conventional utilitarianism anyway.
- Contingent on using preference utilitarianism, certain ways of doing the summing lead to odd features regarding changing cultural values: if future preferences are unbounded in time, a big enough stack of dead ancestors with strong enough preferences could render arbitrary social changes unethical. This could be avoided by summing only over potential lifespan, time-discounting in some way, or using some kind of nonstandard aggregation function that takes new information into account.
- Let's say we're now at a point in time t0. We can plan for t0 using only the preferences of existing or previous agents; all very intuitive so far. But let's say we consider a time t1 further in the future. New agents will have been introduced between t0 and t1, and there's no obvious way to take their preferences into account; every option gives us potential inconsistencies between optimal actions planned at t0 and optimal actions taken at time t1. The least bad option seems to be doing a probability-weighted average over agents extant in all possible futures, but (besides being just ridiculously intractable) that seems to introduce some weird acausal effects that I'm not sure I want to deal with. Taking the average at least avoids some of the crazier possible consequences, like the utilitarian go forth and multiply that I'm sure you've thought of already.
↑ comment by Adele_L · 2013-05-02T02:48:57.370Z · LW(p) · GW(p)
Yeah this only makes sense for preference utilitarianism, I should have mentioned that.
It is strange, to be sure. I wonder what the aggregated preferences of humanity would look like. I wouldn't be too surprised if it ended up being really similar to the aggregated preferences of current humans. Also, adding some sort of EV to this would probably make any issue here go away. But in any case, it seems to be an open problem how to choose the starting set of utility functions in a moral way. Once things were running, it might work pretty well, especially once death is solved.
Why not just plan for whatever the current set of utility functions is? In the context of a FAI, it probably wouldn't want the aggregate utility function to change anyway. But again, deciding which functions to aggregate seems to be unsolved.
Replies from: latanius↑ comment by latanius · 2013-05-02T03:39:53.571Z · LW(p) · GW(p)
Aren't utility functions kind of... invariant to scaling and addition of a constant value?
That is, you can say that "I would like A more than B" but not "having A makes me happier than you would be having it". Nor "I'm neither happy nor unhappy, so me not existing wouldn't change anything". It's just not defined.
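For reference, the standard way of putting this: a von Neumann-Morgenstern utility function is only defined up to a positive affine transformation, so

$$U'(x) = a\,U(x) + b, \qquad a > 0,$$

represents exactly the same preferences as $U$. Sums or averages across different people's utility functions therefore have no meaning until some extra normalization is chosen, and different normalizations give different answers.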
Actually, the only place different people's utility functions can be added up is in a single person's mind, that is, "I value seeing X and Y both feeling well twice as much as just X being in such a state". So "destroying beings with less than average utility" would appeal to those who tend to average utilities instead of summing them. And, of course, it also depends on what they think of those utility functions.
(that is, do we count the utility function of the person before or after giving them antidepressants?)
Of course, the additional problem is that no one sums up utility functions the same way, but there seems to be just enough correlation between individual results that we can start debates over the "right way of summing utility functions".
Replies from: Nornagest↑ comment by Nornagest · 2013-05-02T06:00:53.230Z · LW(p) · GW(p)
It's hard to do utilitarian ethics without commensurate utility functions, and so utilitarian ethical calculations, in the comparatively rare cases where they're implemented with actual numbers, often use a notion of cardinal utility. (The Wikipedia article's kind of a mess, unfortunately.) As far as I can tell this has nothing to do with cardinal numbers in mathematics, but it does provide for commensurate utility scales; in this case, you'd probably be mapping preference orderings over possible world-states onto the reals in some way.
There do seem to be some interesting things you could do with pure preference orderings, analogous to decision criteria for ranked-choice voting in politics. As far as I know, though, they haven't received much attention in the ethics world.
↑ comment by Thrasymachus · 2013-05-04T22:28:38.358Z · LW(p) · GW(p)
There are probably two stronger objections to average util along the lines you mention.
1) Instead of talking about killing someone with net positive utility, consider bringing someone into existence who has positive utility, but below the world average. It seems intuitive to say that would be good (especially if the absolute levels were really high), yet avutil rules it out. To make it more implausible, say the average is dragged up by blissfully happily aliens outside of our lightcone.
2) Consider a world where there are lives that are really bad, and better off not lived at all. Should you add more lives that are marginally less really bad than those lives that currently exist. Again, intuition says no, but negutil says yes - indeed, you should add as many of these lives as you can, as each subsequent not-quite-as-awful life raises average utility by progressively smaller fractions.
Replies from: Pablo_Stafforini↑ comment by Pablo (Pablo_Stafforini) · 2013-06-22T03:23:48.388Z · LW(p) · GW(p)
Again, intuition says no, but negutil says yes
I think you meant 'avutil'.
comment by whpearson · 2013-05-01T22:32:48.901Z · LW(p) · GW(p)
I'd like some comments on the landing page of a website I am working on, Experi-org. It has to do with experimenting with organisations.
I mainly want feedback on tone and clarity of purpose. I'll work on cleaning it up more (getting a friend who is a proof reader to give it the once over), once I have those nailed down.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-05-02T01:15:37.732Z · LW(p) · GW(p)
You might be interested in Trust: The Social Virtues and The Creation of Prosperity. More generally, I was a little surprised at the pure experimental approach that didn't have a look at the degree of corruption in different real-world societies.
Corruption is widespread through our society. From major events like the Enron scandel to low level inefficiency in government it has a massive impact on our day to day lives. People aren't inherently evil, so it is the type of organisations that we create that are at fault.
I recommend "From major events like the Enron scandal to low level inefficiency in government, corruption has a massive effect on our day to day lives."
As for the next sentence, I'm not sure whether I don't understand you or don't agree with you. Admittedly, there will be more crime when there are weak barriers to crime, but I also believe that people who want to get away with something will, if they have the power, try to shape organizations which will let them get away with what they want.
Something to contemplate: Man creates huge Ponzi scheme in EVE Online just to prove he can do it. When it's over, he considers returning the money, which he has no use for, but he just can't make himself do it.
Replies from: whpearson↑ comment by whpearson · 2013-05-02T19:40:13.217Z · LW(p) · GW(p)
You might be interested in Trust: The Social Virtues and The Creation of Prosperity.
Thanks. I'll have a look at the book.
More generally, I was a little surprised at the pure experimental approach that didn't have a look at the degree of corruption in different real-world societies.
I did mention looking at various subjects in the What>Explore section, one of which will be looking at current real-world societies.
I focus on experimentation for a few different reasons:
1) Experimentation is hard. You can't do it on your own, you need other people, so it gets the most focus. Otherwise people might just read books and make observations, which leads to the second point.
2) Experiments are a teaching tool. People have to learn that a different way can be better for them, and the best way is to try it out for themselves.
3) There are lots of different societal norms and structures we haven't tried, so there might be opportunities to escape our current local optima.
I recommend "From major events like the Enron scandal to low level inefficiency in government, corruption has a massive effect on our day to day lives."
Thanks! I'll change that.
As for the next sentence, I'm not sure whether I don't understand you or don't agree with you. Admittedly, there will be more crime when there are weak barriers to crime, but I also believe that people who want to get away with something will, if they have the power, try to shape organizations which will let them get away with what they want.
I should probably put a qualifying "Most" in front of the people. I was writing it when I was trying to avoid weasel words.
But there is the question of why those you think of as "evil" get power. Who gets power is also somewhat a societal question.
comment by Prismattic · 2013-05-14T00:32:56.510Z · LW(p) · GW(p)
There is an article on impending AI and its socioeconomic consequences in the current issue of Mother Jones.
Karl Smith's reaction sounds rather Hansonian, except he doesn't try to make it sound less dystopian.
comment by NancyLebovitz · 2013-05-03T23:13:15.693Z · LW(p) · GW(p)
Does anyone remember a post (possibly a comment) with a huge stack of links about animal research not transferring to humans?
Replies from: gwern, TheOtherDave↑ comment by gwern · 2013-05-04T02:20:52.681Z · LW(p) · GW(p)
It was indeed me. You can find it somewhere, but I copied it over to http://www.gwern.net/DNB%20FAQ#fn95
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-05-13T14:53:04.157Z · LW(p) · GW(p)
I've started reading the links.
I was interested because I'd seen anti-animal experimentation people say that animal experimentation is unnecessary because we can use computer models. I concluded that these people were nitwits, and assumed that their primary argument must be wrong. Is there a name for that logical fallacy/bias?
I'm surprised that a lot of the uselessness seems to come from bad experimental design. I'd assumed the major problem would be that there are significant, non-obvious differences between humans and animals.
↑ comment by TheOtherDave · 2013-05-03T23:32:02.974Z · LW(p) · GW(p)
Well, yes, in that you aren't hallucinating.
No, in that I can't find it either on about 3 minutes of googling.
I vaguely recall gwern being involved, but may be confabulating.
↑ comment by NancyLebovitz · 2013-05-03T23:56:18.324Z · LW(p) · GW(p)
I was betting on either gwern or lukeprog.
comment by MedicJason · 2013-05-03T00:02:39.692Z · LW(p) · GW(p)
Hi, my name is Jason, this is my first post. I have recently been reading about 2 subjects here, Calibration and Solomonoff Induction; reading them together has given me the following question:
How well-calibrated would Solomonoff Induction be if it could actually be calculated?
That is to say, if one generated priors on a whole bunch of questions based on information complexity measured in bits - if you took all the hypotheses that were measured at 10% likely - would 10% of those actually turn out to be correct?
I don't immediately see why Solomonoff Induction should be expected to be well-calibrated. It appears to just be a formalization of Occam's Razor, which itself is just a rule of thumb. But if it turned out not to be well-calibrated, it would not be a very good "recipe for truth." What am I missing?
Replies from: Viliam_Bur, DaFranker, MileyCyrus↑ comment by Viliam_Bur · 2013-05-03T07:13:32.390Z · LW(p) · GW(p)
Solomonoff Induction could be well-calibrated across mathematically possible universes. If a hypothesis has a probability of 10%, you should expect it to be true in 10% of the universes.
The important thing is that Solomonoff priors are just a starting point in our reasoning. Then we update on evidence, which is at least as important as having reasonable priors. If it does not seem well calibrated, that is because you can't get good calibration without using evidence.
Imagine that at this moment you are teleported to another universe with completely different laws of physics... do you expect any other method to work better than Solomonoff Induction? Yes, gradually you get data about the new universe and improve your model. But that's exactly what you are supposed to do with Solomonoff priors. You wouldn't predictably get better results by starting from different priors.
It appears to just be a formalization of Occam's Razor, which itself is just a rule of thumb.
To me it seems that Occam's Razor is a rule of thumb, and Solomonoff Induction is a mathematical background explaining why the rule of thumb works. (OR: "Choose the most simple hypothesis that fits your data." Me: "Okay, but why?" SI: "Because it is more likely to be the correct one.")
But if it turned out not to be well-calibrated, it would not be a very good "recipe for truth." What am I missing?
You can't get a good "recipe for truth" without actually looking at the evidence. Solomonoff Induction is the best thing you can do without the evidence (or before you start taking the evidence into account).
Essentially, Solomonoff Induction will help you avoid the following problems:
- Getting inconsistent results. For example, if you instead supposed that "if I don't have any data confirming or rejecting a hypothesis, I will always assume its prior probability is 50%", then if I give you two new hypotheses X and Y without any data, you are supposed to think that p(X) = 0.5 and p(Y) = 0.5, but also e.g. p(X and Y) = 0.5 (because "X and Y" is also a hypothesis you don't have any data about).
- Giving such an extremely low probability to a reasonable hypothesis that available evidence cannot convince you otherwise. For example, if you assume that the prior probability of X is zero, then with proper updating no evidence can convince you about X, because there is always an alternative explanation with a very small but non-zero probability (e.g. the lords of the Matrix are messing with your brain). Even if the value is technically non-zero, it could be very small, like 1/10^999999999, so all the evidence you could get within your human life could not make you change your mind.
On the other hand, some hypotheses do deserve very low prior probability, because reasoning like "any hypothesis, however unlikely, has prior probability at least 0.01" can be exploited by a) Pascal's mugging, b) constructing multiple mutually exclusive hypotheses which together have arbitrarily high probability (e.g. "AAA is the god of this world and I am his prophet", "AAB is the god of this world and I am his prophet"... "ZZZ is the god of this world and I am his prophet").
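A toy sketch of the second exploit, i.e. why a hard probability floor misbehaves while a description-length prior stays coherent (this is not a real Solomonoff prior; the three-letter "gods" and all numbers are made up for illustration):

```python
import math
from itertools import product
from string import ascii_uppercase

# One mutually exclusive hypothesis per three-letter name: "AAA is the god
# of this world", "AAB is the god of this world", and so on.
hypotheses = ["".join(p) for p in product(ascii_uppercase, repeat=3)]  # 17,576 of them

# Floor prior: "any hypothesis, however unlikely, gets at least 0.01".
# The total mass blows far past 1, so this cannot be a probability
# distribution over mutually exclusive hypotheses.
print(0.01 * len(hypotheses))  # 175.76

# Length-based prior in the spirit of Solomonoff: weight each hypothesis by
# 2^-(bits needed to single it out). Every hypothesis here needs the same
# boilerplate plus three letters, roughly 3 * log2(26) extra bits, so each
# gets an equal, tiny weight and the total stays bounded.
weight = 2 ** (-3 * math.log2(26))  # = 1/17576
print(weight * len(hypotheses))     # 1.0 (up to floating-point rounding)
```

The same toy model extends to four-letter or five-letter gods: the length-based weights keep summing to something finite, which is the point.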
↑ comment by MedicJason · 2013-05-03T14:35:11.706Z · LW(p) · GW(p)
Thank you for your reply. It does clear up some of the virtues of SI, especially when used to generate priors absent any evidence. However, as I understand it, SI does take into account evidence - one removes all the possibilities incompatible with the evidence, then renormalizes the probabilities of the remaining possibilities. Right?
If so, one could still ask - after taking account of all available evidence - is SI then well-calibrated? (At some point it should be well-calibrated, right? More calibrated than human beings. Otherwise, how is it useful? Or why should we use it for induction?)
Essentially the theory seems to predict that possible (evidence-compatible) events or states in the universe will occur in exact or fairly exact proportion to their relative complexities as measured in bits. Possibly over-simplifying, this suggests that if I am predicting between 2 (evidence-compatible) possibilities, and one is twice as information-complex as the other, then it should actually occur 1/3 of the time. Is there any evidence that this is actually true?
(I can see immediately that one would have to control for the number of possible "paths" or universe-states or however you call it that could lead to each event, in order for the outcome to be directly proportional to the information-complexity. I am ignoring this because the inability to compute this appears to be the reason SI as a whole cannot be computed.)
You suggest above that SI explains why Occam's razor works. I could offer another possibility - that Occam's Razor works because it is vague, but that when specified it will not turn out to match how the universe actually works very precisely. Or that Occam's Razor is useful because it suggests that when generating a Map one should use only as much information about the Territory as is necessary for a certain purpose, thereby allowing one to get maximum usefulness with minimum cognitive load on the user.
I am not arguing for one or the other. Instead I am just asking, here among people knowledgeable about SI - Is there any evidence that outcomes in the universe actually occur with probabilities in proportion to their information-complexity? (A much more precise claim than Occam's suggestion that in general simpler explanations are preferable.)
Maybe it will not be possible to answer my question until SI can at least be estimated, in order to actually make the comparison?
(Above you refer to "all mathematically possible universes." I phrased things in terms of probabilities inside a single universe because that is the context in which I observe & make decisions and would like SI to be useful. However I think you could just translate what I have said back into many-worlds language and keep the question intact.)
Replies from: Pfft, Viliam_Bur, DaFranker↑ comment by Pfft · 2013-05-03T22:52:01.831Z · LW(p) · GW(p)
after taking account of all available evidence - is SI then well-calibrated?
Yes. The prediction error theorem states that as long as the true distribution is computable, the estimate will converge quickly to the true distribution.
However, almost all the work done here comes from the conditioning. The proof uses the fact that for any computable mu, M(x) > 2^(-K(mu)) mu(x). That is, M does not assign a "very" small probability to any possible observation.
The exact prior you pick does not matter very much, as long as it dominates the set of all possible distributions mu in this sense. If you have some other distribution P, such that for every mu there is a C with P(x) > C mu(x), you get a similar theorem, differing by just the constant in the inequality.
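Rendered in the usual notation (a sketch; the exact constants depend on the reference machine and on the precise statement of the theorem):

$$M(x) \;\geq\; 2^{-K(\mu)}\,\mu(x) \qquad \text{for every computable measure } \mu,$$

and from this dominance one obtains a bound of the form

$$\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[\big(M(x_t{=}1 \mid x_{<t}) - \mu(x_t{=}1 \mid x_{<t})\big)^{2}\right] \;\leq\; c\,K(\mu)$$

for a small constant $c$, so the total squared prediction error is finite and $M$'s conditional predictions converge to $\mu$'s on $\mu$-typical sequences.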
So I disagree with this:
Essentially the theory seems to predict that possible (evidence-compatible) events or states in the universe will occur in exact or fairly exact proportion to their relative complexities as measured in bits
It's OK if the prior is not very exact. As long as we don't write off any possibilities as a priori super-unlikely when they are not, we can use observations to pin down the exact proportions later.
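To make this concrete, here is a toy sketch in Python - not actual Solomonoff induction, which is uncomputable; the hypothesis class, the two priors, and all the numbers are made up for illustration - showing that two quite different priors, both of which give every hypothesis nonzero weight, end up making nearly the same predictions once conditioned on enough data:

```python
# Toy illustration: two different priors over the same hypothesis class
# make nearly the same predictions after conditioning on enough data.

import random

random.seed(0)

# Hypothesis class: "the coin has bias p" for a small grid of p values.
hypotheses = [i / 10 for i in range(1, 10)]          # 0.1 ... 0.9

# Two different priors that both give every hypothesis nonzero weight
# (i.e. both "dominate" this class in the relevant sense).
prior_uniform  = {p: 1 / len(hypotheses) for p in hypotheses}
prior_lopsided = {p: 0.5 ** i for i, p in enumerate(hypotheses, 1)}
total = sum(prior_lopsided.values())
prior_lopsided = {p: w / total for p, w in prior_lopsided.items()}

def posterior(prior, data):
    """Bayes-update a prior over coin biases on a sequence of 0/1 flips."""
    post = dict(prior)
    for x in data:
        for p in post:
            post[p] *= p if x == 1 else (1 - p)
        z = sum(post.values())
        post = {p: w / z for p, w in post.items()}
    return post

def predict_next(post):
    """Mixture probability that the next flip is 1."""
    return sum(p * w for p, w in post.items())

true_bias = 0.7
data = [1 if random.random() < true_bias else 0 for _ in range(500)]

for n in (0, 10, 100, 500):
    a = predict_next(posterior(prior_uniform,  data[:n]))
    b = predict_next(posterior(prior_lopsided, data[:n]))
    print(f"after {n:3d} flips: uniform prior predicts {a:.3f}, lopsided prior predicts {b:.3f}")
```

The two priors disagree a lot at the start, but the disagreement washes out as the conditioning does the work.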
↑ comment by Viliam_Bur · 2013-05-03T16:00:58.595Z · LW(p) · GW(p)
However, as I understand it, SI does take into account evidence - one removes all the possibilities incompatible with the evidence, then renormalizes the probabilities of the remaining possibilities. Right?
I am not sure about the terminology. I would call the described process "Solomonoff priors, plus updating", but I don't know the official name.
after taking account of all available evidence - is SI then well-calibrated?
I believe the answer is "yes, with enough evidence it is better calibrated than humans".
How much would "enough evidence" be? Well, you need some to compensate for the fact that humans are already born with some physiology and instincts adapted by evolution to our laws of physics. But this is a finite amount of evidence. All the evidence that humans get should be processed better by the hypothetical "Solomonoff prior plus updating" process. So even if the process started from zero and got the same information as humans, at some moment it should become and remain better calibrated.
the theory seems to predict that possible (evidence-compatible) events or states in the universe will occur in exact or fairly exact proportion to their relative complexities as measured in bits [...] if I am predicting between 2 (evidence-compatible) possibilities, and one is twice as information-complex as the other, then it should actually occur 1/3 of the time
Let's suppose that there are two hypotheses H1 and H2, each of them predicting exactly the same events, except that H2 is one bit longer and therefore half as likely as H1. Okay, so there is no evidence to distinguish between them. Whatever happens, we either reject both hypotheses, or we keep their ratio at 1:2.
Is that a problem? In real life, no. We will use the system to predict future events. We will ask about a specific event E, and by definition both H1 and H2 would give the same answer. So why should we care whether the answer was derived from H1, from H2, or from a combination of both? The question will be: "Will it rain tomorrow?" and the answer will be: "No." That's all, from outside.
Only if you tried to look inside and ask "What was your model of the world that you used for this prediction?" would the machine tell you about H1, H2, and infinitely many other hypotheses. Then you could ask it to use Occam's Razor to choose only the simplest one and display it to you. But internally it could keep all of them (we already suppose it has infinite memory and infinite processing power). Note, if I understand it correctly, that in general it would actually be impossible for the machine to tell whether two hypotheses H1 and H2 are evidence-compatible.
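A tiny sketch (in Python, with invented hypotheses and numbers) of the point that, from outside, it does not matter how the weight is split between hypotheses that predict exactly the same things:

```python
# Toy sketch: if H1 and H2 predict exactly the same events, the forecast
# does not depend on how probability is split between them - only the
# total weight on that shared prediction matters.

def mixture_forecast(weighted_hypotheses, event):
    """P(event) under a weighted mixture of hypotheses."""
    return sum(w * h(event) for h, w in weighted_hypotheses)

# H1 and H2 give identical predictions (say, 10% chance of rain tomorrow);
# H3 is some rival hypothesis that disagrees.
H1 = lambda event: 0.10 if event == "rain tomorrow" else 0.90
H2 = lambda event: 0.10 if event == "rain tomorrow" else 0.90
H3 = lambda event: 0.80 if event == "rain tomorrow" else 0.20

# Two ways of splitting the same 0.6 total weight between H1 and H2.
split_a = [(H1, 0.40), (H2, 0.20), (H3, 0.40)]
split_b = [(H1, 0.20), (H2, 0.40), (H3, 0.40)]

print(mixture_forecast(split_a, "rain tomorrow"))   # 0.38
print(mixture_forecast(split_b, "rain tomorrow"))   # 0.38 - same answer
```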
Is there any evidence that outcomes in the universe actually occur with probabilities in proportion to their information-complexity?
They don't. To get the probabilities about something occurring in our universe, you need to get the information about our universe first. Solomonoff Induction tells you how to do that, in an arbitrary universe. Only after you get enough evidence to understand the universe do you start getting good results.
In other words, the laws of our universe don't say "things are probable according to their information complexity". Instead they say other things. The problem is... at the beginning, you don't know the laws of our universe exactly. So how can you learn them?
Imagine yourself living centuries ago. If you knew Solomonoff Induction, it would give you a non-zero probability for quantum physics (and many other things, most of them wrong). A hypothetical machine with infinite power, able to do all the calculations, could in theory derive quantum physics just by receiving the evidence you see. Isn't that awesome?
I phrased things in terms of probabilities inside a single universe because that is the context in which I observe & make decisions and would like SI to be useful.
Me too. But we still don't know all the laws of our universe. So in that aspect "what universe do we live in" remains a bit unknown.
However I think you could just translate what I have said back into many-worlds language and keep the question intact.
Careful. There is a difference between the quantum "many worlds", which are all supposed to follow the same laws of physics, and hypothetical universes with other laws of physics, the Tegmark multiverse.
Again, I agree that we should only care about our laws of physics, and about our branch of "many worlds". But still we have a problem of not knowing exactly what the laws are, and which branch it is. So we need a method to work with multiple possible laws, and multiple possible branches. With enough updating on our evidence, the probabilities of the other laws and other branches will get close to zero, and the remaining ones will be the most relevant for us.
Replies from: MedicJason, MedicJason↑ comment by MedicJason · 2013-05-03T18:39:17.501Z · LW(p) · GW(p)
They don't. To get the probabilities about something occurring in our universe, you need to get the information about our universe first. Solomonoff Induction tells you how to do that, in an arbitrary universe. Only after you get enough evidence to understand the universe do you start getting good results.
Yes, but we already have lots of information about our universe. So, making use of all that, if we could start using SI to, say, predict the weather, would its predictions be well-calibrated? (They should be - modern weather predictions are already well-calibrated, and SI is supposed to be better than how we do things now.) That would require that ALL predictions compatible with currently known info occur in EXACT PROPORTION to their bit-length complexity.
Is there any evidence that this is the case?
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-05-04T09:11:33.789Z · LW(p) · GW(p)
ALL predictions compatible with currently known info occur in EXACT PROPORTION to their bit-length complexity
I admit I am rather confused here, but here is my best guess:
It is not true, in our specific world, that all predictions compatible with the past will occur in exact proportion to their bit-length complexity. Some of them will occur more frequently, some of them less frequently. The problem is, you don't know which ones. All of them are compatible with the past, so how could you tell the difference, except by a lucky guess? How could any other model tell the difference, except by a lucky guess? How could you tell which model guessed the difference correctly, except by a lucky guess? So if you want to get the best result on average, assigning probabilities according to bit-length complexity is best.
↑ comment by MedicJason · 2013-05-03T18:22:31.589Z · LW(p) · GW(p)
You quoted me
"the theory seems to predict that possible (evidence-compatible) events or states in the universe will occur in exact or fairly exact proportion to their relative complexities as measured in bits [...] if I am predicting between 2 (evidence-compatible) possibilities, and one is twice as information-complex as the other, then it should actually occur 1/3 of the time"
then replied
"Let's suppose that there are two hypotheses H1 and H2, each of them predicting exactly the same events, except that H2 is one bit longer and therefore half as likely as H1. Okay, so there is no evidence to distinguish between them. Whatever happens, we either reject both hypotheses, or we keep their ratio at 1:2."
I am afraid I may have stated this unclearly at first. I meant, given 2 hypotheses that are both compatible with all currently-known evidence, but which predict different outcomes on a future event.
↑ comment by DaFranker · 2013-05-03T15:10:10.785Z · LW(p) · GW(p)
Is there any evidence that outcomes in the universe actually occur with probabilities in proportion to their information-complexity?
Yes, and the first piece of evidence is rather trivial. For any given law of physics, chemistry, etc., or basically any model of anything in the universe, I can conjure up an arbitrary number of more and more complicated hypotheses that match the current data, all or nearly all of which will fail utterly against new data obtained later.
For a very trivial thought experiment / example, we could have an alternate hypothesis which includes all of the current data, with only instructions to the Turing machine to print this data. Then we could have another which includes all the current data twice, but tells the Turing machine to only print one copy. Necessarily, both of these will fail against new data, because they will only print the old data and halt.
We could conjure infinitely many variants of this which also contain arbitrary amounts of gibberish right after the old data - gibberish which is unlikely to match the new data (matching with probability 1/2^n, where n is the length of the new data, assuming perfect randomness).
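If it helps, here is a toy sketch in Python of that thought experiment. The data strings and "hypotheses" are invented for illustration, and real Solomonoff induction ranges over programs for a universal Turing machine rather than Python functions:

```python
# Toy sketch: "print the observed data and halt" hypotheses (possibly padded
# with gibberish) match the old data perfectly but fail on what comes next,
# while a short rule that captures the pattern keeps predicting.

import random

old_data = "01010101"          # observations so far
new_data = "0101"              # what actually comes next

# Hypothesis A: literally replay the old data, then halt (predicts nothing new).
def hypothesis_replay():
    return old_data

# Hypothesis B: replay the old data, then emit random gibberish.
def hypothesis_gibberish(n_extra=4):
    return old_data + "".join(random.choice("01") for _ in range(n_extra))

# Hypothesis C: the short rule "alternate 0 and 1 forever".
def hypothesis_alternating(length):
    return "".join("01"[i % 2] for i in range(length))

total = len(old_data) + len(new_data)
print(hypothesis_replay().startswith(old_data + new_data))      # False: halts too early
print(hypothesis_gibberish().startswith(old_data + new_data))   # almost always False: the 4 random bits must be exactly "0101"
print(hypothesis_alternating(total) == old_data + new_data)     # True: keeps matching
```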
Replies from: MedicJason↑ comment by MedicJason · 2013-05-03T18:15:47.631Z · LW(p) · GW(p)
This seems reasonable - it basically makes use of the fact that most statements are wrong, so adding any given statement whose truth-value is as yet unknown is likely to make the hypothesis wrong.
However, that's vague. It supports Occam's Razor pretty well, but does it also offer good evidence that those likelihoods will manifest in real-world probabilities IN EXACT PROPORTION to the bit-lengths of their inputs? That is a much more precise claim! (For convenience I am ignoring the problem of multiple algorithms where hypotheses have different bit-lengths.)
Replies from: DaFranker↑ comment by DaFranker · 2013-05-03T19:20:59.400Z · LW(p) · GW(p)
It supports Occam's Razor pretty well, but does it also offer good evidence that those likelihoods will manifest in real-world probabilities IN EXACT PROPORTION to the bit-lengths of their inputs?
Nope, and we have no idea where we'd even start on evaluating this precisely because of the various problems relating to different languages. I think this is an active area of research.
It does seem though, by observation and inference (heh, use whatever tools you have), that more efficient languages tend to formulate shorter hypotheses, which hints at this.
There have also been some demonstrations of how well SI works for learning and inferring about a completely unknown environment. I think this was what AIXI was about, though I can't recall specifics.
↑ comment by DaFranker · 2013-05-03T14:04:16.006Z · LW(p) · GW(p)
Viliam_Bur gives a great run-down of what's going on. For a more detailed introduction though, see this post explaining Solomonoff Induction, or perhaps you'd prefer to jump straight to this paragraph (Solomonoff's Lightsaber) that contains an explanation of why shorter (simpler) hypotheses are more likely under Solomonoff Induction.
To make the bridge between that and what Viliam is saying: basically, if we consider all mathematically possible universes, then half the universes will start with a 1, and the other half will start with a 0. Then a quarter will start with 11, and another quarter with 10, and so on. Which means that, to reuse the example in the above-linked post, 01001101 (which matches observed data perfectly so far) will appear in 1 out of 256 mathematically-possible universes, and 1000111110111111000111010010100001 (which also matches the data just as perfectly) will only appear in 1 out of 17179869184 mathematically-possible universes.
So if we expect to live in one out of all the mathematically-possible universes, but we have no idea what properties it has (or if you just got warped to a different universe with different laws of physics), which of the two hypotheses do you want? The one that is true more often, in more of the possible universes, because you're more likely to be in one of those than in one that has the longer, rarer hypothesis.
That's the basic simplified logic behind it.
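For concreteness, the fractions above are just powers of two (a trivial check in Python, using the two strings from the linked example):

```python
# Among all infinite binary "universes", the fraction whose description
# starts with a given k-bit string is 2^-k.

short_hyp = "01001101"
long_hyp  = "1000111110111111000111010010100001"

print(2 ** len(short_hyp))   # 256           -> 1 in 256 universes
print(2 ** len(long_hyp))    # 17179869184   -> 1 in 17179869184 universes
```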
Replies from: MedicJason↑ comment by MedicJason · 2013-05-03T14:58:35.672Z · LW(p) · GW(p)
Yes, that was the post I read that generated my current line of questioning.
My reply to Viliam_Bur was phrased in terms of probabilities in a single universe, while your post here is in terms of mathematically possible universes. Let me try to rephrase my point to him in many-worlds language. This is not how I originally thought of the question, though, so I may end up a little muddled in translation.
Take your original example, where half of the Mathematically Possible Universes start with 1, and the other half with 0. It is certainly possible to imagine a hypothetical Actual Multiverse where, nevertheless, there are 5 billion universes with 1, and only 5 universes with 0. Who knows why - maybe there is some overarching multiversal law we are unaware of, or maybe it's just random. The point is that there is no a priori reason the Multiverse can't be that way. (It may not even be possible to say that the multiverse probably isn't that way without using Solomonoff Induction or Occam's Razor, the very concepts under question.)
If this were the case, and I were somehow universe-hopping, I would over time come to the conclusion that SI was poorly calibrated and stop using it. This, I think, is basically the many-worlds version of my suggestion to Viliam_Bur. As I said to him, I am not arguing for or against SI, I am just asking knowledgeable people if there is any evidence that the probabilities in this universe, or distributions across the multiverse, are actually in proportion to their information-complexities.
Replies from: DaFranker↑ comment by DaFranker · 2013-05-03T15:58:01.822Z · LW(p) · GW(p)
Hmm, I think I see what you mean.
Yes, there's no reason for Solomonoff to be well-calibrated in the end, but once we obtain information that most of the universes starting with 0 do not work, that is data against which most of the hypotheses starting with 0 will fail. At this point, brute Solomonoff induction will be obviously inefficient, and we should begin using the heuristic of testing almost exclusively hypotheses starting with 1.
In fact, we're already doing this: We know for a fact that we live in the subset of universes where the acceleration between two particles is not constant and invariant of distance. So it is known that the simpler hypothesis where gravitational attraction is "0.02c/year times the total mass of the objects" is not more likely than the one where gravitational attraction also depends on distance and angular momentum and other factors, despite the former being much less complex than the latter (or so we presume).
There are still murky depths and open questions, such as (IIRC) how to calculate how "long" (see Kolmogorov complexity) the instructions are.
For example, suppose we build two universal Turing machines with different sets of internal instructions.
We run Solomonoff Induction on the first machine, and it turns out that 01110101011110101010101111011 is the simplest possible program that will output "110", and by analyzing the language and structure of the machine we learn that this corresponds to the hypothesis "2*3", with the output being "6". Meanwhile, on the second machine, 1111110 will also output "110", and by analyzing it we find out that this corresponds to the hypothesis "6", with the output being "6".
On the first machine, to encode the hypothesis "6", we must write 101010101111110110101111111110000000111111110000110, which is much more complex than the earlier "2*3" hypothesis, while on the second machine the "2*3" hypothesis is input as 1010111010101111, which is much longer than the "6" hypothesis.
Which hypothesis, between "2*3" and "6", is simpler and less complex, based on what we observe from these two different machines? Which one is right? AFAIK, this is still completely unresolved.
Replies from: Pentashagon↑ comment by Pentashagon · 2013-05-03T18:31:03.168Z · LW(p) · GW(p)
Which hypothesis, between "2*3" and "6", is simpler and less complex, based on what we observe from these two different machines? Which one is right? AFAIK, this is still completely unresolved.
If we're considering hypotheses across all mathematically possible universes then why not consider hypotheses across all mathematically possible languages/machines as well?
Replies from: Viliam_Bur, DaFranker↑ comment by Viliam_Bur · 2013-05-04T09:17:04.986Z · LW(p) · GW(p)
What weight will we assign to the individual languages/machines? Their complexity... according to what? Perhaps we could make a matrix saying how complex a machine A is when simulated by a machine B, and then find the eigenvalues of the matrix?
Must stop... before head explodes...
↑ comment by DaFranker · 2013-05-03T18:49:50.992Z · LW(p) · GW(p)
If we're considering hypotheses across all mathematically possible universes then why not consider hypotheses across all mathematically possible languages/machines as well?
This is my intuition as well, though it has to be restricted to Turing-complete systems, I think. I was under the impression that there was already some active research in this direction, but I've never taken the time to look into it too deeply.
↑ comment by MileyCyrus · 2013-05-03T00:20:09.834Z · LW(p) · GW(p)
.
comment by Panic_Lobster · 2013-05-03T01:26:16.072Z · LW(p) · GW(p)
Has anyone here heard of Michael Marder and his "Plant Thinking"? There is a book being published by Columbia University Press which argues that plants need to be considered as subjects with ethical value, and as beings with "unique temporality, freedom, and material knowledge or wisdom." This is not satire. He is a research professor of philosophy at a European university.
http://www.amazon.ca/Plant-Thinking-A-Philosophy-Vegetal-Life/dp/0231161255 and here is a review http://ndpr.nd.edu/news/39002-plant-thinking-a-philosophy-of-vegetal-life/
I don't want to live on this planet anymore
Replies from: Jack, gwern, MrMind↑ comment by Jack · 2013-05-03T02:27:25.369Z · LW(p) · GW(p)
In Gender Trouble (1990), Judith Butler
...
accommodates plants' constitutive subjectivity, drastically different from that of human beings, and describes their world from the hermeneutical perspective of vegetal ontology (i.e., from the standpoint of the plant itself)"
...
So, in addition to the "vegetal différance" and "plants' proto-writing" (112) associated with Derrida, we're told that plant thinking "bears a close resemblance to the 'thousand plateaus'" (84) of Deleuze and Guattari. At the same time, plant thinking is "formally reminiscent of Heidegger's conclusions apropos of Dasein" (95),
So it's that kind of book.
Just so everyone is clear: this is the kind of "philosophy" that, in the States or the UK, would be done only at unranked programs or in English departments.
The review literally name checks every figure of shitty continental philosophy.
↑ comment by gwern · 2013-05-03T02:15:43.429Z · LW(p) · GW(p)
It's too bad; a book on what plants might think or what their views might look like - one which took the project seriously in extrapolating a possible plant civilization and its views and ethics, a colossally ambitious and scientifically grounded work of SF - could be pretty awesome. But from the sound of that review, that's exactly where Marder falls down.
Replies from: NancyLebovitz, None, drethelin, None↑ comment by NancyLebovitz · 2013-05-03T20:18:03.930Z · LW(p) · GW(p)
After contemplating how odd it is that people have a revulsion against weapons which use disease and poison - a revulsion they don't seem to have against weapons which use momentum, which they are in fact apt to consider high status - I wondered if there could be sentients with a reversed preference.
I think sentient trees could fill the requirement. IIRC, plants modulate their poisons according to threat level.
↑ comment by [deleted] · 2013-05-03T05:56:37.475Z · LW(p) · GW(p)
Olaf Stapledon's 'Star Maker'. The whole thing is filtered through semi-communist theology, but it's a fascinating trek through the author's far-flung ideas about all kinds of creatures and what they could hold in common versus major differences that come from their natures. One of the dozens of races he describes is a race of plant-men on an airless world that locked up all its volatiles in living soup in the deep valleys; they stand at the shore and soak up energy from their star in a meditative trance during the day and do more animal-style activity at night... His writing style is NOT for everyone, nor is his philosophy, but I heartily enjoyed it.
Replies from: gwern, NancyLebovitz↑ comment by gwern · 2013-05-03T15:49:22.042Z · LW(p) · GW(p)
Yes! Star Maker is one of the very few books that I'd place up there with Blindsight and a few others in depicting truly alien aliens; and he doesn't do it once but repeatedly throughout the book. It's really impressive how Stapledon just casually scatters around handfuls of jewels that lesser authors might belabor singly throughout an entire book.
↑ comment by NancyLebovitz · 2013-05-03T07:38:58.219Z · LW(p) · GW(p)
That book and Last and First Men and possibly Last and First Men in London are amazing. He's got paragraphs that a normal science fiction writer would flesh out into novels.
Replies from: None↑ comment by MrMind · 2013-05-03T10:20:33.169Z · LW(p) · GW(p)
If I'm not mistaken, there have been some studies on plant communication and data processing in their roots, enough to classify them as at least primitively intelligent. Anyway, since they are in fact living and autonomous beings, I don't see why they shouldn't be considered subjects of ethical reflection...
Replies from: falenas108↑ comment by falenas108 · 2013-05-03T16:16:26.498Z · LW(p) · GW(p)
If we don't say bacteria need ethical reflection, then it is very unlikely that plants do either.
Replies from: MrMind↑ comment by MrMind · 2013-05-06T08:30:34.140Z · LW(p) · GW(p)
Well, deciding when to stop caring at a certain complexity level is a sort of ethical reflection.
Anyway, if we care about humans and animals because they have some sort of thinking life, then if these studies are valid we should start paying attention to plants too. Of course we could simply decide we need to care on some other basis.
↑ comment by Panic_Lobster · 2013-05-08T23:11:25.429Z · LW(p) · GW(p)
We can reasonably say that something has a "thinking life" if it functions as a state machine where 'states' correspond to abstract models of sensory data (patterns in external stimuli). The complexity of the possible mental states is correlated with the complexity (information content) of the sensory data that can be collected and incorporated into models.
A cat's brain can be reasonably interpreted as working this way. A nematode worm's 302 neurons probably can't. A plant's root system almost definitely can't.
Note that this concept of a "thinking life" or sentience is much weaker and more inclusive than the concept of "personhood" or sapience.
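To make the definition a bit more concrete, here is a deliberately crude toy sketch in Python. The class and its update rule are invented purely for illustration - this is not a claim about how real nervous systems work:

```python
# Toy sketch of the definition above: a system whose internal "state" is an
# abstract summary (model) of its sensory history, updated as stimuli arrive.

from collections import Counter

class ToySentient:
    def __init__(self):
        # The "state": a crude model of the environment built from stimuli.
        self.model = Counter()

    def sense(self, stimulus):
        # Incorporate a new piece of sensory data into the internal model.
        self.model[stimulus] += 1

    def expectation(self):
        # The state determines behaviour: here, just the most familiar stimulus.
        return self.model.most_common(1)[0][0] if self.model else None

creature = ToySentient()
for s in ["warm", "light", "warm", "warm", "food"]:
    creature.sense(s)
print(creature.expectation())   # "warm" - the model now reflects its history
```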
comment by NancyLebovitz · 2013-05-14T14:41:36.834Z · LW(p) · GW(p)
Stanford University is offering a from-scratch introduction to physics, taught by Leonard Susskind.
This is a notification, not a review, since I've only listened to a few minutes of the first lecture, which is at least intriguing. I'm wondering where Susskind could go with the question of allowable laws of physics.
comment by CAE_Jones · 2013-05-04T21:27:02.823Z · LW(p) · GW(p)
Has there been an attempt at a RATIONAL! Wizard of Oz? I spontaneously started writing one in dialog form, then realized I would need to scrap it and start over with actual planning if I wanted to keep going. I like this idea, but I'm not sure how motivated I am to go through with it; I'd rather read an existing such fic, if one exists.
Replies from: bogus, Matt_Simpson↑ comment by bogus · 2013-05-04T22:28:55.978Z · LW(p) · GW(p)
The Wizard of Oz was originally written as a satirical take on the economic effects of the gold standard, although this important feature of the work has been mostly forgotten nowadays. Once you unpack the allegories, it actually shows quite a lot of rationality and common sense.
Replies from: gwern↑ comment by gwern · 2013-05-04T22:58:30.480Z · LW(p) · GW(p)
The Wizard of Oz was originally written as a satirical take on the economic effects of the gold standard
That's debatable: http://en.wikipedia.org/wiki/Political_interpretations_of_The_Wonderful_Wizard_of_Oz#Overview One has to wonder about a successful satire that takes 70 years to be unearthed as part of a convenient way to teach high-school students about history. http://www.halcyon.com/piglet/Populism.htm seems like a fairly convincing rebuttal.
↑ comment by Matt_Simpson · 2013-05-10T20:26:12.597Z · LW(p) · GW(p)
The book Wicked is based on The Wizard of Oz and has some related themes, IIRC. (I really didn't like the musical based on the book, though. But I might just dislike musicals in general; FWIW I also didn't like the only other musical I've seen in person - Rent.)
comment by [deleted] · 2013-05-02T14:30:04.735Z · LW(p) · GW(p)
There's an argument in the metaethics sequence to the effect that there are no universally compelling moral arguments. This argument seems to be an important cached thought (I don't mean that in any pejorative sense) in LW discussions of morality. This argument also seems to me to be faulty. Can anyone help me see what I'm missing?
The argument is from No Universally Compelling Arguments:
Yesterday, I proposed that you should resist the temptation to generalize over all of mind design space. If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.
This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn't buy it.
The central inference in the argument seems to me to go like this:
P1) Any universal generalization over minds ('All minds m: X(m)') is very unlikely to be true.
P2) A purportedly universally compelling moral argument has the form 'All minds m: X(m)'
C) A purportedly universally compelling moral argument is very unlikely to be true.
The reason I think this is faulty is that P1 is itself an argument of the form 'All minds m: X(m)', that is, it's a universal generalization over minds. If that's so, then P1 is very unlikely to be true, and we shouldn't accept the argument. In order to save the argument, we would have to weaken P1 to cover a more specific set of generalizations over minds (so that P1 itself is excluded) but if we do this, then the argument is invalid, since universally compelling moral arguments may end up excluded as well. We might have good reasons for thinking they won't be, but no such reasons are given in the sequence post.
Replies from: Adele_L, falenas108, DaFranker↑ comment by Adele_L · 2013-05-02T14:43:51.510Z · LW(p) · GW(p)
I don't see how your P1 is a statement over all minds; it looks more like a statement over most arguments.
Replies from: Qiaochu_Yuan, None↑ comment by Qiaochu_Yuan · 2013-05-02T18:39:33.500Z · LW(p) · GW(p)
Agreed. P1 is quantifying over arguments, not over minds.
↑ comment by [deleted] · 2013-05-02T14:58:57.223Z · LW(p) · GW(p)
I see the symmetry between P1 and a universally compelling moral argument in this: they both make a claim about the application of an argument quantifying over all minds in mind-space.
The claim EY is refuting is 'For all minds m, m: (moral argument X is compelling)m.'
P1 makes the claim 'For all minds m, m:(an argument of the form 'for all minds m:X(m) is unlikely to be true)m.'
Is that not right?
Replies from: Nisan, OrphanWilde↑ comment by Nisan · 2013-05-02T18:10:13.689Z · LW(p) · GW(p)
It looks like your P1 is quantifying twice over the same variable. I don't think that's right.
Replies from: None↑ comment by [deleted] · 2013-05-02T20:16:08.891Z · LW(p) · GW(p)
Is it? I intended it to only quantify over the non-nested m. Am I committed to quantifying over the nested m as well?
Replies from: Nisan↑ comment by Nisan · 2013-05-02T21:17:45.338Z · LW(p) · GW(p)
Now I'm just confused by your syntax.
Replies from: None↑ comment by [deleted] · 2013-05-02T21:21:01.412Z · LW(p) · GW(p)
Or, more likely, I am confused by my syntax. If you were to formalize EY's argument, how would you put it?
Replies from: Nisan, Qiaochu_Yuan↑ comment by Nisan · 2013-05-02T22:32:15.998Z · LW(p) · GW(p)
At the risk of prolonging an unproductive thread, I'd say P1 is like
P1: For most predicates X: Not (For all minds m: X(m))
This isn't self-refuting.
Replies from: None↑ comment by [deleted] · 2013-05-02T22:42:41.002Z · LW(p) · GW(p)
Thanks, you're right that this isn't self-refuting. But with that P1, the argument seems invalid:
P1: For most predicates X: Not (For all minds m: X(m))
P2: UCMAs are X
C: Not UCMA
is like
P1: For most prime numbers n: (odd)n
P2: 2 is prime
C: 2 is odd
Edit: you might think that the conclusion is not 'not UCMA' but 'UCMA is unlikely', but this doesn't follow either. I don't know quite how 'most' quantifiers work, but I don't think we can read a probabilistic conclusion off of them. I don't think it follows from the above, for example, that 2 is likely to be odd.
Replies from: Nisan↑ comment by Nisan · 2013-05-02T23:34:47.249Z · LW(p) · GW(p)
Yes, the crucial issue in this conversation is the concepts of 'most' and 'probability'. What you can conclude from P1 is that a priori, a randomly selected predicate X probably does not satisfy X(m) for all m. If we have other reasons to believe that X(m) holds for all m, then we can update our beliefs. Similarly, we expect that a randomly selected prime number n is probably odd; but if we learn the further fact that n=2, then our belief changes.
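To put rough numbers on that last point (a toy calculation in Python; drawing uniformly from the primes below 100 is just an assumption made for the example):

```python
# A randomly selected prime is almost certainly odd, but conditioning on the
# extra fact "n = 2" flips that belief entirely.

def primes_below(n):
    return [p for p in range(2, n) if all(p % d for d in range(2, int(p ** 0.5) + 1))]

primes = primes_below(100)
p_odd = sum(1 for p in primes if p % 2) / len(primes)
print(f"P(odd | random prime < 100) = {p_odd:.3f}")   # 24/25 = 0.960

# After learning the further fact that n = 2:
p_odd_given_two = sum(1 for p in primes if p == 2 and p % 2) / sum(1 for p in primes if p == 2)
print(f"P(odd | n = 2) = {p_odd_given_two:.1f}")      # 0.0
```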
Replies from: None↑ comment by [deleted] · 2013-05-03T01:06:13.546Z · LW(p) · GW(p)
So what do you make of this argument then? Suppose I were of the opinion that 2 is an even prime. You come to me with an argument to the effect that I should not believe 2 to be prime because a randomly selected prime number is very, very unlikely to be even. Should I be convinced by that? I may be convinced that in some sense, 2 is unlikely to be even, but I don't think I should accept that 2 is not even, or that the evenness of 2 is questionable.
Similarly, suppose someone believes an argument to be universally compelling. It seems to me that EY's argument should leave them unmoved: they can grant that it is unlikely for a randomly selected argument to be UC, but theirs is no randomly selected argument. And on DaFranker's reading of this argument, the thesis that a given X is unlikely to hold for all minds relies on the assumption that for most X's, there is (something like) a 50% chance of its being true of some mind. But certainly a UCMAist won't accept that this is true of UCMAs. UCMAs, they will say, are exactly those X's for which this is not true.
The burden may be on them to justify the possibility of such an X, but that fact won't save the argument.
Replies from: Nisan↑ comment by Nisan · 2013-05-03T02:02:12.714Z · LW(p) · GW(p)
As for your first paragraph, well, this is a straightforward application of Bayes' theorem. If you're sure that 2 is even, then learning that 2 was randomly selected from some distribution over primes should not be enough to change your credence very much.
As for your second and third paragraphs: Yes, the argument of Eliezer you're talking about doesn't refute the existence of universally compelling arguments; it merely means that you shouldn't believe you have a universally compelling argument unless you have a good reason for believing so. If you think you have a good reason, then you don't have to worry about this argument.
There's a very simple argument refuting the existence of universally compelling arguments, and I believe it was stated elsewhere in this thread. It's that argument you have to refute, not this one.
Replies from: None↑ comment by [deleted] · 2013-05-03T17:21:05.484Z · LW(p) · GW(p)
There's a very simple argument refuting the existence of universally compelling arguments, and I believe it was stated elsewhere in this thread. It's that argument you have to refute, not this one.
Please point this out to me if you get a chance, as I haven't noticed it. And thanks for the discussion. I mean that: I can see that this wasn't helpful or interesting for you, but rest assured it was for me, so your indulgence is appreciated.
Replies from: Nisan↑ comment by Nisan · 2013-05-04T01:25:23.457Z · LW(p) · GW(p)
You're welcome! The refutation of universally compelling arguments I was referring to is this one. I see you responded that you're interested in a different definition of "compelling". On the word "compelling", you say
On the one hand, we could mean 'persuasive' where this means something like 'If I sat down with someone, and presented the moral argument to them, they would end up accepting it regardless of their starting view'. This seems to be a bad option, because the claim that 'there are no universally persuasive moral arguments' is trivial.
This is indeed the meaning of "compelling" that Eliezer uses, and Eliezer's original argument is indeed trivial, which perhaps explains why he spent so few words on it.
If you wanted to defend a different claim, that there are arguments that all minds are "rationally committed" to accepting or whatever, then you'd have to begin by operationalizing "committed", "reasons", etc. I believe there's no nontrivial way to do this. In any case the burden is on others to operationalize these concepts in an interesting way.
Replies from: None↑ comment by Qiaochu_Yuan · 2013-05-02T21:36:04.814Z · LW(p) · GW(p)
Why would you want to formalize the argument?
Replies from: None↑ comment by [deleted] · 2013-05-02T21:47:22.816Z · LW(p) · GW(p)
That I can't argue with, though it wouldn't follow from that that UCMAs are likely to be false.
EDIT: you edited your post, and so my reply doesn't seem to make sense. In answer to your new question, I would say 'I don't, I just want some presentation of the argument on which its validity (or invalidity) is obvious'.
↑ comment by OrphanWilde · 2013-05-02T16:47:49.019Z · LW(p) · GW(p)
UCMA is making a claim about all minds; P1 is making a claim about some undefined subset of all minds.
They both talk about "all minds," but only one of them makes a claim -about- all minds.
A parallel pair of arguments might be: 'All squares are rectangles' and 'The claim that all squares are rectangles is unlikely to be true of all squares.'
The first claim is stronger than the second, and requires more proof. The fact that we can in fact prove it is irrelevant, and part of why I chose this example; consider the converse propositions - that all rectangles are squares, and that that claim is unlikely to be true - to see why this is important.
Replies from: None↑ comment by [deleted] · 2013-05-02T16:56:48.688Z · LW(p) · GW(p)
The claim that all squares are rectangles is unlikely to be true of all squares.
This is analogous to the conclusion of the above argument, not P1. An analogue to P1 would have to be something like 'Any argument of the form "for all squares s: (X)s" is unlikely to be true.' The question would then be this: does this analogue of P1 count as an argument of the form 'for all squares s: (X)s'? That is, does it quantify over all squares?
You might think it doesn't, since it just talks about arguments. But my point isn't quite that it must count as such an argument, but rather that it must count as an argument of the same form as P2 (whatever that might be). The reason is that P2 is not like 'all squares are rectangles'. If it were, P2 would be a (purportedly) universally compelling moral argument. But P2 is rather the claim that there is such an argument. P2 is 'for all minds m:(Moral Argument X is compelling)m'.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2013-05-02T17:27:09.771Z · LW(p) · GW(p)
I see what you're talking about. My confusion originates in your definition of P2, rather than P1, where I thought the confusion originated.
Suppose two minds, A and B. A has some function for determining truth, let's call it T. Mind B, on the other hand, is running an emulation of mind A, and its truth function is not(T).
Okay, yes, this is an utterly pedantic kind of argument, but I think it demonstrates that in -all- of mindspace, it's impossible to have any universally compelling argument, without relying on balancing two infinities (number of possible arguments and number of possible minds) against each other and declaring a winner.
Replies from: None↑ comment by [deleted] · 2013-05-02T20:14:26.898Z · LW(p) · GW(p)
That sounds pretty good to me, though I think it's an open question whether or not what you're talking about is possible. That is, a UCMA theorist would accuse you of begging the question if you assumed at the outset that the above is a possibility.
Replies from: DaFranker↑ comment by DaFranker · 2013-05-02T21:00:03.050Z · LW(p) · GW(p)
It's only an open question insofar as what counts as "minds" and "arguments" remains shrouded in mystery.
I'm rather certain that for a non-negligible fraction of all minds, the entire concept of "arguments" is nonsensical. There is, after all, no possible combination of inputs (or "arguments") that will make the function "SomeMind(): Print 3" output that it is immoral to tube-feed chickens.
Replies from: None↑ comment by [deleted] · 2013-05-02T21:53:41.978Z · LW(p) · GW(p)
I'm rather certain that for a non-negligible fraction of all minds, the entire concept of "arguments" is nonsensical.
Why are you certain of this?
Replies from: DaFranker↑ comment by DaFranker · 2013-05-03T14:20:07.311Z · LW(p) · GW(p)
Because of my experience with programming and working with computation, I find it extremely unlikely that, out of all possible things, the specific way humans conceptualize persuasion and arguments would be a necessary requirement for any "mind" (which I take here as a 'sentient' algorithm in the largest sense) to function.
If the way we process these things called "arguments" is not a requirement for a mind, then there almost certainly exists at least one logically-possible mind design which does not have this way of processing things we call "arguments".
As another intuition, if we adopt the Occam/Solomonoff philosophy for what is required to have a "mind", then something as complicated as the process of understanding arguments - being affected, influenced or persuaded by them, running them through filters, comparing them with prior knowledge, and so on until some arguments convince and others do not - being required of all possible minds, as a component of an already-complex system called "minds", sounds much rarer in the realm of all possible universes than universes where simpler minds exist that do not have this property of understanding arguments and being moved by them.
Replies from: None↑ comment by [deleted] · 2013-05-03T17:28:13.832Z · LW(p) · GW(p)
I find it extremely unlikely that, out of all possible things, the specific way humans conceptualize persuasion and arguments would be a necessary requirement for any "mind" (which I take here as a 'sentient' algorithm in the largest sense) to function.
I don't have any experience with programming at all, and that may be the problem: I just don't see these reasons. To my mind (ha) a mind incapable of processing arguments - which is to say holding reasons in rational relations to each other, or connecting premises and conclusions up in justificatory relations, or whatever - isn't reasonably called a mind. This may just be a failure of imagination on my part. So...
As another intuition, if we adopt the Occam/Solomonoff philosophy for what is required to have a "mind"
Could you explain this? I'm under the impression that being capable of Solomonoff induction requires being capable of 1) holding beliefs, 2) making inferences about those beliefs, 3) changing beliefs. Yet this seems to me to be all that is required for 'understanding and being convinced by an argument'.
Replies from: DaFranker↑ comment by DaFranker · 2013-05-03T18:47:28.175Z · LW(p) · GW(p)
In my limited experience, UCMA supporters explicitly rejected the assertion that "arguments" and "being convinced by an argument" are equivalent to "evidence" and "performing a Bayesian update on evidence". So those three would be enough for evidence and updates, but not enough for argument and persuasion, according to my best guess of what they mean by "argument" and "convinced".
For one, you need some kind of input system, and some process that looks at this input and connects it to pieces of an internal model, which requires an internal model and some structure that sends signals from the input to the process, and some structure where the process has modification access to other parts of the mind (to form the connections and perform the edits) in some way.
Then you need something that represents beliefs, and some weighing or filtering system where the elements of the input are judged (compared to other nodes in the current beliefs) and then evaluated using a bunch of built-in or learned rules (which implies having some rules of logic built into the structure of the mind, or the ability to learn such rules, both of which are non-trivial complexity-wise), and then those evaluations organized in a way where it can be concluded whether the argument is sound or not, and the previous judgments of the elements integrated so that it can be concluded whether the premises are also good, and then the mind also requires this result to send a signal to some dynamic process in the brain that modus ponens the whole thing into using the links to the concepts and beliefs to update and edit them to the new values prescribed by the compelling argument.
Whew, that's a lot of stuff that we need to design into our mind, stuff that seems completely unnecessary for a mind to have sentience, as far as I can tell. I sure hope we don't live in the kind of weird universe where sentience necessarily implies or requires all of the above!
Which is where Occam/SI comes in. All of the above is weird, very specific, and extremely complex in most machine designs I can think of. Sentience is itself complex, but doesn't seem to require the above as far as we can tell. Positing that minds also require all these additional complexities seems like a very bad idea. Statistically, 'A' is always at least as likely as 'A and B'. Positing UCMA is a bit akin to positing 'A and B and C and Fk but not Re and not any of Ke through Kz and L1..273 except L22'.
Replies from: None↑ comment by [deleted] · 2013-05-03T19:29:12.935Z · LW(p) · GW(p)
In my limited experience, UCMA supporters explicitly rejected the assertion that "arguments" and "being convinced by an argument" are equivalent to "evidence" and "performing a Bayesian update on evidence".
Eh, for the UCMA arguments I'm familiar with, they would be happy to work within the (excellent) Solomonoff framework as long as you allowed for probabilities of 0 and 1. I get that this isn't an unproblematic allowance, but nothing about the math actually requires us to exclude probabilities of 0 and 1 (so far as I understand it).
Whew, that's a lot of stuff that we need to design into our mind that seems completely unnecessary for a mind to have sentience, as far as I can tell.
What is necessary? It'll pay off for us to get this on the table.
Replies from: DaFranker↑ comment by DaFranker · 2013-05-03T20:16:13.497Z · LW(p) · GW(p)
What is necessary? It'll pay off for us to get this on the table.
If we knew exactly, someone would have a Nobel for it and the nonperson predicate would be a solved problem by now, along with the Hard Problem of Consciousness and a throng of other things currently puzzling scientists the world over.
However, we do have a general idea of the direction to take, with an example here of some of the things involved. There's still the whole debate and the questions around the so-called "hard problem of consciousness", but overall it doesn't even seem as if the ability to communicate is required for consciousness or sentience, let alone the ability to parse language in a form remotely close to ours, or one that allows anything akin to an argument as humans use the word.
But past that point, the argument is no longer about UCMAs, and becomes about morality engines (and whether morality or something akin to it must exist in all minds), consciousness, what constitutes an 'argument' and 'being convinced', and other things humans still understand very little about.
Replies from: None↑ comment by [deleted] · 2013-05-03T20:39:52.711Z · LW(p) · GW(p)
Okay, I see the problem. Let's say this: within the whole of mind-space there is a subset of minds capable of morally-evaluable behavior. For all such minds, the UCMA is true. This may be a tiny fraction, but the UCMAist won't be disturbed by that: no UCMAist would insist that the UCMA is UC for minds incapable of anything relevant to morality. How does that sound?
Replies from: DaFranker↑ comment by DaFranker · 2013-05-03T20:50:18.157Z · LW(p) · GW(p)
This sounds like a good way to avoid the heavyweight problems with all the consciousness debates, so it seems like a good idea.
However, it retains the problem of defining "morality", which is still unresolved. UCMAists will argue from theories of morality where UC is an element of the theory, while E.Y. already assumes a different metaethics where there are no clear boundaries of human "morality" and where morality-in-the-way-we-understand-it is a feature of humans exclusively, and other things might have things akin to morality that are not morality, and some minds would be able to evaluate moral behaviors without caring about morality in the slightest, while some other minds that we might consider morally important would completely ignore any "UCMA" that would otherwise compel any human.
↑ comment by falenas108 · 2013-05-02T17:19:12.639Z · LW(p) · GW(p)
Without going into the details, you could hypothesize a simple mind that automatically rejects any argument. This would by itself prove the No Universally Compelling Arguments theory.
Replies from: None↑ comment by [deleted] · 2013-05-02T20:12:19.777Z · LW(p) · GW(p)
That would do it, though it may only attack a straw man: the thesis that, say, the categorical imperative is universally compelling is not the thesis that the CI is universally persuasive. Rather, I think the thought is that we are all rationally committed to the CI, whether we know or admit this or not.
Replies from: DSherron↑ comment by DSherron · 2013-05-02T20:59:38.585Z · LW(p) · GW(p)
Taboo compelling and restate. If compelling does not mean persuasive then what does it mean to you? Also taboo "committed" and "rational" - I think there's a namespace conflict over your use of rational and the common Less Wrong usage, so restate using different terms. As a hint, try and imagine what a universally compelling argument would look like. What properties does it have? How do different minds react to understanding it, assuming they are capable of doing so? For bonus points explain what it means to be rationally committed to something (without using those words or synonyms).
Also worth noting: P1 is a generalization over statements about minds, not minds.
Replies from: None↑ comment by [deleted] · 2013-05-02T21:35:51.020Z · LW(p) · GW(p)
Well, we have two options in tabooing 'compelling'. On the one hand, we could mean 'persuasive' where this means something like 'If I sat down with someone, and presented the moral argument to them, they would end up accepting it regardless of their starting view'. This seems to be a bad option, because the claim that 'there are no universally persuasive moral arguments' is trivial. No one (of significance) has ever held the contrary view.
So our other option is to take 'compelling' as something like what Kantians say about the CI, namely that every mind is committed to it, whether they accept this or not ('not' out of irrationality). As you say, this leaves us with a lot more tabooing and explaining to do. I'm happy to go on with this, since it's the sort of thing I enjoy, but it is a digression from my (perhaps confused) complaint about EY's argument. The important bit there is just that 'compelling' probably shouldn't be taken in such a way as to make EY's point trivial.
.
Replies from: DSherron↑ comment by DSherron · 2013-05-02T22:10:48.057Z · LW(p) · GW(p)
The problem here is that the second option you offer does nothing to explain what a compelling argument is; it just passes the recursive buck onto the word "committed". I know you said you recognize that, but unless we can show that this line of reasoning is coherent (let alone leads to a relevant conclusion, let alone a correct one) then there's no reason to assume that Eliezer's point isn't trivial in the end. Philosophers have believed a lot of silly things, after all. The only sensible resolution I can come up with is where you take "committed to x" to mean "would, on reflection and given sufficient (accurate) information and a great deal more intelligence, believe x". The problem is that this is still trivially false in the entirety of mindspace. You might, although I doubt it, be able to establish a statement of that form over all humans (I think Eliezer disagrees with me on the likelihood here). You could certainly not establish one given a mindspace that includes both humans and paper clip maximizers.
Replies from: None↑ comment by [deleted] · 2013-05-02T22:31:41.740Z · LW(p) · GW(p)
I know you said you recognize that, but unless we can show that this line of reasoning is coherent (let alone leads to a relevant conclusion, let alone correct) then there's no reason to assume that Eliezer's point isn't trivial in the end.
If what you're saying is this, then we agree: EY doesn't here present an argument that UCMAs are likely to be false, but he does successfully argue that a certain class of generalizations over mind-space is likely to be false (such as generalizations about what minds will find persuasive), together with the assumption that a UCMA falls into that class.
If that's the line, then I think the argument is sound so far as it goes. UCMA enthusiasts (I am not among them, but I know them well) will not accept the final assumption, but you may be right that the burden is on them to show that UCMAs (whatever 'compelling' is supposed to mean) do not fall into this class.
Alternatively, we could just posit that we're only arguing against those people who do accept the assumption, that is those people who do take 'compelling' in UCMA to mean something like 'immediately persuasive', but then we're probably tilting at windmills.
Replies from: DSherron↑ comment by DSherron · 2013-05-03T00:00:04.138Z · LW(p) · GW(p)
I suspect that our beliefs are close enough to each other at this point that any perceived differences are as likely to be due to minor linguistic quibbles as to actual disagreement. Which is to say, I wouldn't have phrased it like you did (had I said it with that phrasing I would disagree) but I think that our maps are closer than our wording would suggest.
If anyone who does think they have a coherent definition for UCMA that does not involve persuasiveness (subject to the above taboos) wants to chime in I'd love to hear it. Otherwise, I think the thread has reached its (happy) conclusion.
Replies from: None↑ comment by [deleted] · 2013-05-03T00:20:37.827Z · LW(p) · GW(p)
If anyone who does think they have a coherent definition for UCMA that does not involve persuasiveness (subject to the above taboos) wants to chime in I'd love to hear it.
I'll give it a shot: an argument is universally compelling if no mind both a) has reasons to reject it, and b) has coherent beliefs. This is to say that a mind can only believe that the argument is false by believing a contradiction.
Replies from: DSherron↑ comment by DSherron · 2013-05-03T01:09:11.592Z · LW(p) · GW(p)
I'll give it a shot: an argument is universally compelling if no mind both a) has reasons to reject it, and b) has coherent beliefs. This is to say that a mind can only believe that the argument is false by believing a contradiction.
I think this may sound stronger than it actually is, for the same reasons that you can't convince an arbitrary mind who does not accept modus ponens that it is true.
More to the point, recall that one rationalist's modus tollens is another's modus ponens. This definition is defeated by any mind who possesses a strong prior that the given UCMA is false, and is willing to accept any and all consequences of that fact as true (even if doing so contradicts mathematical logic, Occam's Razor, Bayes, or anything else we take for granted). This prior is a reason to reject the argument (every decision to accept or reject a conclusion can be reduced to a choice of priors), and since it is willing to abandon all beliefs which contradict its rejection it will not hold any contradictory beliefs. It's worth noting that "contradiction" is a notion from formal logic which not all minds need to hold as true; this definition technically imposes a very strong restriction on the space of all minds which have to be persuaded. The law of non-contradiction (~(A ^ ~A) ) is a UCMA by definition under that requirement, even though I don't hold that belief with certainty.
The arbitrary choice of priors, even for rational minds, actually appears to defeat any UCMA definition that does not beg the question. Of course, it is also true that any coherent definition begs the question one way or another (by defining which minds have to be persuaded such that it either demands certain arguments be accepted by all, or such that it does not). Now that I think about it, that's the whole problem with the notion from the start. You have to define which minds have to be persuaded somewhere between a tape recorder shouting "2 + 2 = 5!" for eternity and including only your brain's algorithm. And where you draw that line determines exactly which arguments, if any, are UCMAs.
And if you don't have to persuade any minds, then I hesitate to permit you to call your argument "universally compelling" in any context where I can act to prevent it.
Replies from: TimS, None↑ comment by TimS · 2013-05-03T01:56:51.891Z · LW(p) · GW(p)
Might we say something like:
the argument "A implies B" is not universally compelling unless every rational agent must accept that "P(B | A) > P(B | !A)"
More colloquially, one property of universally compelling evidence might be that all rational agents must agree on the particular direction a particular piece of evidence should adjust a particular prior.
Replies from: DSherron↑ comment by DSherron · 2013-05-03T04:55:34.493Z · LW(p) · GW(p)
You're just passing the recursive buck over to "rational". Taboo rational, and see what you get out; I suspect it will be something along the lines of "minds that determine the right direction to shift the evidence in every case", which, notably, doesn't include humans even if you assume that there is an objectively decidable "rational" direction. There is no objectively determinable method to determine what the correct direction to shift is in any case; imagine an agent with anti-Occamian priors, who believes that because the coin has come up heads 100 times in a row, it must be more likely to come up tails next time. It's all a question of priors.
Replies from: TimS↑ comment by TimS · 2013-05-08T01:49:55.205Z · LW(p) · GW(p)
I think there is an objectively right direction to shift, given particular priors. Your anti-regularity observer seems to be making a mistake by becoming more confident if he actually sees heads come up next.
Also, I edited my post above to fix a notational error.
↑ comment by [deleted] · 2013-05-03T17:18:00.172Z · LW(p) · GW(p)
This prior is a reason to reject the argument (every decision to accept or reject a conclusion can be reduced to a choice of priors), and since it is willing to abandon all beliefs which contradict its rejection it will not hold any contradictory beliefs.
You're right that I am committed to denying this, though I would also point out that it does not follow a priori that it is always possible to resolve the state of having contradictory beliefs by rejecting either side of a contradiction arbitrarily. However, in order to deny the above, I must claim that there are some beliefs a mind holds (or is committed to, where this means that these beliefs are deductively provable from what the mind does believe) just in virtue of being a mind. I'll bite that bullet, and claim that there exists a UCMA of this kind. I also think the Law of Non-Contradiction is a UCA, and in fact it's trivially so on my definition, but I think that'll hold up: there are no Bayesian reasons to think that ascribing it a probability of 1 is a problem, and I do think I can defend the claim that evidence against it is a priori impossible (EY's example reasons for doubt in the two articles you cite wouldn't apply in this case).
You have to define which minds have to be persuaded somewhere between a tape recorder shouting "2 + 2 = 5!" for eternity and including only your brain's algorithm. And where you draw that line determines exactly which arguments, if any, are UCMAs.
This isn't a problem on my definition of a UCA. My understanding of a UCA (which I think represents an honest-to-god position, namely Kant's) is consistent with any given mind believing the UCA to be false, perhaps because of reasons like the tape-recorder. Only, such a mind couldn't have consistent beliefs.
And if you don't have to persuade any minds, then I hesitate to permit you to call your argument "universally compelling" in any context where I can act to prevent it.
Remember that my definition of a UCMA isn't 'any mind under any circumstances could always be persuaded'. To attack this view of UCMAs is, I think, to attack a strawman. If we must take UCMAs to be arguments which are universally and actually persuasive for any mind in any circumstance in order to see EY's point (here or elsewhere) as valid, then this is a serious critique of EY.
Replies from: DSherron↑ comment by DSherron · 2013-05-03T18:54:24.153Z · LW(p) · GW(p)
Be very, very cautious assigning probability 1 to the proposition that you even understand what the Law of Contradiction means. How confident are you that logic works like you think it works, and that you're not just spouting gibberish even though it seems from the inside to make sense? If you'd just had a major concussion, with severe but temporary brain damage, would you notice? Are you sure? After such damage you might claim that "if bananas then clocks" was true with certainty 1, and feel from the inside like you were making sense. Don't just dismiss minds you can't empathize with (meaning minds which you can't model by tweaking simple parameters of your self-model) as not having subjective experiences that look, to them, exactly like yours do to you. You already know you're running on corrupted hardware; you can't be perfectly confident that it's not malfunctioning, and if you don't know that then you can't assign probability 1 to anything (on pain of being unable to update later).
Again, though, you've defined the subspace of minds which have to be persuaded in a way which defines precisely which statements are UCAs. If you can draw useful inferences on that set of statements then go for it, but I don't think you can. Particularly worth noting is that there's no way any "should" statement can be a UCA because I can have any preferences I want and still fit the definition, but "should" statements always engage with preferences.
Replies from: None↑ comment by [deleted] · 2013-05-03T19:09:33.683Z · LW(p) · GW(p)
How confident are you that logic works like you think it works, and that you're not just spouting gibberish even though it seems from the inside to make sense?
I'm not even 90% sure of that, but I am entirely certain that the LNC is true: suppose I were to come across evidence to the effect that the LNC is false. But in the case where the LNC is false, the evidence against it is also evidence for it. In fact, if the LNC is false, the LNC is provable, since anything is provable from a contradiction. So if it's true, it's true, and if it's false, it's true. So it's true. This isn't entirely uncontroversial; there is Graham Priest, after all.
Particularly worth noting is that there's no way any "should" statement can be a UCA because I can have any preferences I want and still fit the definition, but "should" statements always engage with preferences.
I'll channel Kant here, cause he's the best UCMAist I know. He would say that almost all 'should' statements involve preferences, but not all. Most 'should' statements are hypothetical: If you want X, do Y. But one, he says, isn't, it's categorical: Do Y. But there's nothing about 'should' statements which a priori requires the input of preferences. It just happens that most of them (all but one, in fact) do.
Now, Kant actually doesn't think the UCMA is UC for every mind in mind-space, though he does think it's UC for every mind capable of action. This is just to say that moral arguments are themselves only applicable to a subset of minds in mind-space, namely (what he calls) finite minds. But that's a pretty acceptable qualification, since it still means the UCMA is UC for everything to which morality is relevant.
Replies from: DSherron↑ comment by DSherron · 2013-05-03T22:13:14.355Z · LW(p) · GW(p)
I'm not even 90% sure of that, but I am entirely certain that the LNC is true: suppose I were to come across evidence to the effect that the LNC is false. But in the case where the LNC is false, the evidence against it is also evidence for it. In fact, if the LNC is false, the LNC is provable, since anything is provable from a contradiction. So if it's true, it's true, and if it's false, it's true. So it's true. This isn't entirely uncontroversial; there is Graham Priest, after all.
You say you're not positive that you know how logic works, and then you go on to make an argument using logic for how you're certain about one specific logical proposition. If you're just confused and wrong, full stop, about how logic works, then you can't be sure of any specific piece of logic; you may just have an incomplete or outright flawed understanding. It's unlikely, but not impossible.
Also, you seem unduly concerned with pointing out that your arguments are not new. It's not counterproductive, but neither is it particularly productive. Don't take this as a criticism or argument, more of an observation that you might find relevant (or not).
The Categorical Imperative, in particular, is nonsense, in at least 2 ways. First, I don't follow it, and have no incentive to do so. It basically says "always cooperate on the prisoner's dilemma," which is a terrible strategy (I want to cooperate iff my opponent will cooperate iff I cooperate). It's hardly universally compelling since it carries neither a carrot nor a stick which could entice me to follow it. Second, an arbitrary agent need not care what other minds do. I could, easily, prefer that a) I maximize paperclips but b) all other agents maximize magnets. These are not instrumental goals; my real and salient terminal preferences are over the algorithms implemented not the outcomes (in this case). I should break the CI since what I want to do and what I want others to do are different.
Also, should statements are always descriptive, never prescriptive (as a consequence of what "should" means). You can't propose a useful argument of the sort that says I should do x as a prescription. Rather you have to say that my preferences imply that I would prefer to do x. Should is a description of preferences. What would it even mean to say that I should do x, but that it wouldn't make me happier or fulfill any other of my preferences, and I in fact will not do it? The word becomes entirely useless except as an invective.
I don't really want to go into extreme detail on the issues with Kantian ethics; I'm relatively familiar with it after a friend of mine wrote a high school thesis on Kant, but it's full of elementary mistakes. If you still think it's got legs to stand on, I recommend reading some more of the sequences. Note that human morality is written nowhere except in our brains. I'm tapping out, I think.
Replies from: None↑ comment by [deleted] · 2013-05-04T00:11:52.444Z · LW(p) · GW(p)
I'm tapping out, I think.
Okay, fair enough. You've indulged me quite a ways with the whole UCMA thing, and we finished our discussion of EY's sequence argument, so thanks for the discussion. I've spent some years studying Kant's ethical theory though, so (largely for my own enjoyment) I'd like to address some of your criticisms of the CI in case curiosity provokes you to read on. If not, again, thanks.
I don't really want to go into extreme detail on the issues with Kantian ethics; I'm relatively familiar with it after a friend of mine wrote a high school thesis on Kant, but it's full of elementary mistakes.
This conclusion should set off alarm bells: if I told you I'd found a bunch of elementary mistakes in the sequences, having never read them but having discussed them with an acquaintance, you would bid me caution.
First, I don't follow it, and have no incentive to do so.
The issue of incentive is one that Kant really struggles with, and much of his writing on ethics following the publication of the Groundwork for the Metaphysics of Morals (where the CI is introduced) is concerned with this problem. So while on the one hand, you're correct to think that this is a problem for Kant, it's also a problem he spent a lot of time thinking about himself. I just can't do it any justice here, but very roughly Kant thinks that in order to rationally pursue happiness, you have to pursue happiness in such a way that you are deserving of it, and only by being morally good can you deserve happiness. This sounds very unconvincing as read, but Kant's view on this is both sophisticated and shifting. I don't know that he felt he ever had a great solution, and he died writing a book on the importance of our sense of aesthetics and its relation to morality.
It basically says "always cooperate on the prisoner's dilemma,"
The CI is not a decision theory, nor is a decision theory a moral theory. It's important not to confuse the two. If you gave Kant the prisoner's dilemma, he would tell you to always defect, because you should always be honest. You would be annoyed, because he's mucking around with irrelevant features of the setup, and he would point out to you that the CI is a moral theory and that the details of the setup matter. The CI says nothing consistent or interesting about the prisoner's dilemma, nor should it.
I could, easily, prefer that a) I maximize paperclips but b) all other agents maximize magnets.
You could, and that's how preferences work. So there could be no universal hypothetical imperative. But the categorical imperative doesn't involve reference to preferences. If you take yourself to have a reason to X, which makes no reference to preferences (terminal or otherwise), you at the same time take any arbitrary reasoner to have a reason to X. Suppose, for comparison, that a set of minds in mind-space happened to (against whatever odds) have exactly the same evidence as you for a proposition. You couldn't coherently believe that you had reason to believe the proposition, but that they did not. Reasons don't differentiate between reasoners that way.
You may think imperatives always make reference to preferences, but this is an argument you'd have to have with Kant. It's not a priori obvious or anything, so it's not enough to state it and say 'Kant is wrong'.
I should break the CI since what I want to do and what I want others to do are different.
The CI is not the claim that everyone should do what you want to do. The CI is the demand (essentially) that you act on reasons. The structure of reasons (like the fact that reasons don't discriminate between reasoners) gives you the whole 'universal' bit.
↑ comment by DaFranker · 2013-05-02T20:03:15.727Z · LW(p) · GW(p)
Oh my, the confusion.
First off, the quoted argument was, as far as I can tell, entirely meant as an illustrative abstraction. The culprit here is the devious function X().
Suppose I take the set of all possible logically coherent statements that could be made about any given mind. Within this set, 'X' is any given statement about one mind. X(m) represents whether this given statement is True, False or Undefined / Undecidable for this mind 'm'.
For all X1..Xn, for a given mind 'm1', find all the X that are true. Then do the same for m2. Supposing any given X has a 50% probability of being true of any given m, then X1 being true for m1 has probability 0.5, being true for both m1 and m2 has probability 0.25, and so on, halving the probability for each additional mind for which the conjunction must hold.
So for any given X, across m1..m(10^12), X has only a (1/2)^(10^12) probability of being true of all of them, if we assume an a priori 50% chance of the statement being true of each mind.
Conversely, any given X will be true of at least one m with probability 1 - (1/2)^(10^12), which is as close to certainty as makes no difference.
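A minimal numerical sketch of that compounding effect, keeping the purely illustrative 50% figure (the function names and mind counts below are mine, chosen only for the example):

```
# Toy model: p is the assumed chance that an arbitrary statement X holds
# of any one mind, with the minds treated as independent.
def prob_true_of_all(p, n_minds):
    return p ** n_minds

def prob_true_of_at_least_one(p, n_minds):
    return 1 - (1 - p) ** n_minds

for n in (2, 10, 100, 1000):
    print(n, prob_true_of_all(0.5, n), prob_true_of_at_least_one(0.5, n))

# prob_true_of_all collapses exponentially (already ~9e-302 at n = 1000),
# while prob_true_of_at_least_one is indistinguishable from 1.
```

At 10^12 minds the "true of all of them" figure is far smaller than anything worth writing down, which is the whole point of the calculation.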
The central inference in the argument is this: we do not know the structure of 'all possible minds' or of 'all possible arguments', but it is reasonable to believe that the space of possible minds is so large and varied that no statement of the form "A(m) is true if mind 'm', when presented with the argument A, changes some specific belief / internal state / thought / behavior to state Y" holds for every m. This latter part of the argument rests mostly on the following reasoning:
If there is any given argument A that will convince all currently known minds such that A(m) is true, and all known minds accept the argument, we can almost certainly construct a mind nearly identical to one of these m, but for which the input A is forbidden, or that will self-destruct immediately upon receiving it, or where the first and only possible result of A(m) is always false, or where arguments of the form which A takes are simply incoherent.
For example, if A is a UCMA of the form where you speak or write certain words and have the listener hear or read them, make the listener unable to understand the language. If the UCMA implies translation into a language the listener understands, craft the listener so that it does not understand or possess any language at all. If the UCMA is some form of complex hacking by specifying complicated inputs that abuse internal properties of the mind in question... well, how likely is it that all possible minds share those exact internal properties?
This is a complex subject since "minds" and "arguments" are loaded terms with lots of anthropomorphic stuff hidden behind them, and much of the actual meat of the problem lies in things philosophers still disagree on despite overwhelming probability.
Replies from: None↑ comment by [deleted] · 2013-05-02T23:12:01.838Z · LW(p) · GW(p)
Excellently explained, thank you. The argument you present seems to me to be on the whole reasonable, but it involves two assumptions no UCMA enthusiast I know of would ever accept.
Supposing any given X has 50% probability of being true of any given m...
And
"A(m) is true if mind 'm', when presented with the argument A, will change some specific belief / internal state / thought / behavior to state Y"
These two assumptions aren't argued for, nor are they attributed to any UCMA enthusiast, so I can't see any reason why the UCMAist should accept them. Do they seem plausible to you? If so, can you give me reasons to accept them?
Replies from: DaFranker↑ comment by DaFranker · 2013-05-03T14:31:15.331Z · LW(p) · GW(p)
Supposing any given X has 50% probability of being true of any given m (...)
This isn't a direct claim of fact, but a flat assumption to simplify the illustration and the calculations. The same argument extends to any probability: mathematically, no matter what the probability is (as long as it is strictly between 0 and 1), the chance of X being true for all minds shrinks toward zero as the number of possible minds grows toward infinity.
The hidden assumption behind this, of course, is that I have a high prior that the number of different possible minds is high enough, and the per-mind probability low enough, for this compounding effect to become decisive. Since the number of known different minds already exceeds seven billion, and any given random statement of the form "Mind 'm' believes that X is immoral under Y circumstances or context Z" is extremely unlikely to be true of any given mind (and, as above, shrinks to nigh-infinitesimal when conjoined across all seven billion), I think this hidden assumption is a very reasonable one. An example for clarity:
Mind M ( John B Gato ) believes ( Looking at Jello ) is immoral in context ( Five days and seven minutes after every new moon for a period of three hours, or while scratching one's toe. )
Of course, this is an intuition about moral beliefs, not about being-convinced-by-arguments, but it's an intuition about the diversity of ways human minds process arguments that hints at the possible diversity among non-human mind structures.
"A(m) is true if mind 'm', when presented with the argument A, will change some specific belief / internal state / thought / behavior to state Y"
This is my own model / abstraction of Argument - Mind - Belief/Action. If a UCMA supporter does not believe that arguments lead to any change of belief or behavior in a mind once the argument is made to that mind, then that seems to directly contradict the very idea of a universally compelling argument that persuades any mind.
So the quote above is a model for "If the mind is compelled by the argument, it will have a certain property which allows the argument to compel it" (this property may be a simple emergent property of all possible minds that arises from a combination of the basic necessary properties of all minds, or might be implicit in the structure of logic, which I reckon is a main argument for some of the more sophisticated UCMA supporters).
Replies from: None↑ comment by [deleted] · 2013-05-03T17:41:52.166Z · LW(p) · GW(p)
This isn't a direct claim of fact, but...
It seems very unlikely to me that a UCMA enthusiast would grant that a UCMA has in any given case only a fifty percent chance of being UC. So to assume this begs the question against them. It may be that the UCMAist is being silly here, or that the burden is on them to show that things are otherwise, but that's not relevant to the question of the strength of EY's argument against UCMAs.
...I think this hidden assumption is a very reasonable one.
It is, but it's a bit too reasonable: that is, it's unreasonable to think that the UCMAist actually believes the UCMA is already explicitly accepted by everyone, or even that everyone could be immediately or under any circumstances persuaded that the UCMA is true. UCMAs on this conception are obviously false, but then EY's argument is wholly trivial. Nor would we need an argument: it is not hard to come up with a single case of moral disagreement, and that's all that would be necessary. But this would be to attack a strawman.
The UCMAist is committed to some sense being given to the UC bit, you're right. If we go to an actual UCMAist, like Kant, the explanation looks something like this: People say all sorts of things about their moral beliefs, but no one could have reasons to doubt the UCMA while holding consistent beliefs. This means that in principle, any mind could be persuaded to accept the UCMA, but not any mind under any circumstances. I (Kant) am committed to saying that every mind is so structured that the UCMA is an unavoidable deductive conclusion, not that every mind in every circumstance has or would arrive at the UCMA. So if this is what being a UCMA means:
"A(m) is true if mind 'm', when presented with the argument A, will change some specific belief / internal state / thought / behavior to state Y"
Then yes, UCMAs are impossible. But no one has ever thought otherwise, and it remains open whether something very much like them, namely moral arguments which every possible mind is committed to accepting (whether or not they do accept them), is possible.
Replies from: DaFranker↑ comment by DaFranker · 2013-05-03T18:25:16.114Z · LW(p) · GW(p)
It seems very unlikely to me that a UCMA enthusiast would grant that a UCMA has in any given case only a fifty percent chance of being UC. So to assume this begs the question against them. It may be that the UCMAist is being silly here, or that the burden is on them to show that things are otherwise, but that's not relevant to the question of the strength of EY's argument against UCMAs.
No no no. The point of the argument is that it doesn't matter what the probability is. Even if it's not 50%, the dynamics at work still make us end up with an exponentially small probability that something is universally compelling, just with the raw math.
The burden is on the UCMAist to show that there are structural reasons why minds must necessarily have certain properties that also happen to coincide with the ability to receive, understand, and be convinced by arguments, and also coincide with the specific pattern where at least one specific argument will result in the same understanding and the same resulting conviction for all possible minds.
Both of these are a priori extremely unlikely due to certain intuitions about physics and algorithms and due to the mathematical argument Eliezer makes, respectively.
namely moral arguments which every possible mind is committed to accepting (whether or not they do accept it) is possible.
I'd require clarification on what is meant by "committed to accepting" here. They accept the argument and change their beliefs, or they do not accept the argument and do not change their beliefs. For either case, they either do this in all situations or only some situations. They may sometimes accept it and sometimes not accept it.
The Kant formulation you give seems explicitly about humans, only humans and exclusively humans and nothing else. The whole point of EY's argument against UCMAs is that there are no universally compelling arguments you could make to an AI built in a manner completely alien to humans that would convince the AI that it is wrong to burn your baby and use its carbon atoms to build more paperclips, even if the AI is fully sentient and capable of producing art and writing philosophy papers about consciousness and universally-compelling moral arguments.
There's other things I'd say are just wrong about the way this description models minds, but I think that for now I'll stop here until I've read some actual Kant or something.
Replies from: None↑ comment by [deleted] · 2013-05-03T19:00:27.413Z · LW(p) · GW(p)
The point of the argument is that it doesn't matter what the probability is.
Right, but I can't imagine a UCMAist thinking this is a matter of probability. That is, the UCMAist will insist that this is a necessary feature of minds. The burden may be up to them, but that's not EY's argument (it's not an argument against UCMAs at all). And I took EY to be giving an argument to the effect that UCMAs are false or at least unlikely. You may be right that EY has successfully argued that if one has no good reasons to believe a UCMA exists, the probability of one existing must be assessed as low. But this isn't a premise the UCMAist will grant, so I don't know what work that point could do.
The Kant formulation you give seems explicitly about humans, only humans and exclusively humans and nothing else.
You might be able to argue that, but that's not the way Kant sees it. Kant is explicit that this applies to all minds in mind-space (he kind of discovered the idea of mind-space, I think). As to what 'committed to accepting' means, you're right that this needs a lot of working out, working out I haven't done. Roughly, I mean that one could not have reasons for denying the UCMA while having consistent beliefs. Kant has to argue that it is structural to all possible minds to be unable to entertain an explicit contradiction, but that's at least a relatively plausible generalization. Still, tall order.
On the whole, I entirely agree with you that a) the burden is on the UCMAist, b) this burden has not been satisfied here or maybe anywhere. I just wanted to raise a concern about EY's argument in this post, to the effect that it either begs the question against the UCMAist, or that it is invalid (depending on how it's interpreted). The shortcomings of the UCMAist aren't strictly relevant to the (alleged) shortcomings of EY's anti-UCMAist argument.
comment by ThereIsNoJustice · 2013-05-02T00:59:49.130Z · LW(p) · GW(p)
Does anyone know the terms for the positions for and against in the following scenario?:
Let's assume you have a one in a million chance of winning the lottery. Despite the poor chance, you pay five dollars to enter, and you win a large sum of money. Was playing the lottery the right choice?
Replies from: fubarobfusco, gothgirl420666, moridinamael, army1987, Emile, latanius, Zaine↑ comment by fubarobfusco · 2013-05-02T01:24:02.239Z · LW(p) · GW(p)
Well, I would call them "expected value" and "hindsight".
Hindsight says, "Because we got a good result, it's all good."
Expected value says, "We got lucky, and cannot expect to get lucky again."
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-05-02T07:02:45.571Z · LW(p) · GW(p)
Rational Inquirer says "The world gave me a surprise. Is there something I can learn from this surprise?"
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-05-02T13:48:11.756Z · LW(p) · GW(p)
And then it says, "We learned something about the random variables that led to that lottery draw. This doesn't generalize well."
↑ comment by gothgirl420666 · 2013-05-02T02:16:15.075Z · LW(p) · GW(p)
I don't know if there are terms for the positions, but it seems pretty obvious that this is just a question of how you define "right choice". Not playing the lottery was the choice that seemed to maximize your utility given your knowledge at the time. Playing the lottery was the choice that actually maximized your utility. Which one you decide to call "right" is up to you. I think calling the former right is a little more useful because it describes how to actually make decisions, while the latter is only useful for looking back on decisions and evaluating them.
↑ comment by moridinamael · 2013-05-02T01:35:50.664Z · LW(p) · GW(p)
In decision theory, the "goodness" or "badness" of a decision is divorced from its actual outcome. Buying the lottery ticket was a bad decision regardless of whether you win.
However, don't forget that the utility you assign to money doesn't scale linearly with the amount. People tend to forget this. There is no rule that says you have to prefer a certain $10 to a 10% chance at $100; on the contrary, it would be unusual to have a perfectly linear utility function over money.
It's possible that your valuation of $5 is essentially 'nothing,' while your valuation of $1 million is 'extremely high.' If you'll permit me to construct a ridiculous scenario: let's say that you're guaranteed an income of $5 a day by the government, that you have no other way of obtaining steady income due to a disability, and that your living expenses are $4.99 per day. You will never be able to save $1 million; even if you save 1c per day and invest it as intelligently as possible, you will probably never accumulate $1 million. Let's further assume that you will be significantly happier if you could buy a particular house which costs exactly $1 million. If we take this artificial example, then it may be rational, or "a good decision" to play the lottery some fraction of the time, since it is essentially the only chance you have of obtaining your dream house.
e: In case the downvote is due to a belief that I am wrong in my assertions, I am prepared to provide citations and calculations to verify everything in this comment. Unexplained downvotes drive me nuts particularly when I know I'm right.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-05-02T02:29:45.847Z · LW(p) · GW(p)
In decision theory, the "goodness" or "badness" of a decision is divorced from its actual outcome.
How does this interact with the idea that rationalists should win?
Replies from: moridinamael, AspiringRationalist, MrMind, AlexSchell, army1987↑ comment by moridinamael · 2013-05-02T02:56:10.836Z · LW(p) · GW(p)
Since we're talking about probabilistic decision theories, if you consistently make "good decisions" you will still obtain "bad outcomes" some of the time. This should not be cause to start doubting your decision procedure. If you say you are 90% confident, you should be thrilled if you are wrong 10% of the time - it means you're perfectly calibrated.
A perfectly rational agent working with incomplete or incorrect information will lose some of the time. The decisions of the agent are still optimal from the agent's frame of reference.
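A throwaway simulation of that calibration point (the trial count, seed, and function are my own illustration, not anything from the thread):

```
import random

# A decision procedure that is perfectly calibrated at 90%: each of its
# "90% confident" calls succeeds with probability 0.9.
random.seed(0)
trials = 10_000
wins = sum(random.random() < 0.9 for _ in range(trials))
print(f"wrong on {trials - wins} of {trials} calls ({(trials - wins) / trials:.1%})")

# Roughly 10% of the calls come out wrong, exactly as advertised: a good
# decision procedure still produces bad outcomes some of the time.
```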
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2013-05-02T03:40:22.337Z · LW(p) · GW(p)
In decision theory, the "goodness" or "badness" of a decision is divorced from its actual outcome.
How does this interact with the idea that rationalists should win?
Rationalists should follow winning strategies. If you followed a bad strategy and got lucky, that doesn't mean you should keep following it. The relevant question is what strategy you should follow going forward.
Asking whether a particular past choice was "right" or "wrong", if the answer has no impact on your future choices seems like a wrong question.
↑ comment by MrMind · 2013-05-02T08:26:13.504Z · LW(p) · GW(p)
How does this interact with the idea that rationalists should win?
Rationalists win more by virtue of having a more accurate model of the world, and clearly this helps only in some domains, while in others only a favorable position in some kind of potential landscape matters (e.g. a beauty contest). Winning the lottery is one of those cases: buying the ticket is of course bad from a decision-theory point of view, but one can always be lucky enough to receive a great gain from a bad decision. In the same way, an irrational person can hold a correct belief by pure luck.
↑ comment by AlexSchell · 2013-05-02T04:49:45.919Z · LW(p) · GW(p)
The "divorce" is logical/conceptual, not evidential. It remains true that "rationalists should win", in the presumed intended sense that rationality wins in expectation, that winning is evidence of rationality, and that we should read the dictum a bit stronger to correct for our tendency to ascribe non-winning to bad luck.
↑ comment by A1987dM (army1987) · 2013-05-02T12:12:07.862Z · LW(p) · GW(p)
“Should” != “will always”. Once in a while, unlikely things do happen.
↑ comment by A1987dM (army1987) · 2013-05-02T12:09:59.584Z · LW(p) · GW(p)
More than two different positions, I think that's two different senses of “right”. Once you replace it with “yielding a better expected outcome given what you knew when making the choice” or “yielding a better outcome given what we know now”, people wouldn't actually disagree about anything.
(I myself prefer to use “right” with the former meaning.)
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-05-02T19:07:24.041Z · LW(p) · GW(p)
(I've seen people using “right” for the former and “lucky” for the latter, and people using “rational” for the former and “right” for the latter.)
↑ comment by Emile · 2013-05-02T07:58:12.857Z · LW(p) · GW(p)
Yet Another Comment Not Answering Your Question ....
Let's assume you have a one in a million chance of winning the lottery. Despite the poor chance, you pay five dollars to enter, and you win a large sum of money. Was playing the lottery the right choice?
A lot depends on whether this "large sum of money" is more or less than five million dollars.
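Spelling that out with the stated one-in-a-million odds and $5 ticket price (plain expected dollar value, ignoring utility curvature for the moment):

```
\mathbb{E}[\text{net gain}] = \frac{\text{prize}}{10^{6}} - 5 > 0
\quad\Longleftrightarrow\quad \text{prize} > 5{,}000{,}000 \text{ dollars}
```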
Replies from: army1987, RolfAndreassen↑ comment by A1987dM (army1987) · 2013-05-02T12:16:48.494Z · LW(p) · GW(p)
I guess that in most ordinary situations the utility of $5M isn't anywhere near 1M times the utility of $5.
↑ comment by RolfAndreassen · 2013-05-02T21:49:31.973Z · LW(p) · GW(p)
Even with positive expected value, you may be better off passing up the bet depending on your tolerance for variance and the local shape of your utility-of-money function.
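One way to make that concrete is to plug in a specific risk-averse utility function; this sketch assumes log utility of total wealth, and the wealth, prize, and odds are made-up numbers for illustration only:

```
import math

def expected_log_utility(wealth, cost, prize, p_win):
    # Expected log-utility of buying one ticket vs. skipping it,
    # under the (assumed) log-utility model of risk aversion.
    buy = p_win * math.log(wealth - cost + prize) \
        + (1 - p_win) * math.log(wealth - cost)
    skip = math.log(wealth)
    return buy, skip

buy, skip = expected_log_utility(wealth=50_000, cost=5, prize=10_000_000, p_win=1e-6)
print(buy, skip, buy < skip)

# The ticket has positive expected dollar value ($10M * 1e-6 = $10 > $5 cost),
# yet this agent prefers to skip it (buy < skip): positive EV in dollars
# does not imply positive expected utility.
```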
↑ comment by latanius · 2013-05-02T03:10:02.510Z · LW(p) · GW(p)
You won. Aren't rationalists supposed to be doing that?
As far as you know, your probability estimate for "you will win the lottery" (in your mind) was wrong. It is another question how that updates the probability of "you would win the lottery if you played next week", but whatever made you buy that ticket (even though the "rational" estimates voted against it... "trying random things", whatever it was) should be applied more in the future.
Of course, the result is quite likely to be "learning lots of nonsense from a measurement error", but you should definitely update having seen that, and a decision which, once you update on its outcome, you would make more often in the future is definitely a right one.
If I won the lottery, I would definitely spend $5 for another ticket. And eventually you might realize that it's just Omega having fun. (actually, isn't one-boxing the same question?)
↑ comment by Zaine · 2013-05-02T04:29:25.026Z · LW(p) · GW(p)
Playing the lottery was an irrational decision, but was the right choice. The outcome, as stated by moridinamael, is divorced from the decision making processes that went into it.
Assuming an unambiguous result that can only be either good or bad, if the most rational choice based upon the evidence then at hand led to a bad outcome, one still made the best (most rational) decision - but, considering the bad result, 'twas the wrong choice.
This classification is useful when determining the competency of a leader - they may have been an extremely rational decision-maker but made nothing but wrong choices due to poor quality of information.
I forget my source - as for the terms, fubarobfusco's "Hindsight" fits well, while "Expected Value" does not.
comment by CAE_Jones · 2013-05-10T16:42:45.741Z · LW(p) · GW(p)
I have one particular project I'd like to work on, that seems like it should be horribly quick and easy--done and out the door in a week. I've tried starting it a number of times, and hit one of the most unpleasant Ugh Fields that squat in mindscape (blah, even essay-related Ugh Fields I broke through well enough to complete several college courses a few times).
I'm considering just paying a competent programmer to do it. I'd probably try finding someone on ODesk, if I/someone else doesn't get to it before then.
The project is a relatively simple image viewing/editing program, comparable to MSPaint or Powerpoint97, only optimized for the visually impaired. I have enough of an idea of how it should work that I could get the image viewing part up and running in one or two days, if trying to do so didn't start my brain screaming. (Confidence of at least 0.9).
I'm leaning toward hiring someone else to program it based on my design, which brings me to two questions: about how much would that cost, and is there anyone here who would be up for it?
comment by Eneasz · 2013-05-08T22:07:01.621Z · LW(p) · GW(p)
About thinking like a Slytherin - never take things at face value. Don't answer the surface question, answer the query that motivated the question.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-08T23:31:10.077Z · LW(p) · GW(p)
So, I was going to disagree with your summary, but after reading the article, I have to qualify that. In situations like the author describes, where I'm trying to sell something (and, yes, interviews qualify), then sure, look for and answer the "deep question".... which might not be a question at all. More generally, in such situations approach every interaction as though your actual goal were to alter your interlocutor's behavior, because, well, that is your actual goal.
That being said, I prefer my life when most of my interactions with people don't have the primary goal of altering their behavior.
comment by elharo · 2013-05-14T01:01:37.937Z · LW(p) · GW(p)
When is it appropriate to move a post to Main?
When is it appropriate to submit a post to main initially?
Replies from: shminux↑ comment by Shmi (shminux) · 2013-05-14T05:08:06.064Z · LW(p) · GW(p)
When is it appropriate to submit a post to main initially?
If you are in doubt whether it is appropriate, it isn't. Err on the side of posting to Discussion.
When is it appropriate to move a post to Main?
When you get 20+ upvotes in Discussion and/or a couple of comments saying that your post is worth promoting to Main.
comment by RobertLumley · 2013-05-13T23:26:13.177Z · LW(p) · GW(p)
I initially thought I would really like this article on consciousness after death. I did not. The guy comes off as a complete crackpot, given my understanding of neurobiology. (Although I won't dispute his overall point, nor would many here, I think, that we continue to exist for a bit after we are legally dead.) I would appreciate anyone who is so motivated to look up some things on why a lot of the things he says are completely bogus. I replied to the person who sent me this article with a fairly superficial analysis, but if anyone knows of some solid studies on this, I would like to know. I will paste what I've written below:
The guy sounds progressively more insane as the article goes on. And progressively more like he doesn't know things that I learned in my intro to neurobiology class, which makes me question his credentials a bit.
That being said, he's definitely correct in that the vast majority of evidence does not suggest that consciousness stops at the moment of death, and, in fact, suggests the opposite. I'm glad someone is doing the research into this stuff, but I do wish it weren't this guy.
For some specifics:
"All the evidence we have shows an association between certain parts of the brain and certain mental processes. But it’s a chicken and egg question: Does cellular activity produce the mind, or does the mind produce cellular activity?"
This has been pretty conclusively proven. They've stuck electrodes in people's brains and turned them on and forced them to raise their arms against their "will". I don't think this leaves much room for doubt in this instance.
"Scientists have come to believe that the self is brain cell processes, but there’s never been an experiment to show how cells in the brain could possibly lead to human thought. If you look at a brain cell under a microscope, and I tell you, “this brain cell thinks I’m hungry,” that’s impossible."
Things like this make it sound like he's confused about the entire subject of neurobiology... That is impossible because no single cell is ever responsible for signalling that. Or anything close to that complex. Thoughts are interactions of whole complexes of cells. To take an analogy to the visual cortex, that would be like saying a single rod "thinks" that a laptop is in front of me right now, when that's the combined activity of millions of rods (and cones) and the cells that are connected to them, and the layers and layers of abstraction and processing of that input that goes on beyond that. And the pruning of these branches makes a huge difference in all of this. The way you actually learn what a face is is that your brain learns that if neurons 5a, 5c, 5d, and 5e are firing while neurons 5b, 5f, and 5g aren't, it's likely a face pattern. (In turn, 5a is turned on or off by the combination of 4a, 4b, and 4c, and so on, etc.)
As a wild generalization, the way neurological development works is that you start out with the neurological connections to pattern-match effectively anything you're exposed to (say you were exposed to things that look like Picasso paintings very, very often; you'd get good at recognizing them and seeing them as normal). The pathways that aren't commonly used die, and you're only left with the pathways that recognize the patterns you're exposed to a lot. Which is why you're really good at recognizing faces (absurdly, absurdly good, when you think about it) but don't instantly analyze the same subtle details of, say, a couch.
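A toy sketch of that layered picture (a trimmed-down version of the hypothetical 5a/5b/5c labels above; the wiring is invented purely to illustrate combining firing patterns, not to model real cortex):

```
def fires(active, required_on, required_off):
    # A crude threshold unit: fires only if the right inputs are on
    # and the wrong inputs are off.
    return required_on <= active and not (required_off & active)

# Layer 4 activity feeds layer 5; layer 5's pattern feeds a "face" unit.
layer4_active = {"4a", "4b", "4c"}
layer5_wiring = {
    "5a": ({"4a", "4b"}, set()),
    "5b": ({"4d"}, set()),
    "5c": ({"4c"}, {"4d"}),
}
layer5_active = {unit for unit, (on, off) in layer5_wiring.items()
                 if fires(layer4_active, on, off)}

print(layer5_active)  # 5a and 5c fire, 5b stays quiet
print(fires(layer5_active, {"5a", "5c"}, {"5b"}))  # True: "probably a face"
```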
"It could be that, like electromagnetism, the human psyche and consciousness are a very subtle type of force that interacts with the brain, but are not necessarily produced by the brain. The jury is still out."
I honestly just think he's pattern matching with the category "scientific sounding phrases". Electromagnetism is the least subtle physical force (i.e. the strongest), once you throw out the strong and weak nuclear forces. Gravity is actually the weakest force we know of (between two protons it's something like 36 orders of magnitude weaker than the electromagnetic force), but he sounds like an idiot if you replace "electromagnetism" with "gravity". And rightly so. But he does it because, to the lay reader, "electromagnetism" sounds mysterious and scientific, and "gravity" sounds dull and obvious.
And he never goes on to explain why we haven't detected this "force" with any instrumentation, even though it would be wildly easier to detect than to fully explain the interactions of all of the neurons in our brain. "It could be" that deep inside the atoms and quarks of every cell in our brains there is a tiny midget who is controlling what they do with puppet strings. But the sheer insanity of that being the way the world actually works makes me believe that it is a priori so absurdly unlikely as to not be worth considering without some really conclusive evidence.
Replies from: Zaine↑ comment by Zaine · 2013-05-17T00:58:59.485Z · LW(p) · GW(p)
"All the evidence we have shows an association between certain parts of the brain and certain mental processes. But it’s a chicken and egg question: Does cellular activity produce the mind, or does the mind produce cellular activity?"
Say you want to raise your arm. Your intent will initiate the mental processes required. We don't know how the subjective thought "Raise arm!" initiates cellular processes. Intent may be related to a function of the parietal cortex, but how thinking something initiates cellular processes is something we are unsure of. This is what they are referring to.
The brain produces an electromagnetic field. They were hypothesising that the field has a reciprocal effect on the cells that produce it, and that this effect is 'consciousness', or whatever our subjective experience communicates to initiate an action. Maybe when we can clone a human brain with green fluorescent protein we'll find out that all neurones simply initiate other neurones, and thus we function. We don't know yet.
I'd beware of dismissing an expert in a field in which one has no domain expertise - check or ask first. This is the corollary to trusting experts too much.
comment by [deleted] · 2013-05-08T21:36:34.769Z · LW(p) · GW(p)
Hello, I am a young person who recently discovered Less Wrong, HP:MOR, Yudkowsky, and all of that. My whole life I've been taught reason and science but I'd never encountered people so dedicated to rationality.
I quite like much of what I've found. I'm delighted to have been exposed to this new way of thinking, but I'm not entirely sure how much to embrace it. I don't love everything I've read although some of it is indeed brilliant. I've always been taught to be skeptical, but as I discovered this site my elders warned me to be skeptical of skepticism as well.
My problem is that I'd like an alternate viewpoint. New ideas are always refreshing, and it's certainly not healthy to constantly hear a single viewpoint, no matter how right your colleagues think they are. (It becomes even worse if you start thinking about a cult.)
Clearly, the Less Wrong community generally (unanimously?) agrees about a lot of major things. For example, religion. The vast majority of "rationalists" (in the [avoid unnecessary Yudkowsky jab] LW-based sense of the term) and all of the "top" contributors, as far as I can tell, are atheists.
Here I need to be careful to stay on topic. I was raised religious, and still am, and I'm not planning to quit anytime soon. I don't want to get into defending religion or even defending those who defend religion. My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking? If you say there aren't any, I won't believe you. I sincerely hope that you aren't afraid to expose your young ones to alternate viewpoints, as some parents and religions are. The optimal situation for you is that you've heard intelligent, thoughtful, *rational* criticism but your position remains strong.
In other words, one way to demonstrate an argument's strength is by successfully defending it against able criticism. I sometimes see refutations of pro-religious arguments on this site, but no refutations of good arguments.
Can you help? I don't necessarily expect you to go to all this trouble to help along one young soul, but most religious leaders are more than happy to. In any case, I think that an honest summary of your own weak points would go a long way toward convincing me that you guys are any better than my ministers.
Sincerely, and hoping not to be bitten, a thoughtful but impressionable youth
comment by CAE_Jones · 2013-05-02T19:16:59.783Z · LW(p) · GW(p)
I notice that most of the innovation in game accessibility (specifically accessibility to the visually impaired) comes from sighted or formerly-sighted developers. I feel like this is a bad thing. I'm not sure why I feel this way, considering that the source of innovation is less important than that it happens. Maybe it's a sort of egalitarian instinct?
(To clarify, I mean innovation in indie games like those in the audiogames.net database. Mainstream console/PC games have so little innovation toward accessibility as to be negligible, so far as I can tell.)
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2013-05-02T21:43:10.940Z · LW(p) · GW(p)
Have you adjusted for (what I assume is) the fact that most game developers are sighted? In fact, have you checked whether there even exist any not-even-formerly-sighted game developers? It seems like that would be a tough row to hoe even by the standards of blind-from-birth life.
That aside, I'm really not seeing the problem here. You're going to complain about people being altruistic towards the visually impaired? Really confused about your thought process.