Stupid Questions Open Thread Round 2

post by OpenThreadGuy · 2012-04-20T19:38:25.958Z · LW · GW · Legacy · 209 comments

From Costanza's original thread (entire text):

This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well.  Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent.  If a questioner is not sure whether a question is relevant, ask it, and also ask if it's relevant.

209 comments

Comments sorted by top scores.

comment by Joshua Hobbes (Locke) · 2012-04-22T02:21:02.078Z · LW(p) · GW(p)

What practical things should everyone be doing to extend their lifetimes?

Replies from: Turgurth, FiftyTwo, curiousepic, curiousepic, Vaniver, army1987
comment by Turgurth · 2012-04-23T03:00:25.011Z · LW(p) · GW(p)

Michaelcurzi's How to avoid dying in a car crash is relevant. Bentarm's comment on that thread makes an excellent point regarding coronary heart disease.

There is also Eliezer Yudkowsky's You Only Live Twice and Robin Hanson's We Agree: Get Froze on cryonics.

comment by FiftyTwo · 2012-04-22T03:40:41.087Z · LW(p) · GW(p)

Good question.

It's probably easier to list things they shouldn't be doing that are known to significantly reduce life expectancy (e.g. smoking). I would guess it would mainly be obvious things like exercise and diet, but it would be interesting to see the effects quantified.

Replies from: Locke
comment by Joshua Hobbes (Locke) · 2012-04-22T04:10:18.232Z · LW(p) · GW(p)

What about vitamins/medication? Isn't Ray Kurzweil on like fifty different pills? Why isn't everyone?

Replies from: Mark_Eichenlaub, satt, drethelin
comment by satt · 2012-04-22T17:58:50.718Z · LW(p) · GW(p)

It's unclear whether taking vitamin supplements would actually help. (See also the Quantified Health Prize post army1987 linked.)

Regarding medication, I'll add that for people over 40, aspirin seems to be a decent all-purpose death reducer. The effect's on the order of a 10% reduction in death rate after taking 75mg of aspirin daily for 5-10 years. (Don't try to take more to enhance the effect, as it doesn't seem to work. And you have to take it daily; only taking it on alternating days appears to kill the effect too.)
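To put that relative figure in perspective, here is a minimal back-of-the-envelope sketch converting it to an absolute risk change; the baseline mortality number is an assumption chosen for illustration, not a figure from the comment or the underlying trials.

```python
# Back-of-the-envelope: what a ~10% relative reduction in death rate looks
# like in absolute terms. The baseline is an assumed figure for illustration.

baseline_mortality = 0.06   # assumed ~6% chance of dying over the 5-10 year period
relative_reduction = 0.10   # the ~10% relative reduction discussed above

with_aspirin = baseline_mortality * (1 - relative_reduction)
print(f"without aspirin:    {baseline_mortality:.1%}")                 # 6.0%
print(f"with aspirin:       {with_aspirin:.1%}")                       # 5.4%
print(f"absolute reduction: {baseline_mortality - with_aspirin:.2%}")  # 0.60%
```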

comment by drethelin · 2012-04-22T04:26:29.316Z · LW(p) · GW(p)

Laziness and lack of information

Replies from: Locke
comment by Joshua Hobbes (Locke) · 2012-04-22T05:48:07.381Z · LW(p) · GW(p)

Isn't Less Wrong supposed to be partially about counteracting those? The topic must have come up at some point in the sequences.

Replies from: army1987
comment by curiousepic · 2012-04-26T16:18:45.490Z · LW(p) · GW(p)

I follow the "Bulletproof" diet.

comment by Vaniver · 2012-04-22T06:56:29.549Z · LW(p) · GW(p)

Basically, any effective plan boils down to diligence and clean living. But here are changes I've made for longevity reasons:

You can retain nervous control of your muscles with regular exercise; this is a good place to start on specifically anti-aging exercise.

Abdominal breathing can significantly reduce your risk of heart attacks. (The previously linked book contains one way to switch styles.)

Intermittent fasting (only eating in a 4-8 hour window, or on alternating days, or a few other plans) is surprisingly easy to adopt and maintain, and may have some (or all) of the health benefits of calorie restriction, which is strongly suspected to lengthen human lifespans (and known to lengthen many different mammal lifespans).

In general, I am skeptical of vitamin supplements as compared to eating diets high in various good things. For example, calcium pills are more likely to give you kidney stones than significantly improve bone health, but eating lots of vegetables / milk / clay is unlikely to give you kidney stones and likely to help your bones. There are exceptions: taking regular low doses of lithium can reduce your chance of suicide and may have noticeable mood benefits, and finding food with high lithium content is difficult (plants absorb it from dirt at varying rates, but knowing that the plant you're buying came from high-lithium dirt is generally hard).

Replies from: maia, John_Maxwell_IV, drethelin, army1987
comment by maia · 2012-04-25T16:04:28.092Z · LW(p) · GW(p)

Can you cite a source for your claim about lithium? It sounds interesting.

Replies from: gwern, Vaniver
comment by gwern · 2012-04-25T17:10:50.152Z · LW(p) · GW(p)

He's probably going off my section on lithium: http://www.gwern.net/Nootropics#lithium

Replies from: maia
comment by maia · 2012-04-26T00:16:00.334Z · LW(p) · GW(p)

Ah, yes. Sounds like it. Interestingly, the Quantified Health Prize winner also recommends low-dose lithium, but for a different reason: its effect on long-term neural health.

Replies from: gwern
comment by gwern · 2012-04-26T00:49:09.134Z · LW(p) · GW(p)

I don't think it's really a different reason; also, AFAIK I copied all the QHP citations into my section.

comment by Vaniver · 2012-04-25T19:39:25.511Z · LW(p) · GW(p)

Gwern's research, as linked here, is better than anything I could put together.

comment by John_Maxwell (John_Maxwell_IV) · 2012-05-28T06:16:52.944Z · LW(p) · GW(p)

Are there studies to support the abdominal breathing bit? If so, how were they conducted?

Replies from: Vaniver
comment by Vaniver · 2012-05-28T21:28:24.402Z · LW(p) · GW(p)

The one I heard about, but have not been able to find the last few times I looked for it, investigated how cardiac arrest patients at a particular hospital breathed. All (nearly all?) of them were chest breathers, and about 25% of the general adult population breathes abdominally. I don't think I've seen a randomized trial that taught some subjects how to breathe abdominally and then saw how their rates compared, which is what would give clearer evidence. My understanding of the mechanism is that abdominal breathing increases oxygen absorbed per breath, lowering total lung/heart effort.

I don't know the terms to do a proper search of the medical literature, and would be interested in the results of someone with more domain-specific expertise investigating the issue.

comment by drethelin · 2012-04-25T16:19:39.010Z · LW(p) · GW(p)

What is your method of intermittent fasting?

Replies from: Vaniver
comment by Vaniver · 2012-04-25T19:56:37.540Z · LW(p) · GW(p)

Don't eat before noon or after 8 PM. Typically, that cashes out as eating between 1 PM and 7 PM, because it's rarely convenient for me to start prepping food before noon, and I have a long habit of eating dinner at 5 to 6. On various days of the week (mostly for convenience reasons), I eat one huge meal, a big meal and a moderately sized meal, or three moderately sized meals, so my fasting period stretches from 16 hours at the shortest to ~21 hours at the longest.
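For concreteness, the window arithmetic above can be sketched as follows (the specific meal times are illustrative, not an exact record of the schedule described):

```python
# Fasting hours implied by a same-day eating window (noon-to-8-PM rule).
# The example meal times are illustrative only.

def fasting_hours(first_meal_hour, last_meal_hour):
    """Hours from the end of one day's eating to the start of the next day's."""
    return 24 - (last_meal_hour - first_meal_hour)

print(fasting_hours(12, 20))  # using the full noon-8 PM window -> 16 hour fast
print(fasting_hours(13, 19))  # eating between 1 PM and 7 PM    -> 18 hour fast
print(fasting_hours(15, 18))  # one big meal in mid-afternoon   -> 21 hour fast
```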

I'm not a particularly good storehouse of information on IF; I would look to people like Leangains or Precision Nutrition for more info.

Replies from: drethelin
comment by drethelin · 2012-04-25T21:22:26.534Z · LW(p) · GW(p)

Thank you. It seems like there are a lot of contradictory opinions on the subject :(

comment by A1987dM (army1987) · 2012-04-22T10:56:38.071Z · LW(p) · GW(p)

lots of [...] milk

I seem to recall a study suggesting that it can be bad for adults to drink lots of milk (more than a cup a day).

Replies from: tut
comment by tut · 2012-04-22T14:29:55.125Z · LW(p) · GW(p)

Bad in what way? The majority of humanity is lactose intolerant and should not drink milk for that reason. And milk contains a bunch of fat and sugar which isn't exactly good for you if you drink extreme amounts. Is that what you are talking about, or is it something new?

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-22T15:12:47.098Z · LW(p) · GW(p)

I've found it: it was in “Fear of a Vegan Planet” by Mickey Z. It suggests that milk can lower the pH of the blood, which the body compensates for by drawing calcium from the bones, citing the 1995 radio show “Natural Living”. (It doesn't look like as reliable a source to me now as I remembered it being.)

Replies from: aelephant, tut
comment by aelephant · 2012-04-23T14:46:16.784Z · LW(p) · GW(p)

I've found materials both supporting and refuting this idea. It IS possible for diet to affect your blood pH, but whether or not that affects the bones is not clear. Here are two research papers that discuss the topic: http://www.ncbi.nlm.nih.gov/pubmed/21529374 and http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3195546/?tool=pubmed

comment by tut · 2012-04-22T15:53:03.881Z · LW(p) · GW(p)

Thank you

comment by A1987dM (army1987) · 2012-04-22T10:40:25.110Z · LW(p) · GW(p)

“Everyone” is tricky, since the main causes of mortality vary with your age. Anyway, I'd say, not smoking, exercising, not being obese (nor emaciated, but in the parts of the world where most Internet users are, short of anorexia nervosa this isn't likely to be a problem), driving less and in a less aggressive way, not committing suicide... Don't they teach this stuff in high school?

Replies from: Mark_Eichenlaub, AspiringRationalist, wedrifid
comment by Mark_Eichenlaub · 2012-04-22T19:50:49.678Z · LW(p) · GW(p)

The last sentence is patronizing, and especially inappropriate in a thread about asking stupid questions.

comment by NoSignalNoNoise (AspiringRationalist) · 2012-07-30T05:09:51.352Z · LW(p) · GW(p)

Don't they teach this stuff in high school?

Yes, they do teach this stuff in high school (and middle school and elementary school for that matter), but they generally had an agenda significantly different from "give students the most accurate possible information about how to be healthy." Based on my admittedly anecdotal recollections, the main goals were to scare us as much as possible about sex and drugs and avoid having to explain anything complicated. As such, I would trust the LW community far more than what I was taught in school.

Of course, if you want to get your health advice from DARE and the Food Pyramid, I guess that's your right.

comment by wedrifid · 2012-04-22T20:32:07.507Z · LW(p) · GW(p)

Don't they teach this stuff in high school?

To the extent that a given fact about life extension can be sneered at like that, I would assume the question was intended to encompass facts at least one degree less obvious; i.e., "What practical things should everyone be doing to extend their lifetimes apart from, you know, breathing, eating, sleeping, drinking?" is implicit.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-22T21:29:40.741Z · LW(p) · GW(p)

Given the huge number of smokers and obese people, I daresay the things I said in the grandparent are not that obvious (or most people aren't interested in living longer).

Replies from: handoflixue
comment by handoflixue · 2012-04-23T22:14:42.054Z · LW(p) · GW(p)

"Obvious to the population as a whole" and "obvious to a LessWrong reader" probably differ dramatically. I don't think repeating the advice is necessarily bad, since those are common points of failure, but the value of the advice is probably fairly minimal.

comment by Arran_Stirton · 2012-04-21T23:00:47.242Z · LW(p) · GW(p)

Is there anywhere I can find a decent analysis of the effectiveness and feasibility of our current methods of cryonic preservation?

(one that doesn't originate with a cryonics institute)

Replies from: gwern, handoflixue, ciphergoth
comment by gwern · 2012-04-22T00:44:36.446Z · LW(p) · GW(p)

Well, that doesn't seem too difficult -

(one that doesn't originate with a cryonics institute)

Oh.

So, who exactly do you expect to be doing this analysis? The most competent candidates are the cryobiologists, and they are ideologically committed* to cryonics not working and have in the past demonstrated their dishonesty**.

* Literally; I understand the bylaw banning any cryonicists from the main cryobiology association is still in effect.
** E.g. by claiming on TV that cryonics couldn't work because of the 'exploding lysosomes post-death' theory, even after experiments had disproven the theory.

Replies from: MartinB, Arran_Stirton
comment by MartinB · 2012-04-22T11:22:06.723Z · LW(p) · GW(p)

Cryonicists have the same incentive to lie. Mike Darwin's current article series on Chronopause.com makes a good case that cryonics as currently practiced is broken.

Replies from: gwern, lsparrish
comment by gwern · 2012-04-22T13:42:53.337Z · LW(p) · GW(p)

I hope you appreciate the irony of bringing up Darwin's articles on the quality of cryopreservation in the context of someone demanding articles on quality by someone not associated with cryonics institutes.

Replies from: MartinB
comment by MartinB · 2012-04-22T14:39:09.625Z · LW(p) · GW(p)

No, since his articles make the case against current cryonics organisations, despite coming from a strong supporter of the idea.

comment by lsparrish · 2012-04-22T15:55:30.411Z · LW(p) · GW(p)

Do you have a specific example of a pro-cryonics lie? Because as far as I can tell, Mike is arguing for incompetence and not dishonesty or ideological bias as the culprit.

Replies from: MartinB
comment by MartinB · 2012-04-22T21:06:31.367Z · LW(p) · GW(p)

Incompetence is at least as bad as dishonesty. Not sure if it can be distinguished.

Replies from: lsparrish
comment by lsparrish · 2012-04-22T23:30:19.086Z · LW(p) · GW(p)

Incompetence is at least as bad as dishonesty. Not sure if it can be distinguished.

No! The distinction not only exists but is incredibly important to this context. Incompetence is a problem of an unqualified person doing the job. It can be fixed by many things, e.g. better on-the-job training, better education, or experience. Replacing them with a more qualified candidate is also an option, assuming you can find one.

With a dishonest person, you have a problem of values; they are likely to defect rather than behave superrationally in game-theoretic situations. The only way to deal with that is to keep them out of positions that require trust.

Dishonesty can be used to cover one's tracks when one is incompetent. (Bob Nelson was doing this.) I'm not arguing that incompetence isn't Bayesian evidence for dishonesty -- it is. However, there are plenty of other explanations for incompetence as well: cognitive bias (e.g. near/far bias), lack of relevant experience, personality not suited to the job, extreme difficulty of the job, lack of information and feedback to learn from mistakes, lack of time spent learning the job...

Of all these, why did your mental pattern-matching algorithms choose to privilege dishonesty as likely to be prevalent? Doesn't the fact that there is all this public information about their failings strike you as evidence that they are generally more interested in learning from their mistakes rather than covering their tracks?

I've even seen Max More (Alcor's current CEO) saying positive things about Chronopause, despite having been personally named and criticized in several of Darwin's articles. The culture surrounding cryonics during the few years I've been observing it actually seems to be one of skeptical reserve and indeed hunger for criticism.

Moreover, the distinction cuts both ways: Multiple cryobiologists who are highly competent in their field have repeatedly made demonstrably false statements about cryonics, and have demonstrated willingness to use political force to silence the opposition. There is no inherent contradiction in the statement that they are competent and dishonest, both capable of doing a good job and willing to refuse to do so. Morality is not the same thing as ability.

comment by Arran_Stirton · 2012-04-22T18:49:39.651Z · LW(p) · GW(p)

So, who exactly do you expect to be doing this analysis?

No idea. Particularly if all cryobiologists are so committed to discrediting cryonics that they'll ignore/distort the relevant science. I'm not sure how banning cryonicists* from the cryobiology association is a bad thing though. Personally I think organisations like the American Psychiatric Association should follow suit and ban all those with financial ties to pharmaceutical companies.

I just want to know how far cryonics needs to go in preventing information-theoretic death in order to allow people to be "brought back to life" and to what extent current cryonics can fulfil that criterion.

* This is assuming that by cryonicists you mean people who work for cryonics institutes or people who support cryonics without having an academic background in cryobiology.

Replies from: gwern
comment by gwern · 2012-04-22T18:58:39.974Z · LW(p) · GW(p)

This is assuming that by cryonicists you mean people who work for cryonics institutes or people who support cryonics without having an academic background in cryobiology.

No.

Replies from: David_Gerard, Arran_Stirton
comment by David_Gerard · 2012-04-23T18:15:37.261Z · LW(p) · GW(p)

There are cryobiologists who are cryonicists, e.g. the authors of this paper.

Replies from: gwern
comment by gwern · 2012-04-23T18:49:13.032Z · LW(p) · GW(p)

The paper does not mention cryonics, nor does the lead author's bio mention being a member of the Society for Cryobiology.

comment by Arran_Stirton · 2012-04-22T19:31:13.926Z · LW(p) · GW(p)

So the by-law bans anyone sympathetic to cryonics?

Replies from: None
comment by [deleted] · 2012-04-22T19:36:53.623Z · LW(p) · GW(p)

See this article by Mike Darwin.

Replies from: Arran_Stirton
comment by Arran_Stirton · 2012-04-22T20:17:27.128Z · LW(p) · GW(p)

Thanks!

I'm starting to suspect that my dream of finding an impartial analysis of cryonics is doomed to be forever unfulfilled...

comment by handoflixue · 2012-04-23T22:28:03.943Z · LW(p) · GW(p)

chronopause.com/index.php/2011/02/23/does-personal-identity-survive-cryopreservation/#comment-247

I've been finding ChronoPause.com to be a very insightful blog. I can't speak to the degree of bias of the author, but most of the posts I've read so far have been reasonably well cited.

I found it sort of terrifying to read the case reports he links in that comment - I read 101, 102, and 103, and it largely spoke to this being a distinctly amateur organization that is still running everything on hope and guesswork, not precise engineering/scientific principles.

Case 101 in particular sort of horrifies me for the aspects of preserving someone who committed suicide, without any consent from the individual in question. I can't help but feel that "patient must be held in dry ice for at least two weeks" is also a rather bad sign.

Feel free to read them for yourself and draw your own conclusions - these reports are straight from CI itself, so you can reasonably assume that, if anything, they have a bias towards portraying themselves favorably.

comment by Paul Crowley (ciphergoth) · 2012-06-02T13:50:23.705Z · LW(p) · GW(p)

Only one technical analysis of cryonics which concludes it won't work has ever been written: http://blog.ciphergoth.org/blog/2011/08/04/martinenaite-and-tavenier-cryonics/

Replies from: Arran_Stirton
comment by Arran_Stirton · 2012-06-06T09:43:15.705Z · LW(p) · GW(p)

Interesting, thanks!

Have you come across any analysis that establishes cryonics as something that prevents information-theoretic death?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-06-06T10:12:06.222Z · LW(p) · GW(p)

We don't know whether it does or not. The current most in-depth discussion is Scientific Justification of Cryonics Practice.

comment by David_Gerard · 2012-04-20T20:03:51.878Z · LW(p) · GW(p)

I've read the metaethics sequence twice and am still unclear on what the basic points it's trying to get across are. (I read it and get to the end and wonder where the "there" is there. What I got from it is "our morality is what we evolved, and humans are all we have therefore it is fundamentally good and therefore it deserves to control the entire future", which sounds silly when I put it like that.) Would anyone dare summarise it?

Replies from: hairyfigment, Alejandro1, army1987, XiXiDu, orthonormal, Incorrect, TheOtherDave, handoflixue
comment by hairyfigment · 2012-04-20T20:41:46.101Z · LW(p) · GW(p)

Morality is good because goals like joy and beauty are good. (For qualifications, see Appendices A through OmegaOne.) This seems like a tautology, meaning that if we figure out the definition of morality it will contain a list of "good" goals like those. We evolved to care about goodness because of events that could easily have turned out differently, in which case "we" would care about some other list. But, and here it gets tricky, our Good function says we shouldn't care about that other list. The function does not recognize evolutionary causes as reason to care. In fact, it does not contain any representation of itself. This is a feature. We want the future to contain joy, beauty, etc, not just 'whatever humans want at the time,' because an AI or similar genie could and probably would change what we want if we told it to produce the latter.

Replies from: bryjnar, David_Gerard
comment by bryjnar · 2012-04-21T13:33:25.072Z · LW(p) · GW(p)

Okay, now this definitely sounds like standard moral relativism to me. It's just got the caveat that obviously we endorse our own version of morality, and that's the ground on which we make our moral judgements. Which is known as appraiser relativism.

comment by David_Gerard · 2012-04-20T21:41:30.773Z · LW(p) · GW(p)

I must confess I do not understand what you just said at all. Specifically:

  • the second sentence: could you please expand on that?
  • I think I get that the function does not evaluate itself at all, and if you ask it just says "it's just good 'cos it is, all right?"
  • Why is this a feature? (I suspect the password is "Löb's theorem", and only almost understand why.)
  • The last bit appears to be what I meant by "therefore it deserves to control the entire future." It strikes me as insufficient reason to conclude that this can in no way be improved, ever.

Does the sequence show a map of how to build metamorality from the ground up, much as writing the friendly AI will need to work from the ground up?

Replies from: hairyfigment
comment by hairyfigment · 2012-04-20T22:12:47.181Z · LW(p) · GW(p)

the second sentence: could you please expand on that?

I'll try: any claim that a fundamental/terminal moral goal 'is good' reduces to a tautology on this view, because "good" doesn't have anything to it besides these goals. The speaker's definition of goodness makes every true claim of this kind true by definition. (Though the more practical statements involve inference. I started to say it must be all logical inference, realized EY could not possibly have said that, and confirmed that in fact he did not.)

I get that the function does not evaluate itself at all,

Though technically it may see the act of caring about goodness as good. So I have to qualify what I said before that way.

Why is this a feature?

Because if the function could look at the mechanical, causal steps it takes, and declare them perfectly reliable, it would lead to a flat self-contradiction by Löb's Theorem. The other way looks like a contradiction but isn't. (We think.)
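As a toy sketch of the point about the function not representing itself (the criteria names and weights are made up for illustration; nothing here is taken from the sequence):

```python
# A toy "Good function": it scores world-states against a fixed list of criteria.
# Criteria and weights are invented for illustration only.

CRITERIA = {"joy": 1.0, "beauty": 0.8, "freedom": 0.9}

def goodness(world_state):
    """world_state: dict mapping a criterion name -> degree to which the state realizes it."""
    return sum(weight * world_state.get(name, 0.0)
               for name, weight in CRITERIA.items())

# Note what is absent: no clause like "...and also score whatever caused us to
# adopt these criteria", and no attempt to certify its own reliability. That is
# the sense in which the function contains no representation of itself.
print(goodness({"joy": 0.7, "freedom": 0.9}))  # 0.7*1.0 + 0.9*0.9 = 1.51
```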

Replies from: David_Gerard
comment by David_Gerard · 2012-04-20T22:23:15.984Z · LW(p) · GW(p)

Thank you, this helps a lot.

Though technically it may see the act of caring about goodness as good. So I have to qualify what I said before that way.

Ooh yeah, didn't spot that one. (As someone who spent a lot of time when younger thinking about this and trying to be a good person, I certainly should have spotted this.)

comment by Alejandro1 · 2012-04-20T20:38:41.845Z · LW(p) · GW(p)

This comment by Richard Chappell explained Eliezer's metaethical views clearly and concisely. It was very highly upvoted, so apparently the collective wisdom of the community considered it accurate. It didn't receive an explicit endorsement by Eliezer, though.

Replies from: buybuydandavis, David_Gerard
comment by buybuydandavis · 2012-04-21T03:25:58.284Z · LW(p) · GW(p)

From the comment by Richard Chappell:

(namely, whatever terminal values the speaker happens to hold, on some appropriate [if somewhat mysterious] idealization).

(i) 'Right' means, roughly, 'promotes external goods X, Y and Z'
(ii) claim (i) above is true because I desire X, Y, and Z.

People really think EY is saying this? It looks to me like a basic Egoist stance, where "your values" also include your moral preferences. That is my position, but I don't think EY is on board.

"Shut up and multiply" implies a symmetry in value between different people that isn't implied by the above. Similarly, the diversion into mathematical idealization seemed like a maneuver toward Objective Morality - One Algorithm to Bind Them, One Algorithm to Rule them All. Everyone gets their own algorithm as the standard of right and wrong? Fantastic, if it were true, but that's not how I read EY.

It's strange, because Richard seems to say that EY agrees with me, while I think EY agrees with him.

Replies from: Alejandro1
comment by Alejandro1 · 2012-04-21T16:04:11.526Z · LW(p) · GW(p)

I think you are mixing up object-level ethics and metaethics here. You seem to be contrasting an Egoist position ("everyone should do what they want") with an impersonal utilitarian one ("everyone should do what is good for everyone, shutting up and multiplying"). But the dispute is about what "should", "right" and related words mean, not about what should be done.

Eliezer (in Richard's interpretation) says that when someone says "Action A is right" (or "should be done"), the meaning of this is roughly "A promotes ultimate goals XYZ". Here XYZ is in fact the outcome of a complicated computation based on the speaker's state of mind, which can be translated roughly as "the speaker's terminal values" (for example, for a sincere philanthropist XYZ might be "everyone gets joy, happiness, freedom, etc"). But the fact that XYZ are the speaker's terminal values is not part of the meaning of "right", so it is not inconsistent for someone to say "Everyone should promote XYZ, even if they don't want it" (e.g. "Babyeaters should not eat babies"). And needless to say, XYZ might include generalized utilitarian values like "everyone gets their preferences satisfied", in which case impersonal, shut-up-and-multiply utilitarianism is what is needed to make actual decisions for concrete cases.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-04-21T21:53:52.101Z · LW(p) · GW(p)

But the dispute is about what "should", "right" and related words mean, not about what should be done.

Of course it's about both. You can define labels in any way you like. In the end, your definition better be useful for communicating concepts with other people, or it's not a good definition.

Let's define "yummy". I put food in my mouth. Taste buds fire, neural impulses propagate from neuron to neuron, and eventually my mind evaluates how yummy it is. Similar events happen for you. Your taste buds fire, your neural impulses propagate, and your mind evaluates how yummy it is. Your taste buds are not mine, and your neural networks are not mine, so your response and my response are not identical. If I make a definition of "yummy" that entails that what you find yummy is not in fact yummy, I've created a definition that is useless for dealing with the reality of what you find yummy.

From my inside view of yummy, of course you're just wrong if you think root beer isn't yummy - I taste root beer, and it is yummy. But being a conceptual creature, I have more than the inside view, I have an outside view as well, of you, and him, and her, and ultimately of me too. So when I talk about yummy with other people, I recognize that their inside view is not identical to mine, and so use a definition based on the outside view, so that we can actually be talking about the same thing, instead of throwing our differing inside views at each other.

Discussion with the inside view: "Let's get root beer." "What? Root beer sucks!" "Root beer is yummy!" "Is not!" "Is too!"

Discussion with the outside view: "Let's get root beer." "What? Root beer sucks!" "You don't find root beer yummy?" "No. Blech." "OK, I'm getting a root beer." "And I pick pepsi."

If you've tied yourself up in conceptual knots, and concluded that root beer really isn't yummy for me, even though my yummy detector fires whenever I have root beer, you're just confused and not talking about reality.

But the fact that XYZ are the speaker's terminal values is not part of the meaning of "right"

This is the problem. You've divorced your definition from the relevant part of reality - the speaker's terminal values, and somehow twisted it around to where what he *should* do is at odds with his terminal values. This definition is not useful for discussing moral issues with the given speaker. He's a machine that maximizes his terminal values. If his algorithms are functioning properly, he'll disregard your definition as irrelevant to achieving his ends. Whether from the inside view of morality for that speaker, or his outside view, you're just wrong. And you're also wrong from any outside view that accurately models what terminal values people actually have.

Rational discussions of morality start with the observation that people have differing terminal values. Our terminal values are our ultimate biases. Recognizing that my biases are mine, and not identical to yours, is the first step away from the usual useless babble in moral philosophy.

comment by David_Gerard · 2012-04-20T21:43:00.622Z · LW(p) · GW(p)

Shifting the lump under the rug but not getting rid of it is how it looks to me too. But I don't understand the rest of that comment and will need to think harder about it (when I'm less sleep-deprived).

I note that that's the comment Lukeprog flagged as his favourite answer, but of course I can't tell if it got the upvotes before or after he did so.

comment by A1987dM (army1987) · 2012-04-22T23:01:08.834Z · LW(p) · GW(p)

Let me try...

Something is green if it emits or scatters much more light between 520 and 570 nm than between 400 and 520 nm or between 570 and 700 nm. That's what green means, and it also applies to places where there are no humans: it still makes sense to ask whether the skin of tyrannosaurs was green even though there were no humans back then. On the other hand, the reason why we find the concept of ‘something which emits or scatters much more light between 520 and 570 nm than between 400 and 520 nm or between 570 and 700 nm’ important enough to have a word (green) for it is that for evolutionary reasons we have cone cells which work in those ranges; if we saw in the ultraviolet, we might have a word, say breen, for ‘something which emits or scatters much more light between 260 and 285 nm than between 200 and 260 nm or between 285 and 350 nm’. This doesn't mean that greenness is relative, though.

Likewise, something is good if it leads to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc. That's what good means, and it also applies to places where there are no humans: it still makes sense to ask whether it's good for Babyeaters to eat their children even though there are no humans on that planet. On the other hand, the reason why we find the concept of ‘something which leads to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc.’ important enough to have a word (good) for it is that for evolutionary reasons we value such kind of things; if we valued heaps composed by prime numbers of pebbles, we might have a word, say pood, for ‘something which leads to lots of heaps with a prime number of pebbles in each’. This doesn't mean that goodness is relative, though.
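As a minimal sketch of the observer-independence point in the analogy above: a wavelength-based predicate can be evaluated with no reference to human eyes at all. (The function name and the crude band-summing are illustrative assumptions, not anything from the comment.)

```python
# A toy, observer-free version of the wavelength-based definition of "green"
# given above. The function name and the crude band-summing are illustrative.

def is_green(spectrum):
    """spectrum: dict mapping wavelength in nm -> relative emitted/scattered power."""
    def band(lo, hi):
        return sum(power for wl, power in spectrum.items() if lo <= wl < hi)
    # More power in 520-570 nm than in 400-520 nm or 570-700 nm.
    return band(520, 570) > band(400, 520) and band(520, 570) > band(570, 700)

# A surface reflecting mostly around 550 nm counts as green whether or not any
# human (or tyrannosaur) is around to look at it.
print(is_green({450: 0.1, 550: 0.9, 650: 0.2}))  # True
```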

comment by XiXiDu · 2012-04-21T09:25:09.486Z · LW(p) · GW(p)

I have recently read this post and thought it describes very well how I always thought about morality, even though it talks about 'sexiness'.

Would reading the metaethics sequence explain to me that it would be wrong to view morality in a similar fashion as sexiness?

Replies from: Grognor
comment by Grognor · 2012-04-21T09:34:08.864Z · LW(p) · GW(p)

Yes.

comment by orthonormal · 2012-04-21T02:37:14.718Z · LW(p) · GW(p)

One part of it that did turn out well, in my opinion, is Probability is Objectively Subjective and related posts. Eliezer's metaethical theory is, unless I'm mistaken, an effort to do for naive moral intuitions what Bayesianism should do for naive probabilistic intuitions.

comment by Incorrect · 2012-04-20T20:55:56.076Z · LW(p) · GW(p)

I think it's just Meta-ethical moral relativism.

Replies from: None, Manfred
comment by [deleted] · 2012-04-21T04:01:15.732Z · LW(p) · GW(p)

"I am not a moral relativist." http://lesswrong.com/lw/t9/no_license_to_be_human/

"I am not a meta-ethical relativist" http://lesswrong.com/lw/t3/the_bedrock_of_morality_arbitrary/mj4

"what is right is a huge computational property—an abstract computation—not tied to the state of anyone's brain, including your own brain." http://lesswrong.com/lw/sm/the_meaning_of_right/

Replies from: Jadagul, buybuydandavis
comment by Jadagul · 2012-04-21T09:43:39.147Z · LW(p) · GW(p)

I'm pretty sure Eliezer is actually wrong about whether he's a meta-ethical relativist, mainly because he's using words in a slightly different way from the way they use them. Or rather, he thinks that MER is using one specific word in a way that isn't really kosher. (A statement which I think he's basically correct about, but it's a purely semantic quibble and so a stupid thing to argue about.)

Basically, Eliezer is arguing that when he says something is "good" that's a factual claim with factual content. And he's right; he means something specific-although-hard-to-compute by that sentence. And similarly, when I say something is "good" that's another factual claim with factual content, whose truth is at least in theory computable.

But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meanings. Meta-Ethical Relativism, at least as summarized by wikipedia, describes this fact with the sentence "terms such as "good," "bad," "right" and "wrong" do not stand subject to universal truth conditions at all." Eliezer doesn't like that because in each speaker's language, terms like "good" stand subject to universal truth conditions. But each speaker speaks a slightly different language in which the word represented by the string "good" is subject to a slightly different set of universal truth conditions.

For an analogy: I apparently consistently define "blonde" differently from almost everyone I know. But it has an actual definition. When I call someone "blonde" I know what I mean, and people who know me well know what I mean. But it's a different thing from what almost everyone else means when they say "blonde." (I don't know why I can't fix this; I think my color perception is kinda screwed up). An MER guy would say that whether someone is "blonde" isn't objectively true or false because what it means varies from speaker to speaker. Eliezer would say that "blonde" has a meaning in my language and a different meaning in my friends' language, but in either language whether a person is "blonde" is in fact an objective fact.

And, you know, he's right. But we're not very good at discussing phenomena where two different people speak the same language except one or two words have different meanings; it's actually a thing that's hard to talk about. So in practice, "'good' doesn't have an objective definition" conveys my meaning more accurately to the average listener than "'good' has one objective meaning in my language and a different objective meaning in your language."

Replies from: None
comment by [deleted] · 2012-04-21T15:46:24.757Z · LW(p) · GW(p)

But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meaning

In http://lesswrong.com/lw/t0/abstracted_idealized_dynamics/mgr, user steven wrote "When X (an agent) judges that Y (another agent) should Z (take some action, make some decision), X is judging that Z is the solution to the problem W (perhaps increasing a world's measure under some optimization criterion), where W is a rigid designator for the problem structure implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments. (Or at least X is asserting that it's shared.) Due to the nature of W, becoming informed will cause X and Y to get closer to the solution of W, but wanting-it-when-informed is not what makes that solution moral." with which Eliezer agreed.

This means that, even though people might presently have different things in mind when they say something is "good", Eliezer does not regard their/our/his present ideas as either the meaning of their-form-of-good or his-form-of-good. The meaning of good is not "the things someone/anyone personally, presently finds morally compelling", but something like "the fixed facts that are found but not defined by clarifying the result of applying the shared human evaluative cognitive machinery to a wide variety of situations under reflectively ideal conditions of information." That is to say, Eliezer thinks, not only that moral questions are well defined, "objective", in a realist or cognitivist way, but that our present explicit-moralities all have a single, fixed, external referent which is constructively revealed via the moral computations that weigh our many criteria.

I haven't finished reading CEV, but here's a quote from Levels of Organization that seems relevant: "The target matter of Artificial Intelligence is not the surface variation that makes one human slightly smarter than another human, but rather the vast store of complexity that separates a human from an amoeba". Similarly, the target matter of inferences that figure out the content of morality is not the surface variation of moral intuitions and beliefs under partial information which result in moral disagreements, but the vast store of neural complexity that allows humans to disagree at all, rather than merely be asking different questions.

So the meaning of presently-acted-upon-and-explicitly-stated-rightness in your language, and the meaning of it in my language might be different, but one of the many points of the meta-ethics sequence is that the expanded-enlightened-mature-unfolding of those present usages gives us a single, shared, expanded-meaning in both our languages.

If you still think that moral relativism is a good way to convey that in daily language, fine. It seems the most charitable way in which he could be interpreted as a relativist is if "good" is always in quotes, to denote the present meaning a person attaches to the word. He is a "moral" relativist, and a moral realist/cognitivist/constructivist.

Replies from: Jadagul
comment by Jadagul · 2012-04-21T19:27:48.710Z · LW(p) · GW(p)

Hm, that sounds plausible, especially your last paragraph. I think my problem is that I don't see any reason to suspect that the expanded-enlightened-mature-unfolding of our present usages will converge in the way Eliezer wants to use as a definition. See for instance the "repugnant conclusion" debate; people like Peter Singer and Robin Hanson think the repugnant conclusion actually sounds pretty awesome, while Derek Parfit thinks it's basically a reductio on aggregate utilitarianism as a philosophy and I'm pretty sure Eliezer agrees with him, and has more or less explicitly identified it as a failure mode of AI development. I doubt these are beliefs that really converge with more information and reflection.

Or in steven's formulation, I suspect that relatively few agents actually have Ws in common; his definition presupposes that there's a problem structure "implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments". I'm arguing that many agents have sufficiently different implicit problem structures that, for instance, by that definition Eliezer and Robin Hanson can't really make "should" statements to each other.

Replies from: None
comment by [deleted] · 2012-04-22T01:23:32.452Z · LW(p) · GW(p)

Just getting citations out of the way, Eliezer talked about the repugnant conclusion here and here. He argues for shared W in Psychological Unity and Moral Disagreement. Kaj Sotala wrote a notable reply to Psychological Unity, Psychological Diversity. Finally Coherent Extrapolated Volition is all about finding a way to unfold present-explicit-moralities into that shared-should that he believes in, so I'd expect to see some arguments there.

Now, doesn't the state of the world today suggest that human explicit-moralities are close enough that we can live together in a Hubble volume without too many wars, without a thousand broken coalitions of support over sides of irreconcilable differences, without blowing ourselves up because the universe would be better with no life than with the evil monsters in that tribe on the other side of the river?

Human concepts are similar enough that we can talk to each other. Human aesthetics are similar enough that there's a billion dollar video game industry. Human emotions are similar enough that Macbeth is still being produced three hundred years later on the other side of the globe. We have the same anatomical and functional regions in our brains. Parents everywhere use baby talk. On all six populated continents there are countries in which more than half of the population identifies with the Christian religions.

For all those similarities, is humanity really going to be split over the Repugnant Conclusion? Even if the Repugnant Conclusion is more of a challenge than muscling past a few inductive biases (scope insensitivity and the attribute substitution heuristic are also universal), I think we have some decent prospect for a future in which you don't have to kill me. Whatever will help us to get to that future, that's what I'm looking for when I say "right". No matter how small our shared values are once we've felt the weight of relevant moral arguments, that's what we need to find.

Replies from: Jadagul
comment by Jadagul · 2012-04-22T06:19:17.821Z · LW(p) · GW(p)

This comment may be a little scattered; I apologize. (In particular, much of this discussion is beside the point of my original claim that Eliezer really is a meta-ethical relativist, about which see my last paragraph).

I certainly don't think we have to escalate to violence. But I do think there are subjects on which we might never come to agreement even given arbitrary time and self-improvement and processing power. Some of these are minor judgments; some are more important. But they're very real.

In a number of places Eliezer commented that he's not too worried about, say, two systems morality_1 and morality_2 that differ in the third decimal place. I think it's actually really interesting when they differ in the third decimal place; it's probably not important to the project of designing an AI but I don't find that project terribly interesting so that doesn't bother me.

But I'm also more willing to say to someone, "We have nothing to argue about [on this subject], we are only different optimization processes." With most of my friends I really do have to say this, as far as I can tell, on at least one subject.

However, I really truly don't think this is as all-or-nothing as you or Eliezer seem to paint it. First, because while morality may be a compact algorithm relative to its output, it can still be pretty big, and disagreeing seriously about one component doesn't mean you don't agree about the other several hundred. (A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy; and as far as I can tell this is more or less irreducible in the specification for all of us). But I can still talk to these people and have rewarding conversations on other subjects.

Second, because I realize there are other means of persuasion than argument. You can't argue someone into changing their terminal values, but you can often persuade them to do so through literature and emotional appeal, largely due to psychological unity. I claim that this is one of the important roles that story-telling plays: it focuses and unifies our moralities through more-or-less arational means. But this isn't an argument per se, and there's no particular reason to expect it to converge to a particular outcome--among other things, the result is highly contingent on what talented artists happen to believe. (See Rorty's Contingency, Irony, and Solidarity for discussion of this).

Humans have a lot of psychological similarity. They also have some very interesting and deep psychological variation (see e.g. Haidt's work on the five moral systems). And it's actually useful to a lot of societies to have variation in moral systems--it's really useful to have some altruistic punishers, but not really for everyone to be an altruistic punisher.

But really, this is beside the point of the original question, whether Eliezer is really a meta-ethical relativist, because the limit of this sequence which he claims converges isn't what anyone else is talking about when they say "morality". Because generally, "morality" is defined more or less to be a consideration that would/should be compelling to all sufficiently complex optimization processes. Eliezer clearly doesn't believe any such thing exists. And he's right.

Replies from: None
comment by [deleted] · 2012-04-22T22:14:42.971Z · LW(p) · GW(p)

"We have nothing to argue about [on this subject], we are only different optimization processes."

Calling something a terminal value is the default behavior when humans look for a justification and don't find anything. This happens because we perceive little of our own mental processes and in the absence of that information we form post-hoc rationalizations. In short, we know very little about our own values. But that lack of retrieved / constructed justification doesn't mean it's impossible to unpack moral intuitions into algorithms so that we can more fully debate which factors we recognize and find relevant.

A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy

Your friends can understand why humans have positive personality descriptors for people who don't get angry in various situations: descriptors like reflective, charming, polite, solemn, respecting, humble, tranquil, agreeable, open-minded, approachable, cooperative, curious, hospitable, sensitive, sympathetic, trusting, merciful, gracious.

You can understand why we have positive personality descriptors for people who get angry in various situations: descriptors like impartial, loyal, decent, passionate, courageous, boldness, leadership, strength, resilience, candor, vigilance, independence, reputation, and dignity.

Both you and your friends can see how either group could pattern match their behavioral bias as being friendly, supportive, mature, disciplined, or prudent.

These are not deep variations, they are relative strengths of reliance on the exact same intuitions.

You can't argue someone into changing their terminal values, but you can often persuade them to do so through literature and emotional appeal, largely due to psychological unity. I claim that this is one of the important roles that story-telling plays: it focuses and unifies our moralities through more-or-less arational means. But this isn't an argument per se and has no particular reason one would expect it to converge to a particular outcome--among other things, the result is highly contingent on what talented artists happen to believe.

Stories strengthen our associations of different emotions in response to analogous situations, which doesn't have much of a converging effect (Edit: unless, you know, it's something like the bible that a billion people read. That certainly pushes humanity in some direction), but they can also create associations to moral evaluative machinery that previously wasn't doing its job. There's nothing arational about this: neurons firing in the inferior frontal gyrus are evidence relevant to a certain useful categorizing inference, "things which are sentient".

Because generally, "morality" is defined more or less to be a consideration that would/should be compelling to all sufficiently complex optimization processes

I'm not in a mood to argue definitions, but "optimization process" is a very new concept, so I'd lean toward "less".

Replies from: Jadagul
comment by Jadagul · 2012-04-22T23:22:26.886Z · LW(p) · GW(p)

You're...very certain of what I understand. And of the implications of that understanding.

More generally, you're correct that people don't have a lot of direct access to their moral intuitions. But I don't actually see any evidence for the proposition they should converge sufficiently other than a lot of handwaving about the fundamental psychological similarity of humankind, which is more-or-less true but probably not true enough. In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these all are attributable to computational error.

I'm not disputing that we share a lot of mental circuitry, or that we can basically understand each other. But we can understand without agreeing, and be similar without being the same.

As for the last bit--I don't want to argue definitions either. It's a stupid pastime. But to the extent Eliezer claims not to be a meta-ethical relativist he's doing it purely through a definitional argument.

Replies from: endoself, hairyfigment
comment by endoself · 2012-04-23T03:25:10.456Z · LW(p) · GW(p)

He does intend to convey something real and nontrivial (well, some people might find it trivial, but enough people don't that it is important to be explicit) by saying that he is not a meta-ethical relativist. The basic idea is that, while his brain is the causal reason for him wanting to do certain things, it is not referenced in the abstract computation that defines what is right. To use a metaphor from the meta-ethics sequence, it is a fact about a calculator that it is computing 1234 * 5678, but the fact that 1234 * 5678 = 7 006 652 is not a fact about that calculator.

This distinguishes him from some types of relativism, which I would guess to be the most common types. I am unsure whether people understand that he is trying to draw this distinction and still think that it is misleading to say that he is not a moral relativist or whether people are confused/have a different explanation for why he does not identify as a relativist.

comment by hairyfigment · 2012-04-24T17:35:21.329Z · LW(p) · GW(p)

In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these all are attributable to computational error.

Do you know anyone who never makes computational errors? If 'mistakes' happen at all, we would expect to see them in cases involving tribal loyalties. See von Neumann and those who trusted him on hidden variables.

Replies from: Jadagul
comment by Jadagul · 2012-04-25T02:37:40.619Z · LW(p) · GW(p)

The claim wasn't that it happens too often to attribute to computation error, but that the types of differences seem unlikely to stem from computational errors.

comment by buybuydandavis · 2012-04-22T21:36:02.414Z · LW(p) · GW(p)

The problem is, EY may just be contradicting himself, or he may be being ambiguous, and even deliberately so.

"what is right is a huge computational property—an abstract computation—not tied to the state of anyone's brain, including your own brain."

I think his views could be clarified in a moment if he stated clearly whether this abstract computation is identical for everyone. Is it AC_219387209 for all of us, or AC_42398732 for you, and AC_23479843 for me, with the proviso that it might be the case that AC_42398732 = AC_23479843?

Your quote makes it appear the former. Other quotes in this thread about a "shared W" point to that as well.

Then again, quotes in the same article make it appear the latter, as in:

If you hoped that morality would be universalizable—sorry, that one I really can't give back. Well, unless we're just talking about humans. Between neurologically intact humans, there is indeed much cause to hope for overlap and coherence;

We're all busy playing EY Exegesis. Doesn't that strike anyone else as peculiar? He's not dead. He's on the list. And he knows enough about communication and conceptualization to have been clear in the first place. And yet on such a basic point, what he writes seems to go round and round and we're not clear what the answer is. And this, after years of opportunity for clarification.

It brings to mind Quirrell:

“But if your question is why I told them that, Mr. Potter, the answer is that you will find ambiguity a great ally on your road to power. Give a sign of Slytherin on one day, and contradict it with a sign of Gryffindor the next; and the Slytherins will be enabled to believe what they wish, while the Gryffindors argue themselves into supporting you as well. So long as there is uncertainty, people can believe whatever seems to be to their own advantage. And so long as you appear strong, so long as you appear to be winning, their instincts will tell them that their advantage lies with you. Walk always in the shadow, and light and darkness both will follow.”

If you're trying to convince people of your morality, and they have already picked teams, there is an advantage in letting it appear to each that they haven't really changed sides.

comment by Manfred · 2012-04-20T22:20:24.890Z · LW(p) · GW(p)

Ah, neat, you found exactly what it is. Although the LW version is a bit stronger, since it involves thoughts like "the cause of me thinking some things are moral does not come from interacting with some mysterious substance of moralness."

Replies from: David_Gerard
comment by David_Gerard · 2012-04-20T22:25:44.006Z · LW(p) · GW(p)

That's it? That's the whole takeaway?

I mean, I can accept "the answer is there is no answer" (just as there is no point to existence of itself, we're just here and have to work out what to do for ourselves). It just seems rather a lot of text to get that across.

Replies from: Manfred
comment by Manfred · 2012-04-21T00:53:52.721Z · LW(p) · GW(p)

Well, just because there is no moral argument that will convince any possible intelligence doesn't mean there's nothing left to explore. For example, you might apply the "what words mean" posts to explore what people mean when they say "do the right thing," and how to program that into an AI :P

comment by TheOtherDave · 2012-04-20T20:55:04.717Z · LW(p) · GW(p)

My summary is pretty close to yours.

I would summarize it as:

  • All questions about the morality of actions can be restated as questions about the moral value of the states of the world that those actions give rise to.
  • All questions about the moral value of the states of the world can in principle be answered by evaluating those world-states in terms of the various things we've evolved to value, although actually performing that evaluation is difficult.
  • Questions about whether the moral value of states of the world should be evaluated in terms of the things we've evolved to value, as opposed to evaluated in terms of something else, can be answered by pointing out that the set of things we've evolved to value is what right means and is therefore definitionally the right set of things to use.

I consider that third point kind of silly, incidentally.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-20T21:40:10.946Z · LW(p) · GW(p)

Yeah, that's the bit that looks like begging the question. The sequence seems to me to fail to build its results from atoms.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-20T23:53:46.432Z · LW(p) · GW(p)

Well, it works OK if you give up on the idea that "right" has some other meaning, which he spent rather a long time in that sequence trying to convince people to give up on. So perhaps that's the piece that failed to work.

I mean, once you get rid of that idea, then saying that "right" means the values we all happen to have (positing that there actually is some set of values X such that we all have X) is rather a lot like saying a meter is the distance light travels in 1 ⁄ 299,792,458 of a second... it's arbitrary, sure, but it's not unreasonable.

Personally, I would approach it from the other direction. "Maybe X is right, maybe it isn't, maybe both, maybe neither. What does it matter? How would you ever tell? What is added to the discussion by talking about it? X is what we value; it would be absurd to optimize for anything else. We evaluate in terms of what we care about because we care about it; to talk about it being "right" or "not right," insofar as those words don't mean "what we value" and "what we don't value", adds nothing to the discussion."

But saying that requires me to embrace a certain kind of pragmatism that is, er, socially problematic to be seen embracing.

comment by handoflixue · 2012-04-23T21:52:58.273Z · LW(p) · GW(p)

Would anyone dare summarise it?

Morality is a sense, similar to taste or vision. If I eat a food, I can react by going 'yummy' or 'blech'. If I observe an action, I can react by going 'good' or 'evil'.

Just like your other senses, it's not 100% reliable. Kids eventually learn that while candy is 'yummy', eating nothing but candy is 'blech' - your first-order sensory data is being corrected by a higher-order understanding (whether this be "eating candy is nutritionally bad" or "I get a stomach ache on days I just eat candy").

The above paragraph ties in with the idea of "The lens that sees its flaws". We can't build a model of "right and wrong" from scratch any more than we could build a sense of yumminess from scratch; you have to work with the actual sensory input you have. To return to the food analogy, a diet consisting of ostensibly ideal food, but which lacks 'yumminess', will fail because almost no one can actually keep to it. Equally, our morality has to be based in our actual gut reaction of 'goodness' - you can't just define a mathematical model and expect people to follow it.

Finally, and most important to the idea of "CEV", is the idea that, just as science leads us to a greater understanding of nutrition and what actually works for us, we can also work towards a scientific understanding of morality. As an example, while 'revenge' is a very emotionally-satisfying tactic, it's not always an effective tactic; just like candy, it's something that needs to be understood and used in moderation.

Part of growing up as a kid is learning to eat right. Part of growing up as a society is learning to moralize correctly :)

Replies from: Incorrect
comment by Incorrect · 2012-04-23T22:05:22.180Z · LW(p) · GW(p)

Having flawed vision means that you might, for example, fail to see an object. What does having flawed morality cause you to be incorrect about?

Replies from: NancyLebovitz, handoflixue
comment by NancyLebovitz · 2012-04-29T17:10:56.413Z · LW(p) · GW(p)

From Bury the Chains, the idea that slavery was wrong hit England as a surprise. Quakers and Evangelicals were opposed to slavery, but the general public went from oblivious to involved very quickly.

comment by handoflixue · 2012-04-23T22:58:54.135Z · LW(p) · GW(p)

It can mean you value short-term reactions instead of long-term consequences. A better analogy would be flavor: candy tastes delicious, but its long-term consequences are undesirable. In this case, a flawed morality leads you to conclude that because something registers as 'righteous' (say, slaying all the unbelievers), you should go ahead and do it, without realizing the consequences ("because this made everyone hate us, we have even less ability to slay/convert future infidels")

On another level, one can also realize that values conflict ("I really like the taste of soda, but it makes my stomach upset!") -> ("I really like killing heretics, but isn't murder technically a sin?")

Edit: There are obviously numerous other flaws that can occur (you might not notice that something is "evil" until you've done it and are feeling remorse, to try and more tightly parallel your example). This isn't meant to be comprehensive :)

comment by shminux · 2012-04-20T23:50:05.714Z · LW(p) · GW(p)

I am wondering if risk analysis and mitigation is a separate "rationality" skill. I am not talking about some esoteric existential risk, just your basic garden-variety everyday stuff. While there are several related items here (ugh fields, halo effect), I do not recall EY or anyone else addressing the issue head-on, so feel free to point me to the right discussion.

Replies from: XiXiDu
comment by XiXiDu · 2012-04-21T09:54:48.881Z · LW(p) · GW(p)

I am wondering if risk analysis and mitigation is a separate "rationality" skill.

Two related points that I think are very important and not dealt with:

  • Exploration vs. exploitation (or when to stop doing research).
  • Judgment under uncertainty (or how to deal with unpredictable long term consequences).
comment by komponisto · 2012-04-21T15:49:14.308Z · LW(p) · GW(p)

A couple of embarrassingly basic physics questions, inspired by recent discussions here:

  • On occasion people will speak of some object "exiting one's future light cone". How is it possible to escape a light cone without traveling in a spacelike direction?

  • Does any interpretation of quantum mechanics offer a satisfactory derivation of the Born rule? If so, why are interpretations that don't still considered candidates? If not, why do people speak as if the lack of such a derivation were a point against MWI?

Replies from: Alejandro1, vi21maobk9vp, MinibearRex, wedrifid, BlazeOrangeDeer
comment by Alejandro1 · 2012-04-21T17:25:42.257Z · LW(p) · GW(p)

Suppose (just to fix ideas) that you are at rest, in some coordinate system. Call FLC(t) your future light cone from your space-time position at time t.

An object that is with you at t=0 cannot exit FLC(0), no matter how it moves from there on. But it can accelerate in such a way that its trajectory lies entirely outside FLC(T) for some T>0. Then it makes sense to say that the object has exited your future light cone: nothing you do after time T can affect it.
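To make that concrete, here is a standard worked example (my sketch, not part of the parent comment): a ship that starts next to you and undergoes constant proper acceleration \(a\) follows

\[
x_{\text{ship}}(t) \;=\; \frac{c^2}{a}\left(\sqrt{1+\Big(\frac{at}{c}\Big)^2}\;-\;1\right),
\]

which hugs the asymptote \(x = ct - c^2/a\). A light signal you emit at time \(T\) travels \(x_{\text{light}}(t) = c\,(t-T)\), and the two paths intersect only when \(T < c/a\); a signal sent at or after \(T = c/a\) never catches the ship, so from that moment on the ship is outside FLC(T). For an acceleration of one gee, \(c/a\) comes to roughly 0.97 years.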

comment by vi21maobk9vp · 2012-04-21T16:49:06.928Z · LW(p) · GW(p)

Well, every object is separated from you by a spacelike interval. If some distant object starts accelerating quickly enough, it may become forever inaccessible.

Also, an object distant enough (on scales far larger than a galaxy supercluster) can have a Hubble recession speed greater than c relative to us.
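A back-of-the-envelope sketch of the distance that requires (my own illustration; the round value of the Hubble constant is an assumption):

```python
# Distance beyond which the Hubble recession speed v = H0 * d exceeds c.
H0 = 70.0                  # assumed Hubble constant, km/s per megaparsec
c = 299792.458             # speed of light, km/s
d_mpc = c / H0             # ~4280 Mpc: the "Hubble radius"
print(d_mpc)
print(d_mpc * 3.2616e6)    # ~1.4e10 light-years, i.e. roughly 14 billion ly
```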

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-04-23T11:23:33.749Z · LW(p) · GW(p)

Also, an object distant enough (on scales far larger than a galaxy supercluster) can have a Hubble recession speed greater than c relative to us.

Are you sure about this? I don't understand relativity much, but I would suspect this to be another case of "by adding speeds classically, it would be greater than c, but by applying proper relativistic calculation it turns out to be always less than c".

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-04-23T18:35:04.079Z · LW(p) · GW(p)

It looks like it is even weirder.

The proper relativistic velocity arithmetic you mention belongs to special relativity, i.e. the local flat-space case. The Hubble recession speed is about the global ongoing distortion of space, i.e. it is strictly a general-relativity notion. As far as I know, it is actually measured from the momentum change (redshift) of photons, but it can be theoretically defined using the time needed for a lightspeed round trip.

When this relative speed is small, everything is fine; if I understand correctly, if the Hubble constant stays constant in the long term and there are large enough distances in the universe, the time a ray of light needs to cross a distance diverges as the distance approaches some threshold, and distances beyond that threshold can never be crossed at all.

In the inflationary model of the early universe, there is a strange phase in which distances grow faster than light could cover them - this is possible because it is not motion of matter through space, but a change in the structure of space itself. http://en.wikipedia.org/wiki/Inflationary_model
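A toy numerical sketch of that threshold, under the assumption of a constant Hubble parameter (pure exponential expansion); the units are arbitrary, chosen so that c = H = 1 and the threshold sits at comoving distance 1:

```python
import math

H, c = 1.0, 1.0   # assumed units: Hubble constant and speed of light both set to 1

def crossing_time(D, dt=1e-4, t_max=60.0):
    """Cosmic time for light to cover comoving distance D when a(t) = exp(H*t)."""
    covered, t = 0.0, 0.0
    while covered < D:
        if t >= t_max:
            return float('inf')                 # effectively never
        covered += c * math.exp(-H * t) * dt    # light's comoving speed is c / a(t)
        t += dt
    return t

for D in (0.5, 0.9, 0.99, 0.999, 1.001):
    print(D, round(crossing_time(D), 3))
# Times grow without bound as D approaches c/H = 1, and D > 1 is never covered;
# the analytic answer is t = -ln(1 - D*H/c) / H for D < c/H.
```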

comment by MinibearRex · 2012-04-26T21:27:18.047Z · LW(p) · GW(p)

If not, why do people speak as if the lack of such a derivation were a point against MWI?

The primary argument in favor of MWI is that it doesn't require you to postulate additional natural laws other than the basic ones we know for quantum evolution. This argument can pretty easily be criticized on the grounds that yes, MWI does require you to know an additional fact about the universe (the Born rule) before you can actually generate correct predictions.
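For reference (my addition, not part of the parent comment), the Born rule in question is the postulate that for a state written in the measurement basis as

\[
|\psi\rangle = \sum_i c_i\,|i\rangle, \qquad P(\text{outcome } i) = |c_i|^2 .
\]

The Schrödinger equation by itself gives you the branching structure; this extra rule (or something that reproduces it) is what tells you how probable each branch is.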

comment by wedrifid · 2012-04-21T23:16:37.586Z · LW(p) · GW(p)

On occasion people will speak of some object "exiting one's future light cone". How is it possible to escape a light cone without traveling in a spacelike direction?

Usually people do include traveling in a spacelike direction as a component of the 'exiting'. But the alternative is for the objects ('you' and 'the other thing') to be at rest relative to each other but a long distance apart - while living in a universe that does stuff like this.

ie. Imagine ants on the outside of a balloon that is getting blown up at an accelerating rate.

comment by BlazeOrangeDeer · 2012-04-24T04:48:47.669Z · LW(p) · GW(p)

Nobody has derived the Born rule, though I think some have managed to argue that it is the only rule that makes sense? (I'm not sure how successful they were). I think people may count it against mwi because of either simple double standards or because it's more obvious as an assumption since it's the only one MWI needs to make. (In other theories the rule may be hidden in with the other stuff like collapse, so it doesn't seem like a single assumption but a natural part of the theory. Since MWI is so lean, the assumed rule may be more noticeable, especially to people who are seeing it from the other side of the fence.)

comment by MileyCyrus · 2012-04-21T06:49:40.176Z · LW(p) · GW(p)

What does FOOM stand for?

Replies from: Grognor, Thomas, tut, David_Gerard
comment by Grognor · 2012-04-21T07:42:12.790Z · LW(p) · GW(p)

It's not an acronym. It's an onomatopoeia for what happens when an AI self-recursively improves and becomes unimaginably powerful.

(A regular explosion goes BOOM; an intelligence explosion goes FOOM.)

I added it to the jargon page.

comment by Thomas · 2012-04-21T07:13:50.171Z · LW(p) · GW(p)

It's the sound of an AI passing by, I guess.

comment by tut · 2012-04-21T07:38:26.339Z · LW(p) · GW(p)

Very rapid increase/acceleration. Originally it's the sound you hear if you pour gasoline on the ground and set fire to it.

comment by David_Gerard · 2012-04-21T10:30:45.799Z · LW(p) · GW(p)

Friends Of Ol' Marvel. It means that a self-improved unfriendly AI could turn into Galactus and eat the planet.

Replies from: thomblake
comment by thomblake · 2012-04-23T23:13:35.481Z · LW(p) · GW(p)

Jokes are not helpful answers to self-described "stupid questions" - you can safely assume the asker will likely miss that it's a joke.

comment by TimS · 2012-04-20T20:08:57.282Z · LW(p) · GW(p)

What do people mean here when they say "acausal"?

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?

Replies from: Randaly, pragmatist, orthonormal, thomblake, HonoreDB, ciphergoth, thomblake, Incorrect, DanielLC, Incorrect, Will_Newsome
comment by Randaly · 2012-04-20T21:22:35.084Z · LW(p) · GW(p)

"Acausal" is used as a contrast to Causal Decision Theory (CDT). CDT states that decisions should be evaluated with respect to their causal consequences; ie if there's no way for a decision to have a causal impact on something, then it is ignored. (More precisely, in terms of Pearl's Causality, CDT is equivalent to having your decision conduct a counterfactual surgery on a Directed Acyclic Graph that represents the world, with the directions representing causality, then updating nodes affected by the decision.) However, there is a class of decisions for which your decision literally does have an acausal impact. The classic example is Newcomb's Problem, in which another agent uses a simulation of your decision to decide whether or not to put money in a box; however, the simulation took place before your actual decision, and so the money is already in the box or not by the time you're making your decision.

"Acausal" refers to anything falling in this category of decisions that have impacts that do not result causally from your decisions or actions. One example is, as above, Newcomb's Problem; other examples include:

  • Acausal romance: romances where interaction is impossible
  • The Prisoner's Dilemma, or any other symmetrical game, when played against the same algorithm you are running. You know that the other player will make the same choice as you, but your choice has no causal impact on their choice.

There are a number of acausal decision theories: Evidential Decision Theory (EDT), Updateless Decision Theory (UDT), Timeless Decision Theory (TDT), and Ambient Decision Theory (ADT).

In EDT, which originates in academia, causality is completely ignored, and only correlations are used. This leads to the correct answer on Newcomb's Problem, but fails on others, for example the Smoking Lesion. UDT is essentially EDT, but with an agent that has access to its own code. (There's a video and transcript explaining this in more detail here).

TDT, like CDT, relies on causality instead of correlation; however, instead of having agents choose a decision that is implemented, it has agents first choose a platonic computation that is instantiated in, among other things, the actual decision maker; however, it is also instantiated in every other algorithm that is equal, acausally, to the decision maker's algorithm, including simulations, other agents, etc. And, given all of these instantiations, the agent then chooses the utility-maximizing algorithm.

ADT...I don't really know, although the wiki says that it is "variant of updateless decision theory that uses first order logic instead of mathematical intuition module (MIM), emphasizing the way an agent can control which mathematical structure a fixed definition defines, an aspect of UDT separate from its own emphasis on not making the mistake of updating away things one can still acausally control."
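As a concrete (and heavily simplified) illustration of the Newcomb setup described above, here is a toy sketch; the payoff amounts and the perfectly accurate predictor are my own assumptions:

```python
# Toy Newcomb's problem: a predictor simulates your decision procedure before you
# choose, and fills the opaque box only if it predicts that you will one-box.

def payoff(actual_choice, predicted_choice):
    opaque = 1_000_000 if predicted_choice == "one-box" else 0
    transparent = 1_000
    return opaque if actual_choice == "one-box" else opaque + transparent

def predictor(decision_procedure):
    return decision_procedure()   # a perfect simulation of your decision procedure

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

for agent in (one_boxer, two_boxer):
    prediction = predictor(agent)   # happens "before" the real choice
    choice = agent()                # the real choice
    print(agent.__name__, payoff(choice, prediction))
# one_boxer -> 1000000, two_boxer -> 1000.  CDT reasons "the boxes are already
# filled, so taking both dominates" and two-boxes; theories that take the
# simulation into account (EDT/TDT/UDT) end up one-boxing.
```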

comment by pragmatist · 2012-04-21T00:01:45.333Z · LW(p) · GW(p)

One must distinguish different varieties of MWI. There is an old version of the interpretation (which, I think, is basically what most informed laypeople think of when they hear "MWI") according to which worlds cannot interact. This is because "world-splitting" is a postulate that is added to the Schrodinger dynamics. Whenever a quantum measurement occurs, the entire universe (the ordinary 3+1-dimensional universe we are all familiar with) duplicates (except that the two versions have different outcomes for the measurement). It's basically as mysterious a process as collapse, perhaps even more mysterious.

This is different from the MWI most contemporary proponents accept. This MWI (also called "Everettianism" or "The Theory of the Universal Wavefunction" or...) does not actually have full-fledged separate universes. The fundamental ontology is just a single wavefunction. When macroscopic branches of the wavefunction are sufficiently separate in configuration space, one can loosely describe it as world-splitting. But there is nothing preventing these branches from interfering in principle, just as microscopic branches interfere in the two-slit experiment. There is no magical threshold of branch size/separation where the Schrodinger equation no longer permits interference. And in MWI understood this way, the Schrodinger equation is all the dynamics there are. So yeah, MWI does allow for the interaction of "worlds" in principle. The reason we never see this happening at a macroscopic scale is usually explained by appeal to special initial conditions (just like the thermodynamic arrow of time).

ETA: And in some sense, all the separate worlds are actually interacting all the time, just at a scale that is impossible for our instruments to detect.

Replies from: TimS
comment by TimS · 2012-04-21T23:12:35.104Z · LW(p) · GW(p)

Suppose I use the luck of Mat Cauthon to pick a random direction to fly my perfect spaceship. Assuming I live forever, do I eventually end up in a world that split from this world?

Replies from: pragmatist
comment by pragmatist · 2012-04-22T06:36:59.870Z · LW(p) · GW(p)

No. The splitting is not in physical space (the space through which you travel in a spaceship), but in configuration space. Each point in configuration space represents a particular arrangement of fundamental particles in real space.

Moving in real space changes your position in configuration space of course, but this doesn't mean you'll eventually move out of your old branch into a new one. After all, the branches aren't static. You moving in real space is a particular aspect of the evolution of the universal wavefunction. Specifically, your branch (your world) is moving in configuration space.

Don't think of the "worlds" in MWI as places. It's more accurate to think of them as different (evolving) narratives or histories. Splitting of worlds is a bifurcation of narratives. Moving around in real space doesn't change the narrative you're a part of, it just adds a little more to it. Narratives can collide, as in the double slit experiment, which leads to things appearing as if both (apparently incompatible) narratives are true -- the particle went through both slits. But we don't see this collision of narratives at the macroscopic level.

comment by orthonormal · 2012-04-21T02:53:45.505Z · LW(p) · GW(p)

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?

To expand on what pragmatist said: The wavefunction started off concentrated in a tiny corner of a ridiculously high-dimensional space (configuration space has several dimensions for every particle), and then spread out in a very non-uniform way as time passed.

In many cases, the wavefunction's rule for spreading out (the Schrödinger equation) allows for two "blobs" to "separate" and then "collide again" (thus the double-slit experiment, Feynman paths and all sorts of wavelike behavior). The quote marks around these are because it's not ever like perfect physical separation, more like the way that the pointwise sum of two Gaussian functions with very different means looks like two "separated" blobs.

But certain kinds of interactions (especially those that lead to a cascade of other interactions) correspond to those blobs "losing" each other. And if they do so, then it's highly unlikely they'll accidentally "collide" again later. (A random walk in a high-dimensional space never finds its way back, heuristically speaking.)

So, as long as the universe has relatively low entropy (as it will until what we would call the end of the universe), significant interference with "blobs" of wavefunction that "split off of our blob" in macroscopic ways a long time ago would be fantastically unlikely. Not impossible, just "a whale and a petunia spontaneously appear out of quantum noise" degree of improbability.
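A tiny sketch (mine, not orthonormal's) of the "two separated blobs" picture above: the pointwise sum of two Gaussians with very different means, and how negligible their overlap is.

```python
import math

def gaussian(x, mean, sigma=1.0):
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

mean_a, mean_b = 0.0, 30.0   # "very different means", measured in units of sigma

def blob_sum(x):
    return gaussian(x, mean_a) + gaussian(x, mean_b)

print(blob_sum(0.0), blob_sum(30.0))   # ~1.0 at each mean: two clear bumps
print(blob_sum(15.0))                  # ~3e-49 in between: essentially zero
# The product of the two blobs (a crude proxy for how much they can still
# interfere) peaks at the midpoint at exp(-225), about 2e-98: "separated"
# for all practical purposes, yet never exactly zero.
```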

comment by thomblake · 2012-04-20T20:54:40.026Z · LW(p) · GW(p)

What do people mean here when they say "acausal"?

As I understand it: If you draw out events as a DAG with arrows representing causality, then A acausally affects B in the case that there is no path from A to B, and yet a change to A necessitates a change to B, normally because of either a shared ancestor or a logical property.
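A minimal sketch of the "no path from A to B, yet they move together" situation, using a shared ancestor (the setup and names are my own illustration):

```python
import random

# Causal structure:  A <- C -> B   (C is a shared ancestor; there is no path A -> B)
def sample_world(seed):
    rng = random.Random(seed)
    C = rng.choice([0, 1])   # the shared ancestor
    A = C                    # A is caused by C
    B = C                    # B is caused by C
    return A, B

# A and B always agree even though neither causes the other: learning A tells you B
# (an evidential, "acausal" connection), but intervening on A alone would not move B.
print(all(a == b for a, b in (sample_world(s) for s in range(1000))))   # True
```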

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-20T21:25:05.717Z · LW(p) · GW(p)

I most often use it informally to mean "contrary to our intuitive notions of causality, such as the idea that causality must run forward in time", instead of something formal having to do with DAGs. Because from what I understand, causality theorists still disagree on how to formalize causality (e.g., what constitutes a DAG that correctly represents causality in a given situation), and it seems possible to have a decision theory (like UDT) that doesn't make use of any formal definition of causality at all.

comment by HonoreDB · 2012-04-20T23:29:44.402Z · LW(p) · GW(p)

Having seen the exchange that probably motivated this, one note: in my opinion, events can be linked both causally and acausally. The linked post gives an example. I don't think that's an abuse of language; we can say that people are simultaneously communicating verbally and non-verbally.

comment by Paul Crowley (ciphergoth) · 2012-04-22T09:13:05.188Z · LW(p) · GW(p)

Broadly, branches become less likely to interact as they become more dissimilar; dissimilarity tends to bring about dissimilarity, so they can quickly diverge to a point where the probability of further interaction is negligible. Note that at least as Eliezer tells it, branches aren't ontologically basic objects in MWI any more than chairs are; they're rough high-level abstractions we imagine when we think about MWI. If you want a less confused understanding than this, I don't have a better suggestion than actually reading the quantum physics sequence!

comment by thomblake · 2012-05-03T18:59:56.746Z · LW(p) · GW(p)

I found an excellent answer here

comment by Incorrect · 2012-04-20T20:52:12.432Z · LW(p) · GW(p)

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?

I think it depends on perspective. We notice worlds interfering with each other in the double slit experiment. I think that maybe once you are in a world you no longer see evidence of it interfering with other worlds? I'm not really sure.

Replies from: TimS
comment by TimS · 2012-04-20T21:34:46.570Z · LW(p) · GW(p)

Pretty sure double slit stuff is an effect of wave-particle duality, which is just as consistent with MWI as with waveform collapse.

comment by DanielLC · 2012-04-20T21:51:30.959Z · LW(p) · GW(p)

Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right?

Basically, if two of them evolve into the same "world", they interfere. It could be constructive or destructive. It averages out to be that it occurs as often as you'd expect, so outside of stuff like the double-slit experiment, they won't really interact.

Replies from: TimS
comment by TimS · 2012-04-20T22:34:48.179Z · LW(p) · GW(p)

Hmm. I'm also pretty sure that the double-slit experiments are not evidence of MWI vs. waveform collapse.

Replies from: DanielLC, vi21maobk9vp
comment by DanielLC · 2012-04-21T18:34:15.507Z · LW(p) · GW(p)

They are evidence against wave-form collapse, in that they give a lower bound as to when it must occur. Since, if it does exist, it's fairly likely that waveform collapse happens at a really extreme point, there's really only a fairly small amount of evidence you can get against waveform collapse without something that disproves MWI too. The reason MWI is more likely is Occam's razor, not evidence.

comment by vi21maobk9vp · 2012-04-21T05:38:13.894Z · LW(p) · GW(p)

Well, I tried to understand some double-slit corner cases. Reading a classical Copenhagen-style quantum physics textbook, it is hard to describe what happens if you install an imprecise particle detector that is securely protected from any attempt by the experimenter to ever read it.

Of course, in some cases the Penrose model and MWI are hard to distinguish, because gravitons are hard to screen off and can cause entanglement over large distances.

comment by Incorrect · 2012-04-20T20:50:13.515Z · LW(p) · GW(p)

What do people mean here when they say "acausal"?

Non-traditional notions of causality as in TDT such as causality that runs backwards in time.

comment by Will_Newsome · 2012-04-20T22:56:50.372Z · LW(p) · GW(p)

"Acausal" means formal or final as opposed to efficient causality.

Replies from: ciphergoth, David_Gerard
comment by Paul Crowley (ciphergoth) · 2012-04-23T07:51:52.307Z · LW(p) · GW(p)

Also, a monad is just a monoid in the category of endofunctors.

comment by David_Gerard · 2012-04-20T23:06:31.327Z · LW(p) · GW(p)

That looks like precise jargon. Where should I look up the words you just used?

Replies from: hairyfigment, Will_Newsome
comment by hairyfigment · 2012-04-20T23:17:12.103Z · LW(p) · GW(p)

Philosophy sources, eg the online Stanford Encyclopedia thereof or some work on Aristotle. But I don't recommend you bother. "Formal" means a logical implication. "Final" suggests a purpose, which makes sense in the context of decision theory.

comment by Will_Newsome · 2012-04-20T23:19:56.363Z · LW(p) · GW(p)

Aristotelian tradition. I'm sure you could find a lot of similarly motivated classifications in the cybernetics and complex systems literature if you're not into old school metaphysics.

comment by FiftyTwo · 2012-04-22T03:38:15.665Z · LW(p) · GW(p)

People talk about using their 'mental model' of person X fairly often. Is there an actual technique for doing this or is it just a turn of phrase?

Replies from: siodine, tut, TheOtherDave, beoShaffer
comment by siodine · 2012-04-22T18:44:31.806Z · LW(p) · GW(p)

It's from psychology: it's where an intelligence develops a model of a thing and then mentally simulates what will happen to it given X. Caledonian crows are capable of developing mental models in solving problems, for example. A mental model of a person is basically where you've acquired enough information about them to approximate their intentions or actions. (Or you might use existing archetypes or your own personality for modeling--put yourself in their shoes.) For example, a lot of the off-color jokes by Jimmy Carr would seem malicious from a stranger, misogynist, or racist, whereas you can see with someone like Carr that the intention is to derive amusement from the especially offensive rather than to denigrate a group.

comment by tut · 2012-04-22T13:56:22.166Z · LW(p) · GW(p)

Not a conscious technique. When you get to know a person you form some kind of brain structure which allows you to imagine that person when they aren't around, and which makes some behaviors seem more realistic/in character for them than some other behaviors. This structure is your mental model of that person.

comment by TheOtherDave · 2012-04-22T05:52:05.895Z · LW(p) · GW(p)

I use this phrase a lot; in my case it's just a turn of phrase. Can't speak for anyone else, though.

comment by beoShaffer · 2012-04-22T04:33:27.206Z · LW(p) · GW(p)

Sorta both. Basically your mental model of someone is anything internal that you use to predict their (re)actions.

comment by [deleted] · 2012-04-20T19:40:21.721Z · LW(p) · GW(p)

Why are you called OpenThreadGuy instead of OpenThreadPerson?

Replies from: OpenThreadGuy, gRR
comment by OpenThreadGuy · 2012-04-20T20:52:41.805Z · LW(p) · GW(p)

I'm a guy.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-20T22:20:25.518Z · LW(p) · GW(p)

Thanks for a most useful thread, by the way.

comment by gRR · 2012-04-20T20:12:01.382Z · LW(p) · GW(p)

Let it be OpenThreadGuy and OpenThreadLady, by turns.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-20T21:49:58.362Z · LW(p) · GW(p)

OpenThreadAgent, you speciesist.

Replies from: Jayson_Virissimo, Thomas
comment by Jayson_Virissimo · 2012-04-23T08:14:02.430Z · LW(p) · GW(p)

Way to be open-minded, you personist.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-23T16:13:31.580Z · LW(p) · GW(p)

Hey! My cat counts as an agent, though I'm not sure if he counts as a person. So some of my favourite domesticated dependents are agents that aren't people!

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-04-23T18:37:40.641Z · LW(p) · GW(p)

Be thankful that the nearest ant colony has more useful tasks to do than asking you how it should be divided up with respect to agency.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-23T18:43:48.702Z · LW(p) · GW(p)

They're unlikely to post here unless they've read GEB.

comment by Thomas · 2012-04-20T22:36:24.921Z · LW(p) · GW(p)

Well ... a script could open threads like "open thread for a (half of) a month" and "the monthly rationality quotes" and some others automatically, driven only by the calendar.

comment by hairyfigment · 2012-04-20T20:26:45.812Z · LW(p) · GW(p)

Does ZF assert the existence of an actual formula, that one could express in ZF with a finite string of symbols, defining a well-ordering on the-real-numbers-as-we-know-them? I know it 'proves' the existence of a well-ordering on the set we'd call the real numbers if we endorsed the statement "V=L". I want to know about the nature of that set, and how much ZF can prove without V=L or any other form of choice.

Replies from: vi21maobk9vp, Thomas, gjm, Incorrect
comment by vi21maobk9vp · 2012-04-20T20:57:27.504Z · LW(p) · GW(p)

Nope.

ZF is consistent with many negations of strong choice. For example, ZF is consistent with Lebesgue measurability of every subset of R. A well-ordering of R is enough to create an unmeasurable set.

So, if ZF could prove the existence of such a formula, ZF + measurability would prove a contradiction; but ZF + measurability is consistent (relative to ZFC plus an inaccessible cardinal, by Solovay's model), so ZF cannot prove that such a formula exists.

It is very hard to say anything about any well-ordering of R, they are monster constructions...
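For reference, here is the standard construction being alluded to, sketched loosely (the presentation is mine): given a well-ordering \(\preceq\) of \(\mathbb{R}\), let

\[
V \;=\; \{\, \text{the } \preceq\text{-least element of } (x+\mathbb{Q}) \cap [0,1] \;:\; x \in [0,1] \,\}.
\]

The translates \(V+q\) for \(q \in \mathbb{Q} \cap [-1,1]\) are pairwise disjoint, cover \([0,1]\), and lie inside \([-1,2]\); countable additivity would then give \(1 \le \sum_q \mu(V) \le 3\), which fails whether \(\mu(V)=0\) or \(\mu(V)>0\). So a formula defining a well-ordering of \(\mathbb{R}\) would yield a formula defining an unmeasurable set.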

Replies from: hairyfigment, ShardPhoenix, hairyfigment
comment by hairyfigment · 2012-04-20T21:12:34.539Z · LW(p) · GW(p)

A well-ordering of R is enough to create an unmeasurable set.

Does a well-ordering on the constructable version of R provably do this? I fear I can't tell if you've addressed my question yet.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-04-21T05:44:20.260Z · LW(p) · GW(p)

The constructible version of R (if it is a proper subset of R rather than the whole thing) is just like Q: a dense, small part of the whole, and typically of measure zero. So this construction will yield something of measure zero if you define measure on the whole of R.

comment by ShardPhoenix · 2012-04-21T09:13:57.029Z · LW(p) · GW(p)

Here's a sort of related argument (very much not a math expert here): Any well ordering on the real numbers must be non-computable. If there was a computable order, you could establish a 1-1 correspondence between the natural numbers and the reals (since each real would be emitted on the nth step of the algorithm).

Is that remotely right?

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-04-21T10:13:29.736Z · LW(p) · GW(p)

Well, yes, but mostly because most real numbers cannot ever be specified to an algorithm or received back from it. So you are right, ordering of R is about incomputable things.

The difference here is that we talk about ZFC formulas which can include quite powerful tricks easily.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2012-04-22T01:36:09.435Z · LW(p) · GW(p)

Oh right, I forgot that real numbers could be individually non-computable in the first place.

Replies from: Sniffnoy
comment by Sniffnoy · 2012-04-22T04:53:59.722Z · LW(p) · GW(p)

This is true, but not, I think, the correct point to focus on.

The big obstacle is that the real numbers are uncountable. Of course, their uncountability is also why there exist uncomputable reals, but let's put that aside for now, because the computability of individual reals is not the point.

The point is that computers operate on finite strings over finite alphabets, and there are only countably many of these. In order to do anything with a computer, you must first translate it into a problem about finite strings over a finite alphabet. (And the encoding matters, too -- "compute the diameter of a graph" is not a well-specified problem, because "compute the diameter of a graph, specified as an adjacency matrix" and "compute the diameter of a graph, specified with connectivity lists" are different problems. And of course I was implicitly assuming here that the output was in binary -- if we wanted it in unary, that would again technically be a different problem.)

So the whole idea of an algorithm that operates on or outputs real numbers is nonsensical, because there are only countably many finite strings over a given finite alphabet, and so no encoding is possible. Only specified subsets of real numbers with specified encodings can go into computers.

Notice that the whole notion of the computability of a real number appears nowhere in the above. It's also a consequence of uncountability, but really not the point here.

I'm going to go into your original comment a bit more here:

If there was a computable order, you could establish a 1-1 correspondence between the natural numbers and the reals (since each real would be emitted on the nth step of the algorithm).

You seem to be under the impression that a countably infinite well-ordering must be isomorphic to omega, or at least that a computable one must be. I don't think that would make for a useful definition of a computable well-ordering. Normally we define a well-ordering to be computable if it is computable as a relation, i.e. there is an algorithm that given two elements of your set will compare them according to your well-ordering. We can then define a well-order type (ordinal) to be computable if it is the order type of some computable well-ordering. By this definition, omega+1, and basically all the countable ordinals you usually see, are computable (well, except for the ones explicitly made not to be).
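A minimal illustration of the last point (my own encoding): a computable well-ordering of the naturals whose order type is omega+1 rather than omega.

```python
def precedes(n, m):
    """A computable well-ordering of the naturals with order type omega + 1:
    order 1, 2, 3, ... as usual, and put 0 after all of them."""
    if n == m:
        return False
    if m == 0:
        return True      # everything other than 0 comes before 0
    if n == 0:
        return False     # 0 comes after everything else
    return n < m

print(precedes(5, 7), precedes(7, 5), precedes(10**9, 0), precedes(0, 3))
# True False True False -- the relation is decidable, yet its order type is not omega.
```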

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-04-22T09:56:32.414Z · LW(p) · GW(p)

So the whole idea of an algorithm that operates on or outputs real numbers is nonsensical

You can work with programs over infinite streams in certain situations. For example, you can write a program that divides a real number by 2, taking an infinite stream as input and producing another infinite stream as output. Similarly, you can write a program that compares two unequal real numbers.
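A sketch of the kind of stream program this is pointing at (the encoding and helper names are my own): reals in [0,1) as infinite binary digit streams, with halving and comparison of unequal reals.

```python
from itertools import islice, count

def third():
    """1/3 = 0.010101... in binary, as an infinite digit stream."""
    for i in count():
        yield i % 2

def halve(digits):
    """Divide a real in [0,1) by 2: shift its binary expansion one place right."""
    yield 0
    yield from digits

def less_than(xs, ys):
    """Compare two reals given as digit streams that avoid an infinite tail of 1s.
    If the reals are unequal, their streams differ at some digit and the first
    difference decides the order; if they were equal, this would loop forever
    (equality of arbitrary reals is not decidable from their digits)."""
    for a, b in zip(xs, ys):
        if a != b:
            return a < b

print(list(islice(halve(third()), 8)))      # [0, 0, 1, 0, 1, 0, 1, 0] -> 1/6
print(less_than(halve(third()), third()))   # True: 1/6 < 1/3
```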

Replies from: Sniffnoy
comment by Sniffnoy · 2012-04-22T20:32:24.670Z · LW(p) · GW(p)

True, I forgot about that. I guess that would allow extending the notion of "computable well-ordering" to sets of size 2^aleph_0. Of course that doesn't necessarily mean any of them would actually be computable, and I expect none of them would. Well, I guess it has to be the case that none of them would, or else we ought to be able to prove "R is well-orderable" without choice, but there ought to be a simpler argument than that -- just as computable ordinals in the ordinary sense are downclosed, I would expect that these too ought to be downclosed, which would immediately imply that none of the uncountable ones are computable. Now I guess I should actually check that they are downclosed...

Replies from: JoshuaZ
comment by JoshuaZ · 2012-04-22T20:50:56.302Z · LW(p) · GW(p)

What does it mean to be downclosed?

Replies from: Sniffnoy
comment by Sniffnoy · 2012-04-22T22:25:42.177Z · LW(p) · GW(p)

I mean closed downwardly -- if one is computable, then so is any smaller one. (And so the Church-Kleene ordinal is the set of all computable ordinals.)

comment by hairyfigment · 2012-04-20T21:22:09.408Z · LW(p) · GW(p)

Upvoted on reflection for the part I quoted, which either answers the question or gives us some reason to distinguish R-in-L (the set we'd call the real numbers if we endorsed the statement "V=L") from "the-real-numbers-as-we-know-them".

comment by Thomas · 2012-04-20T21:46:00.126Z · LW(p) · GW(p)

ZF does not imply a well-ordering of R; ZFC does. ZF with the Axiom of Choice.

Strictly speaking.

comment by gjm · 2012-04-21T00:19:39.387Z · LW(p) · GW(p)

Yes. In ZF one can construct an explicit well-ordering of L(alpha) for any alpha; see e.g. Kunen ch VI section 4. The natural numbers are in L(omega) and so the constructible real numbers are in L(omega+k) for some finite k whose value depends on exactly how you define the real numbers; so a well-ordering of L(omega+k) gives you a well ordering of R intersect L.

I'm not convinced that R intersect L deserves the name of "the-real-numbers-as-we-know-them", though.

Replies from: vi21maobk9vp, hairyfigment, vi21maobk9vp
comment by vi21maobk9vp · 2012-04-21T05:46:44.638Z · LW(p) · GW(p)

Separate concern: why would the constructible real numbers appear only finitely many levels higher than Q? Can't it be that there are some elements of (say) 2^Q that cannot be pinpointed until a much higher ordinal?

Of course, there is still a formula that specifies a high enough ordinal to contain all members of R that are actually constructible.

Replies from: hairyfigment, gjm
comment by hairyfigment · 2012-05-27T06:01:18.572Z · LW(p) · GW(p)

I figured out the following after passing the Society of Actuaries exam on probability (woot!) when I had time to follow the reference in the grandparent:

The proof that |R|=|2^omega| almost certainly holds in L. And gjm may have gotten confused in part because L(omega+1) seems like a natural analog of 2^omega. It contains every subset of omega we can define using finitely many parameters from earlier stages. But every subset of omega qualifies as a subset of every later stage L(a>omega), so it can exist as an element in L(a+1) if we can define it using parameters from L(a).

As another likely point of confusion, we can show that each individual constructible subset of omega already appears at some countable stage L(a+1), with a below omega_1; this says that if V=L then 2^omega must stay within (or equal) L(omega_1). The same proof tells us that L satisfies the generalized continuum hypothesis.

comment by gjm · 2012-04-21T09:02:49.087Z · LW(p) · GW(p)

Um. You might well be right. I'll have to think about that some more. It's years since I studied this stuff...

comment by hairyfigment · 2012-05-27T04:30:11.528Z · LW(p) · GW(p)

While some of the parent turns out not to hold, it helped me to find out what the theory really says (now that I have time).

comment by vi21maobk9vp · 2012-04-21T05:13:58.085Z · LW(p) · GW(p)

Let's see. Assume the measurability axiom: every subset of R has a Lebesgue measure. Since we can run the usual construction of an unmeasurable set on L intersect R, our only escape option is that L intersect R has zero measure.

So if we assume measurability, L intersect R is a dense zero-measure subset, just like Q. These are the reals we can know individually, but not the reals-as-a-whole that we know...

Replies from: gjm
comment by gjm · 2012-04-21T09:01:47.219Z · LW(p) · GW(p)

Seems reasonable to me.

comment by Incorrect · 2012-04-20T23:13:55.151Z · LW(p) · GW(p)

Is this a correct statement of what a well-ordering of R is?

\[
\text{well-order}(A) \iff \Big( \text{total-order}(A, \mathbb{R}) \;\land\; \forall B\, \big( B \subseteq \mathbb{R} \land B \neq \emptyset \implies \exists x \in B\; \forall y \in B\; ((x \preceq y) \in A) \big) \Big)
\]

\[
\forall A\, \forall B\, \Big( \text{total-order}(A, B) \iff \forall a\, \forall b\, \forall c\, \big( a, b, c \in B \implies \big( ((a \preceq b) \in A \lor (b \preceq a) \in A) \;\land\; ((a \preceq b) \in A \land (b \preceq a) \in A \implies a = b) \;\land\; ((a \preceq b) \in A \land (b \preceq c) \in A \implies (a \preceq c) \in A) \big) \big) \Big)
\]

Replies from: gjm
comment by gjm · 2012-04-21T00:06:00.779Z · LW(p) · GW(p)

Looks OK to me, though I can't guarantee that there isn't a subtle oops I haven't spotted. (Of course it assumes you've got some definition for what sort of a thing (a <= b) is; you might e.g. use a Kuratowski ordered pair {{a},{a,b}} or something.)

comment by MarkusRamikin · 2012-04-21T09:35:10.450Z · LW(p) · GW(p)

http://wiki.lesswrong.com/mediawiki/index.php?title=Jargon

Belief update What you do to your beliefs, opinions and cognitive structure when new evidence comes along.

I know what it means to update your beliefs, and opinions are again beliefs. What does it mean to "update your cognitive structure"? Does it mean anything or is it just that whoever wrote it needed a third noun for rhythm purposes?

Replies from: David_Gerard, DuncanS
comment by David_Gerard · 2012-04-21T10:28:43.699Z · LW(p) · GW(p)

I wrote that line in the jargon file quoting the first line of the relevant wiki page. The phrase was there in the first revision of that page - put there by an IP. If that IP is still present and could explain ...

I could come up with surmises as to what it could plausibly mean, but I'd be making them up and it isn't actually clear to me in April 2012 either.

comment by DuncanS · 2012-04-21T22:39:46.440Z · LW(p) · GW(p)

It's the process of changing your mind about something when new evidence comes your way.

The different jargon acts as a reminder that the process ought not to be an arbitrary one, but (well, in an ideal world anyway) should follow the evidence in the way defined by Bayes' theorem.

I don't think there's any particular definition of what constitutes belief, opinion and cognitive structure. It's all just beliefs, although some of it might then be practised habit.
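For reference, the update rule being pointed at is the standard one (my addition):

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)},
\]

so "updating" just means replacing the prior \(P(H)\) with the posterior \(P(H \mid E)\) once the evidence \(E\) is in.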

comment by ahartell · 2012-04-21T01:48:11.820Z · LW(p) · GW(p)

What are the basic assumptions of utilitarianism and how are they justified? I was talking about ethics with a friend and after a bunch of questions like "Why is utilitarianism good?" and "Why is it good for people to be happy?" I pretty quickly started to sound like an idiot.

Replies from: orthonormal, Alicorn, Larks
comment by orthonormal · 2012-04-21T02:57:57.154Z · LW(p) · GW(p)

I like this (long) informal explanation, written by Yvain.

comment by Larks · 2012-04-21T08:19:59.208Z · LW(p) · GW(p)
  • Each possible world-state has a value
  • The value of a world-state is determined by the amount of value for the individuals in it
  • The function that determines the value of a world state is monotonic in its arguments (we often, but not always, require linearity as well)
  • The function that determines the value of a world state does not depend on the order of its arguments (a world where you are happy and I am sad is the same as one where I am happy and you are sad)
  • The rightness of actions is determined wholly by the value of their (expected) consequences.

and then either

  • An action is right iff no other action has better (expected) consequences

or

  • An action is right in proportion to the goodness of its consequences (a toy encoding of these clauses follows below)
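A toy encoding of those clauses (entirely my own sketch; taking the value of a world-state to be the plain sum of individual values is the simplest monotonic, order-independent choice, not the only one):

```python
# World-states are tuples of per-person values; each action is simplified here
# to lead to a single certain world-state rather than a lottery over them.

def world_value(world):
    # Monotonic in each argument and independent of their order (symmetry):
    return sum(world)          # the simple "total" choice; could be any such function

actions = {
    "A": (5, 5, 5),            # everyone does moderately well
    "B": (12, 1, 1),           # one person does very well, two do badly
}

best = max(actions, key=lambda a: world_value(actions[a]))
print({a: world_value(w) for a, w in actions.items()}, "->", best)
# {'A': 15, 'B': 14} -> A   (under the first, maximizing criterion for rightness)
```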
comment by Bart119 · 2012-04-24T17:05:28.297Z · LW(p) · GW(p)

I've been aware of the concept of cognitive biases going back to 1972 or so, when I was a college freshman. I think I've done a decent job of avoiding the worst of them -- or at least better than a lot of people -- though there is an enormous amount I don't know and I'm sure I mess up. Less Wrong is a very impressive site for looking into nooks and crannies and really following things through to their conclusions.

My initial question is perhaps about the social psychology of the site. Why are two popular subjects here (1) extending lifespan, including cryogenics, (2) increasingly powerful AIs leading to a singularity. Is there an argument that concern for these things is somehow derivable from a Bayesian approach? Or is it more or less an accident that these things are of interest to the people here?

Examples of other things that might be of interest could be (a) "may I grow firmer, quieter, warmer" (rough paraphrase of Dag Hammarskjold), (b) I want to make the very best art, (c) economics rules and the key problem is affording enough for everyone. I'm not saying those are better, just that they're different. Are there reasons people here talk about the one set and not the other?

Replies from: Alejandro1, gwern, MinibearRex
comment by Alejandro1 · 2012-04-24T18:11:26.113Z · LW(p) · GW(p)

Welcome to LW! You pose an interesting question.

My initial question is perhaps about the social psychology of the site. Why are two popular subjects here (1) extending lifespan, including cryogenics, (2) increasingly powerful AIs leading to a singularity. Is there an argument that concern for these things is somehow derivable from a Bayesian approach? Or is it more or less an accident that these things are of interest to the people here?

I think there is a purely sociological explanation. LW was started by Eliezer Yudkowsky, who is a transhumanist and an AI researcher very concerned about the singularity, and his writings at Overcoming Bias (the blog from which LW was born by splitting) naturally tended to attract people with the same interests. But as LW grows and attracts more diverse people, I don't see why transhumanism/futurism related topics must necessarily stay at the forefront, though they might (path-dependence effect). I guess time will tell.

Examples of other things that might be of interest could be (a) "may I grow firmer, quieter, warmer" (rough paraphrase of Dag Hammarskjold), (b) I want to make the very best art, (c) economics rules and the key problem is affording enough for everyone. I'm not saying those are better, just that they're different. Are there reasons people here talk about the one set and not the other?

If you have something interesting to say about these topics and the application of rationality to them, by all means do! However, about topic (c) you must bear in mind that there is a community consensus to avoid political discussions, which often translates to severely downvoting any post that maps too closely to an established political/ideological position.

comment by gwern · 2012-04-24T18:59:08.809Z · LW(p) · GW(p)

Why are two popular subjects here (1) extending lifespan, including cryogenics

This is factually false. I suspect if you looked through the last 1000 Articles or Discussion posts, you'd find <5% on life extension (including cryonics) and surely <10%.

Cryonics does not even command much support; in the last LW survey, 'probability cryonics will work' averaged 21%; 4% of LWers were signed up, 36% opposed, and 54% merely 'considering' it. So if you posted something criticizing cryonics (which a number of my posts could be construed as...), you would be either supported or regarded indifferently by ~90% of LW.

Replies from: Vladimir_Nesov, Bart119
comment by Vladimir_Nesov · 2012-04-24T20:11:55.062Z · LW(p) · GW(p)

As I wrote in a comment to the survey results post, the interpretation of assignment of low probability to cryonics as some sort of disagreement or opposition is misleading:

... if ... probability of global catastrophe ... [is] taken into account ... even though I'm almost certain that cryonics fundamentally works, I gave only something like 3% probability. Should I really be classified as "doesn't believe in cryonics"?

Replies from: gwern
comment by gwern · 2012-04-24T20:21:47.002Z · LW(p) · GW(p)

Of course not. Why the low probability is important is because it defeats the simplistic non-probabilistic usual accounts of cultists as believing in dogmatic shibboleths; if Bart119 were sophisticated enough to say that 10% is still too much, then we can move the discussion to a higher plane of disagreement than simply claiming 'LW seems obsessed with cryonics', hopefully good arguments like '$250k is too much to pay for such a risky shot at future life' or 'organizational mortality implies <1% chance of cryopreservation over centuries and the LW average is shockingly optimistic' etc.

To continue your existential risk analogy, this is like introducing someone to existential risks and saying it's really important stuff, and then them saying 'but all those risks have never happened to us!' This person clearly hasn't grasped the basic cost-benefit claim, so you need to start at the beginning in a way you would not with someone who immediately grasps it and makes a sophisticated counter-claim like 'anthropic arguments show that existential risks have been overestimated'.

comment by Bart119 · 2012-04-24T19:38:16.251Z · LW(p) · GW(p)

Where can I find survey results? I had just been thinking I'd be interested in a survey, also hopefully broken down by frequency of postings and/or karma. But if they've been done, in whatever form, great.

Replies from: gwern
comment by MinibearRex · 2012-04-26T21:19:27.027Z · LW(p) · GW(p)

Why are two popular subjects here (1) extending lifespan, including cryogenics, (2) increasingly powerful AIs leading to a singularity. Is there an argument that concern for these things is somehow derivable from a Bayesian approach? Or is it more or less an accident that these things are of interest to the people here?

The short answer is that the people who originally created this site (the SIAI, FHI, Yudkowsky, etc.) were all people who were working on these topics as their careers, and using Bayesian rationality in order to do those things. So the community was initially made up, in large part, of people who were interested in both those topics and rationality. There is a bit more variation in this group now, but it's still generally true.

comment by moridinamael · 2012-04-21T20:33:38.620Z · LW(p) · GW(p)

Why shouldn't I go buy a lottery ticket with quantum-randomly chosen numbers, and then, if I win, perform 1x10^17 rapid quantum decoherence experiments, therefore creating more me-measure in the lottery-winning branch and virtually guaranteeing that any given me-observer-moment will fall within a universe where I won?

Replies from: army1987, BlazeOrangeDeer
comment by A1987dM (army1987) · 2012-04-21T20:53:59.622Z · LW(p) · GW(p)

You're thinking according to what pragmatist calls the “old version” of MWI. In the modern understanding, it's not that a universe splits into several when a quantum measurement is performed -- it's more like the ‘universes’ were ‘there’ all along, but the differences between them used to be in degrees of freedom you don't care about (such as the thermal noise in the measurement apparatus).

comment by BlazeOrangeDeer · 2012-04-24T04:54:24.409Z · LW(p) · GW(p)

You-measure is conserved in each branch, I believe. You can't make more of it, only more fragments of it.
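In symbols, under the usual Born-weight reading (my framing, not BlazeOrangeDeer's): if the lottery-winning branch has weight \(p \approx 10^{-8}\) and you then split it into \(N = 10^{17}\) decoherent sub-branches, each sub-branch has weight \(p/N\), and

\[
\sum_{k=1}^{N} \frac{p}{N} \;=\; p,
\]

so the total weight of "I won" observer-moments is unchanged; you have made more fragments of it, not more of it.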

comment by maia · 2012-05-01T19:43:52.218Z · LW(p) · GW(p)

Re: utilitarianism: Is there any known way to resolve the Utility Monster issue?

Replies from: wedrifid
comment by wedrifid · 2012-05-28T18:42:20.064Z · LW(p) · GW(p)

Re: utilitarianism: Is there any known way to resolve the Utility Monster issue?

Yes, don't be utilitarian. Seriously. The "utility monster" isn't a weird anomaly. It is just what utilitarianism is fundamentally about, but taken to an extreme where the problem is actually obvious to those who otherwise just equate 'utilitarianism' with 'egalitarianism'.

comment by TimS · 2012-04-21T19:13:55.604Z · LW(p) · GW(p)

I obviously do not understand quantum mechanics as well as I thought, because I thought this comment and this comment were saying the same thing, but karma indicates differently. Can someone explain my mistake?

Replies from: falenas108
comment by falenas108 · 2012-04-21T20:07:06.376Z · LW(p) · GW(p)

The first comment says that the double-slit experiment is consistent with both hypotheses, but the second adds that it is just as likely under MWI as under waveform collapse.

Analogy: there are two possible bag arrangements, one filled with 5 green balls and 5 red balls, and the other with 4 green balls and 6 red balls. It's true that drawing a green ball is consistent with both, but it's more likely with the first bag than with the second.
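Running the numbers on that analogy (the equal prior on the two bags is my assumption):

```python
# P(green | bag1) = 5/10, P(green | bag2) = 4/10, with equal priors on the bags.
prior = {"bag1": 0.5, "bag2": 0.5}
likelihood_green = {"bag1": 0.5, "bag2": 0.4}

evidence = sum(prior[b] * likelihood_green[b] for b in prior)              # P(green) = 0.45
posterior = {b: prior[b] * likelihood_green[b] / evidence for b in prior}
print(posterior)   # {'bag1': 0.555..., 'bag2': 0.444...}: drawing green favors bag 1
```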

Replies from: TimS
comment by TimS · 2012-04-21T20:21:54.754Z · LW(p) · GW(p)

I see what you mean. But I thought "ripples of one wave affected the other wave" was the accepted interpretation of the double slit experiment. In other words, the double slit experiments prove the wave-particle duality. I wasn't aware that the wave-particle duality was considered evidence in favor of MWI.

Replies from: Torben, BlazeOrangeDeer, tut, falenas108
comment by Torben · 2012-04-25T12:34:42.346Z · LW(p) · GW(p)

In Fabric of Reality, David Deutsch claims the double-slit experiment is evidence of photons interfering with photons in other worlds.

comment by BlazeOrangeDeer · 2012-04-24T05:01:54.236Z · LW(p) · GW(p)

"Wave-particle duality" pretty much just means particles that obey the schrodinger wave equation, I think. And it could be more evidence for one theory than another if one theory was vague, ambiguous, etc. The more specific theory gets more points if it matches experiment.

comment by tut · 2012-04-22T13:49:26.446Z · LW(p) · GW(p)

It is evidence of (what we now know as) quantum mechanics. MWI is just an interpretation of QM, so there isn't really evidence for MWI that isn't also evidence for the other interpretations according to the people who favor them.

comment by falenas108 · 2012-04-21T21:23:30.288Z · LW(p) · GW(p)

I don't know nearly enough about QM to say whether or not that's true, I was just going off what was said in response to your second comment. However, that doesn't seem to have any upvotes, so it may not be correct either.

comment by Randaly · 2012-04-25T16:55:42.902Z · LW(p) · GW(p)

I did not understand Wei Dai's explanation of how UDT can reproduce updating when necessary. Can somebody explain this to me in smaller words?

(Showing the actual code that output the predictions in the example, instead of shunting it off in "prediction = S(history)," would probably also be useful. I also don't understand how UDT would react to a simpler example: a quantum coinflip, where U(action A|heads)=0, U(action B|heads)=1, U(action A|tails)=1, U(action B|tails)=0.)

comment by Mark_Eichenlaub · 2012-04-21T22:42:44.904Z · LW(p) · GW(p)

What's 3^^^3?

Is this Knuth's arrow notation?

Replies from: Mark_Eichenlaub
comment by Mark_Eichenlaub · 2012-04-21T22:44:59.192Z · LW(p) · GW(p)

wait, that was easier to search than I thought. http://lesswrong.com/lw/kn/torture_vs_dust_specks/

Yes, it is Knuth's arrow notation.
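A small sketch of the notation (only safe for tiny inputs; 3^^^3 itself is far beyond anything you could ever evaluate):

```python
def up_arrow(a, b, n):
    """Knuth's up-arrow notation: a, followed by n arrows, followed by b.
    One arrow is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, up_arrow(a, b - 1, n), n - 1)

print(up_arrow(3, 3, 1))   # 3^3 = 27
print(up_arrow(3, 3, 2))   # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s that is 7,625,597,484,987 levels tall.
# Do not call up_arrow(3, 3, 3) -- it would never finish.
```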

comment by A1987dM (army1987) · 2012-06-07T14:02:14.255Z · LW(p) · GW(p)

How come, in some of the posts that were imported from Overcoming Bias, some of the comments are out of sequence even when the “Sort By:” setting is locked to “Old”? Same applies to the karma scores at http://lesswrong.com/topcomments/ -- right now, setting the filter to “This week”, the second comment is at 32 whereas the third is at 33, and sometimes when I set the filter to “Today” I even get a few negative-score comments.

Is there a better place to ask similar questions?

comment by bramflakes · 2012-06-02T17:47:59.919Z · LW(p) · GW(p)

I have a question:

If I'm talking with someone about something that they're likely to disbelieve at first, is it correct to say that the longer the conversation goes on, the more likely they are to believe me? The reasoning goes that after each pause or opportunity to interrupt they can either interrupt and disagree, or do nothing (perhaps nod their head, but it's not required). If they interrupt and disagree then obviously that's evidence in favor of them disbelieving. However, if they don't, then is that evidence in favor of them believing?

Replies from: DanArmak
comment by DanArmak · 2012-07-07T19:07:41.052Z · LW(p) · GW(p)

If they interrupt and disagree then obviously that's evidence in favor of them disbelieving. However, if they don't, then is that evidence in favor of them believing?

If X is evidence of A, then ~X (not-X) is evidence of ~A. They are two ways of looking at the same thing - it's the same evidence. This is called conservation of expected evidence.

So if your premise is true, then your conclusion is necessarily also true.

Please note that this says nothing about whether your premise is indeed true. If you have doubts that "not disagreeing indicates belief", that is exactly the same as having doubts that "disagreeing indicates disbelief". The two propositions may sound different, one may sound more correct than the other, but that is an accident of phrasing: from a Bayesian point of view the two are strictly equivalent.
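A quick numeric illustration of that identity, with made-up numbers for the interruption example: since P(A) = P(A|X)P(X) + P(A|~X)P(~X), if seeing X would raise your credence in A, then not seeing X must lower it.

```python
# Conservation of expected evidence, with illustrative (made-up) numbers.
# A = "they disbelieve me", X = "they interrupt and disagree".
p_x = 0.3          # prior probability that they interrupt
p_a_given_x = 0.9  # disbelief is likely if they interrupt
p_a = 0.5          # prior probability of disbelief

# P(A) = P(A|X)P(X) + P(A|~X)P(~X)  =>  solve for P(A|~X)
p_a_given_not_x = (p_a - p_a_given_x * p_x) / (1 - p_x)
print(p_a_given_not_x)  # ~0.329: silence really is (weak) evidence of belief

# The prior must equal the expectation of the posterior, so the two
# possible updates can't both point in the same direction.
assert abs(p_a_given_x * p_x + p_a_given_not_x * (1 - p_x) - p_a) < 1e-12
```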

Replies from: bramflakes
comment by bramflakes · 2012-07-08T10:33:43.537Z · LW(p) · GW(p)

Thanks -- I knew this was conservation of expected evidence; I just wasn't sure if I was using it correctly.

comment by FiftyTwo · 2012-05-02T01:23:57.878Z · LW(p) · GW(p)

The standard utilitarian argument for pursuing knowledge, even when it is unpleasant to know, is that greater knowledge makes us better able to take actions that fulfil our desires, and hence make us happy.

However, the psychological evidence is that our introspective access to our desires, and our ability to predict what circumstances will make us happy, are terrible.

So why should we seek additional knowledge if we can't use it to make ourselves happier? Surely we should live in a state of blissful ignorance as much as possible.

Replies from: wedrifid
comment by wedrifid · 2012-05-28T18:44:28.449Z · LW(p) · GW(p)

The standard utilitarian argument for pursuing knowledge

So why should we seek additional knowledge if we can't use it to make ourselves happier? Surely we should live in a state of blissful ignorance as much as possible.

Because I'm not utilitarian. I'll care about being happy to whatever extent I damn well please. Which is "a fair bit but it is not the most important thing".

"The opposite of happiness is not sadness, but boredom." --Tim Ferris.

comment by James_Evans · 2012-04-24T21:22:02.099Z · LW(p) · GW(p)

About Decision Theory, specifically DT relevant to LessWrong.

Since there is quite a lot of advanced material already on LW that seems as if it would be very helpful to someone who is near the end of, or beyond, an intermediate stage:

Various articles: http://lesswrong.com/r/discussion/tag/decision_theory/ http://lesswrong.com/r/discussion/tag/decision/

And the recent video (and great transcript): http://lesswrong.com/lw/az7/video_paul_christianos_impromptu_tutorial_on_aixi/

And there are a handful of books that seem relevant to overall decision making, but none specifically for Decision Theory on the textbook list: http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/

What are currently the best textbooks, websites, and other resources for learning Decision Theory (with the goal of becoming useful at the cutting edge of LW's DT and of making sense of the above), for someone whose math education goes up to basic calculus?

EDIT: While looking around LW for DT-related things, I happened to notice the links at the top of http://lesswrong.com/lw/b7w/decision_theories_a_semiformal_analysis_part_iii/ , one of which is http://lesswrong.com/lw/aq9/decision_theories_a_less_wrong_primer/ . I am still definitely interested in textbooks and other resources like these, though.

comment by Bart119 · 2012-04-28T16:04:56.310Z · LW(p) · GW(p)

LWers are almost all atheists. Me too, but I've rubbed shoulders with lots of liberal religious people in my day. Given that studies show religious people are happier than the non-religious (a finding that might not generalize to LWers, but might well apply to religious people who give up their religion), I wonder if all we really should ask of them is that they subscribe to the basic liberal principle of letting everyone believe what they want, as long as they also live by shared secular rules of morality. All we need is some humility on their part -- not being totally certain of their beliefs means they won't feel the need to impose those beliefs on others. If religious belief is how they find meaning in a life that (in my opinion) has no absolute meaning, why rock their boats?

This must have been discussed many, many times. Pointers to relevant discussions, either within or outside LW, appreciated.

Replies from: pedanterrific
comment by pedanterrific · 2012-04-28T18:28:57.915Z · LW(p) · GW(p)

I wonder if all we really should ask of them is that they

, why rock their boats?

What boat-rocking are you talking about? Do you know a lot of people who "ask of" religious people that they do something?

Replies from: Bart119
comment by Bart119 · 2012-04-30T16:50:55.328Z · LW(p) · GW(p)

It seems that implicit in any discussion of this kind is, "What do you think I ought to do if you are right?"

For theists, the answer might be something leading to, "Accept Jesus as your personal savior", etc.

For atheists, it might be, "Give up the irrational illusion of God." I'm questioning whether such an answer is a good idea if they are at least humble and uncertain enough to respect others' views -- if their goal is comfort and happiness as opposed to placing a high value on literal truth.

But do recall, I'm placing this in the "stupid questions" thread because I am woefully ignorant of the debate and am looking for pointers to relevant discussions.

Replies from: TimS
comment by TimS · 2012-04-30T17:28:47.468Z · LW(p) · GW(p)

It seems that implicit in any discussion of this kind is, "What do you think I ought to do if you are right?"

That is implicit in any discussion of this type. But it doesn't go without saying that we should be trying to have a conversation of this type. In fact, it is totally unfair of you to assume that having this conversation is so pressing that it goes without saying. After all, not all theists proselytize.

For a more substantive response, I'll say only that I'm not convinced that believing unpleasant but true things is inherently inconsistent with being happier. But there is a substantial minority in this community that disagrees with me.

Replies from: Bart119
comment by Bart119 · 2012-05-01T16:34:38.716Z · LW(p) · GW(p)

I remain quite confused.

In fact, it is totally unfair of you to assume that having this conversation is so pressing that it goes without saying. After all, not all theists proselytize.

OK. This seems to imply that there is some serious downside to starting such a conversation. What would it be? It would seem conciliatory to theists, if some of them (naturally enough) assume that what atheists want is for them to embrace atheism.

I'll say only that I'm not convinced that believing unpleasant but true things is inherently inconsistent with being happier.

I hope I've parsed the negatives correctly: certainly, believing unpleasant but true things is highly advantageous to being happier if it leads to effective actions (I sure hope that pain isn't cancer -- what an unpleasant thing to believe... but I'll go to the doctor anyway and maybe there will be an effective treatment). If it means believing unpleasant things that can't be changed, then that's not inherently inconsistent with being happier either -- for instance, if your personal happiness function is such that discovering you are deceiving yourself would make you very unhappy.

The question is more whether it is a valid choice for a person to say they value pleasant illusions when there is no effective way to change the underlying unpleasant reality.

We object when someone else wants to infringe on our liberties (contraception, consensual sexual practices), and my suggestion was that a mild dose of doubt in one's faith might be enough to defang efforts to restrict other people's liberties.

I knew a devout Catholic who was also a devout libertarian, and his position on abortion was that it was a grave sin, but it should not be illegal. I'm not sure if that position required a measure of doubt about the absolute truth of Catholicism, but it seems possible.