Open thread, Nov. 10 - Nov. 16, 2014

post by MrMind · 2014-11-10T08:32:41.114Z · LW · GW · Legacy · 194 comments

Contents

194 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

194 comments

Comments sorted by top scores.

comment by Toggle · 2014-11-10T15:38:40.488Z · LW(p) · GW(p)

I've been commenting on the site for a few months now, but so far just replies and responses. I've been thinking about potential contributions for a top-level discussion post, and I thought I'd ask about it here first to gauge interest.

I have taught university classes in the past, usually with traditional methodology but in one memorable case with some experimental methods. There were a few ways this was different; as an example, we used 'high expectations, low stakes': we allowed students to retake any assignment as many times as they liked, but their grade for the entire class was basically the lowest grade they got on any assignment. (This was partly inspired by video games, actually.)

It will obviously be of particular interest to anyone else who teaches, but there's reasonable hope that some of my experiences there would be of use to autodidacts. Do you think this would be a good use of my time?

Replies from: Vaniver, Gunnar_Zarncke, NancyLebovitz
comment by Vaniver · 2014-11-12T14:48:41.141Z · LW(p) · GW(p)

I think the phrase in the education literature is mastery learning: my exposure to it was discussion of how Khan Academy does math tests. Because they're on a computer-based system, they can generate an arbitrary number of problems of a particular form (like, for example, 'multiply two three digit numbers together') and give each student as many problems as it takes for them to get 10 right in a row. Sometimes the student gets the lesson and only does 10 questions; sometimes the student takes 200 tries to get 10 right in a row, but they always master the skill before they move on (or they spend a lot of time getting very lucky).
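
To make the "getting very lucky" point concrete, here is a minimal simulation sketch (my own toy model, not Khan Academy's actual system; the per-question success probabilities are just illustrative assumptions):

```python
import random

def problems_until_streak(p_correct, streak_target=10):
    """Simulate answering problems until `streak_target` correct answers in a row."""
    streak, attempts = 0, 0
    while streak < streak_target:
        attempts += 1
        streak = streak + 1 if random.random() < p_correct else 0
    return attempts

# A student who has mastered the skill (p=0.95) vs. one who is mostly guessing (p=0.5).
for p in (0.95, 0.5):
    runs = [problems_until_streak(p) for _ in range(1000)]
    print(f"p={p}: ~{sum(runs) / len(runs):.0f} problems on average")
```

Under these assumptions the guesser needs on the order of two thousand problems before luck alone produces a streak of 10, which is why the streak requirement effectively forces mastery.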

comment by Gunnar_Zarncke · 2014-11-10T20:12:33.718Z · LW(p) · GW(p)

I think your account will be received fairly well in Discussion if you present it like the above.

comment by NancyLebovitz · 2014-11-10T17:08:30.069Z · LW(p) · GW(p)

There were a few ways this was different; as an example, we used 'high expectations, low stakes': we allowed students to retake any assignment as many times as they liked, but their grade for the entire class was basically the lowest grade they got on any assignment.

Isn't that a high-stakes situation?

Replies from: Toggle
comment by Toggle · 2014-11-10T19:45:10.460Z · LW(p) · GW(p)

Holistically, yes. But they are free to fail any given assignment any number of times; in fact, many would sign up to take quizzes before studying, as a preview of what the assessment would look like and a way to rapidly jump through sections they might have studied elsewhere.

comment by Capla · 2014-11-10T23:27:36.581Z · LW(p) · GW(p)

Does anyone know how Eliezer first met Robin? How did the former end up as a co-editor of the latter's blog?

Replies from: Kaj_Sotala, knb
comment by Kaj_Sotala · 2014-11-12T10:06:50.779Z · LW(p) · GW(p)

I don't know, but I'm guessing the Extropians mailing list.

Replies from: jaime2000
comment by jaime2000 · 2014-11-12T11:02:38.707Z · LW(p) · GW(p)

That's my guess, too. I know that both Eliezer and Robin posted there. Eliezer had definitely come to Robin's attention by 1999; he is cited in Robin's "Comments on Vinge's Singularity" page.

Of course, the most straightforward way to answer this question is to simply ask either of them.

Replies from: Eliezer_Yudkowsky, gjm
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-11-12T20:28:54.528Z · LW(p) · GW(p)

Your guess and your evidence are both correct.

comment by gjm · 2014-11-12T14:55:25.458Z · LW(p) · GW(p)

Even better, ask both.

Replies from: Capla
comment by Capla · 2014-11-12T18:30:08.567Z · LW(p) · GW(p)

Ok. How? Can I just send Eliezer an Email?

Replies from: shminux, Lumifer
comment by Shmi (shminux) · 2014-11-12T18:43:02.635Z · LW(p) · GW(p)

You can send any user a private message, which shows up in their inbox next time they check this forum, so they will definitely see it. Whether they choose to reply is up to them.

comment by Lumifer · 2014-11-12T19:07:49.436Z · LW(p) · GW(p)

Can I just send Eliezer an Email?

There is a handy link on this page.

comment by knb · 2014-11-12T08:36:24.114Z · LW(p) · GW(p)

OB was sponsored by the Future of Humanity Institute (IIRC), so perhaps they encouraged RH and EY to post there? You could always ask at Robin Hanson's monthly open thread.

Replies from: Vulture, Capla
comment by Vulture · 2014-11-16T06:18:18.609Z · LW(p) · GW(p)

OB was sponsored by the Future of Humanity Institute (IIRC), so perhaps they encouraged RH and EY to post there?

You've got a lot of backwards arrows in that diagram there.

comment by Capla · 2014-11-12T18:17:19.866Z · LW(p) · GW(p)

Is that on Overcoming Bias?

Replies from: knb
comment by knb · 2014-11-12T20:58:38.362Z · LW(p) · GW(p)

Yes.

comment by Artaxerxes · 2014-11-10T09:39:38.828Z · LW(p) · GW(p)

Nick Bostrom toured quite a bit to promote Superintelligence after it came out. This included presentations in which he basically gave a very condensed summary of the book, roughly the same each time. If you've read the book, you're probably not going to hear anything new in them.

However, the Q&As that took place after these presentations are a somewhat interesting look at how people react to hearing about this subject matter.

Talks at Google; the Q&A starts at 45:14. The first person in the audience to speak up is Ray Kurzweil.

His talk at the Oxford Martin School, Q&A starts at 51:38.

His talk at UC Berkeley, hosted by MIRI. Q&A starts at about 53:37.

comment by Salemicus · 2014-11-11T11:55:52.762Z · LW(p) · GW(p)

There have been a lot of posts over the years about the fungibility of money and time - but strangely (at least to me), they all fill up with suggestions for how to turn money into time. Personally, I have the opposite desire - to turn time into money. I have found it extremely hard to find a decently-paid part-time job that fits around my main job. I also don't know how to get into freelancing.

Does anyone have any good suggestions? Possibly relevant info: I live in the UK, and am a programmer, of good but not phenomenal skill.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-11T13:26:42.195Z · LW(p) · GW(p)

Why do you think that a part time job is the way to go? Maybe it's better to switch your main job to a higher paying job that's more demanding?

Replies from: Salemicus
comment by Salemicus · 2014-11-11T15:11:09.832Z · LW(p) · GW(p)

I would love to make such a switch, and am currently working on it. But that is a long-term goal, and in the meantime I'm looking to turn some time into money on the margin.

Replies from: Punoxysm
comment by Punoxysm · 2014-11-11T20:01:24.700Z · LW(p) · GW(p)

Freelancing is your best bet. Listing yourself online at places like Elance is a good start. Submit aggressive bids. Understand that feedback is CRITICAL - bad reviews can sink you, even 4 stars out of 5 is "bad", and you'll have to bid low until you have a good feedback record.

Alternately, you could do something like make an app. You probably won't make much money directly, but it could be a good longer-term investment in skills and resume.

Also, if you are considering selling some stuff you have, make an effort to shop around and get the best price. Putting it on eBay will yield higher prices than a garage sale, but will take more time.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-11-12T09:55:04.311Z · LW(p) · GW(p)

Hm. This advice runs exactly counter to the Charge Your Happy Price advice. Why is that?

Replies from: Punoxysm, Capla
comment by Punoxysm · 2014-11-12T18:41:47.943Z · LW(p) · GW(p)

Freelancing is brutally competitive, especially if you have no track record.

My advice is second hand. Maybe if he has a good network already, or is just willing to wait a while for work, he can start charging his "happy price" immediately.

comment by Capla · 2014-11-12T18:29:01.118Z · LW(p) · GW(p)

Attach your "(" to your "]".

comment by Ritalin · 2014-11-11T15:15:08.308Z · LW(p) · GW(p)

Dudeism? What in the world are they blathering about?

It turns out Dudeism is a thing. Wikipedia summarizes it best:

The Dudeist belief system is essentially a modernized form of Taoism purged of all of its metaphysical and medical doctrines. Dudeism advocates and encourages the practice of "going with the flow", "being cool headed", and "taking it easy" in the face of life's difficulties, believing that this is the only way to live in harmony with our inner nature and the challenges of interacting with other people. It also aims to assuage feelings of inadequacy that arise in societies which place a heavy emphasis on achievement and personal fortune. Consequently, simple everyday pleasures like bathing, bowling, and hanging out with friends are seen as far preferable to the accumulation of wealth and the spending of money as a means to achieve happiness and spiritual fulfillment.

I thought it'd be worth bringing to attention here, because if there's one adjective that would not apply to the online LW community, it's "laid back". Note that many of us are lazy, but we struggle with laziness, we keep looking for self-help and trying to figure out our motivation systems and trying so hard to achieve. Other urges and "sufferings" we struggle with are the need to fit in, the need to make sense of the world, the need to be perfectly clear in thought and expression (and the need to demand that of others), and so on and so forth.

How much could we benefit from being more laid-back, from openly and deliberately saying "fuck it"? From doing what we actually want to do without regard for what's expected of us?

The thoughts in this post aren't very well articulated, and perhaps I'm misjudging LW completely, but, um, you know, that's just, like, uh, my opinion, man. Obviously, it's open for debate; that's what we're here for, yes?

Replies from: Vaniver, maxikov, IlyaShpitser, Risto_Saarelma, Lumifer, Metus
comment by Vaniver · 2014-11-12T14:38:03.073Z · LW(p) · GW(p)

I thought it'd be worth bringing to attention here, because if there's one adjective that would not apply to the online LW community, it's "laid back".

Hmm. I have pretty strong Daoist / Stoic tendencies, and a large part of that deals with rejection of "should-ness;" that is, things are as they are, and carrying around a view of how the world "should" be that disagrees with the actual world is, on net, harmful.

I've gotten some pushback from LWers on that view, as they use the delta between their should-world and their is-world to motivate themselves to act. As far as I can tell, that isn't necessary; one can be motivated by the is-world directly, and if one reasons in the is-world one is more likely to make successful plans than if one reasons in the should-world (which is where one will tend to reason, since diffs between the should-world and is-world are defects!).

But I think that LW is useful at dissolving that pushback; the practice of cashing out beliefs as predictions about the world rather than tribal identifications or moral claims is basically the practice of living in the is-world instead of the should-world.

Replies from: Ritalin
comment by Ritalin · 2014-11-12T15:28:14.415Z · LW(p) · GW(p)

diffs between the should-world and is-world are defects

My understanding is that defects are like speed-bumps and potholes, pieces where the harmonious flow of reality is interrupted, dissonances and irregularities. Going with the flow and being in harmony with the world requires more sensitivity, training, and awareness than simply letting oneself get carried by the current. It's the difference between 'surfing' waves, and 'getting engulfed' by them, yes?

comment by maxikov · 2014-11-12T23:40:49.098Z · LW(p) · GW(p)

How much could we benefit from being more laid-back, from openly and deliberately saying "fuck it"? From doing what we actually want to do without regard for what's expected of us?

I would say substantially. LW largely seems to advocate for preference utilitarianism, whereas EA and animal rights subsets of the group often come suspiciously close to deontological "whatever you do care about, here is what you should care about". As a matter of fact, the whole advocacy for consistency in ethics (e.g. "shut up and multiply") can backfire since System 1's values are not necessarily consistent. I'm not suggesting giving up on these attempts, but I guess that many people would benefit from being able to listen to System 1's voice saying "I want to invent a lightsaber" without having System 2 immediately scream "but people in Africa are suffering, and you're just being scope insensitive".

Replies from: Ritalin
comment by Ritalin · 2014-11-13T13:18:47.020Z · LW(p) · GW(p)

Well, sorting out system 1's inconsistencies can help one feel happier and more at peace with oneself. You can't achieve serenity just by giving in to all your impulses, because they contradict each other.

Replies from: maxikov
comment by maxikov · 2014-11-13T17:56:47.836Z · LW(p) · GW(p)

Sure, and I found that incredibly useful in my life as well - particularly, it helps to stop feeling bad about what's considered morally questionable, but doesn't in fact hurt anybody. But some people may go way over the top on that, and it may be useful to throttle down as well.

comment by IlyaShpitser · 2014-11-11T16:06:18.202Z · LW(p) · GW(p)

I think the steelman of "The Dude" is that you shouldn't run your mind like a police state, it's cutting against the grain.

But "The Dude" is kinda "trampy," for lack of a better word, I don't think he's a diamond in the rough or anything like that.

Replies from: Ritalin
comment by Ritalin · 2014-11-11T17:07:28.490Z · LW(p) · GW(p)

Nah, he's no hero, he's just a selfish man. But, of all the characters, he is the only one who is honest about doing nothing, while every other character in the film (and many, many people in Real Life) go to great lengths to sustain the illusion of activity and productiveness.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-11-12T12:54:04.434Z · LW(p) · GW(p)

That doesn't make him any better, he's just failing in a different way. Nul points.

(and many, many people in Real Life) go to great lengths to sustain the illusion of activity and productiveness.

And then, some people are active and productive. I don't know the film, but from your description of it, it's about a bunch of losers, in a fictional world from which every other possibility is excluded. Why should I take notice of anything in it?

Replies from: Ritalin
comment by Ritalin · 2014-11-12T13:05:13.440Z · LW(p) · GW(p)

Because there's a grain of truth in it that extends far beyond its admittedly limited scope.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-11-12T15:51:05.212Z · LW(p) · GW(p)

I prefer larger doses.

comment by Risto_Saarelma · 2014-11-12T17:05:15.214Z · LW(p) · GW(p)

Raymond Smullyan's The Tao is Silent is possibly relevant.

comment by Lumifer · 2014-11-11T18:16:54.401Z · LW(p) · GW(p)

There is, well, y'know, early Christianity... :-)

Matthew 6:25-27:

"Therefore I tell you, do not worry about your life, what you will eat or drink; or about your body, what you will wear. Is not life more than food, and the body more than clothes?
Look at the birds of the air; they do not sow or reap or store away in barns, and yet your heavenly Father feeds them. Are you not much more valuable than they?
Can any one of you by worrying add a single hour to your life?

and Matthew 6:34:

Therefore do not worry about tomorrow, for tomorrow will worry about itself. Each day has enough trouble of its own.

Replies from: Salemicus, Ritalin
comment by Salemicus · 2014-11-12T10:27:40.327Z · LW(p) · GW(p)

Jesus and the early Christians were all about proselytising, sacrifice, and martyrdom. The Dude is about none of those things. He doesn't try and persuade others to become Dude-like, and he doesn't stand up for what he believes in - if he believes in anything. The Dude isn't a preacher, he's a bowler. He's all about going with the flow, in his own little way.

Replies from: Lumifer
comment by Lumifer · 2014-11-12T15:38:22.749Z · LW(p) · GW(p)

All true, my comment was somewhat tongue-in-cheek :-) The early Christians tended to be awfully serious fellows. That quote from Matthew, though, was popular in the California hippy scene much, much later :-D

comment by Ritalin · 2014-11-11T22:40:19.712Z · LW(p) · GW(p)

The pre-ecclesiastical Jesus of Nazareth is referenced as a Dude avant-la-lettre (as are Siddhartha, Laozi, Epicurus, Heraclitus, and other counter-culturals who gained a cult following). Not to be confused with Jesus of North Hollywood, who is the opposite of a Dude.

Also, again with the smileying. :-(

Replies from: Lumifer
comment by Lumifer · 2014-11-12T01:38:48.180Z · LW(p) · GW(p)

Also, again with the smileying. :-(

Chill, dude :-P

Replies from: Ritalin
comment by Ritalin · 2014-11-12T11:40:10.018Z · LW(p) · GW(p)

The Dude abides... {B{í=

comment by Metus · 2014-11-11T23:41:02.114Z · LW(p) · GW(p)

I thought it'd be worth bringing to attention here, because if there's one adjective that would not apply to the online LW community, it's "laid back". Note that many of us are lazy, but we struggle with laziness, we keep looking for self-help and trying to figure out our motivation systems and trying so hard to achieve. Other urges and "sufferings" we struggle with are the need to fit in, the need to make sense of the world, the need to be perfectly clear in thought and expression (and the need to demand that of others), and so on and so forth.

QFT

I could speculate why this is the way it is but that would be too much work to type up.

comment by Stefan_Schubert · 2014-11-10T20:14:20.943Z · LW(p) · GW(p)

I am constructing a political bias quiz together with Spencer Greenberg, who runs the site Clearer Thinking, and I wonder if people could help me come up with questions. The quiz will work like this. First, you'll respond to a number of questions regarding your political views: e.g., Republican or Democrat, pro-life or pro-choice, pro- or anti-immigration, etc. Then you'll be given a number of factual questions. On the basis of your answers, you'll be given two scores:

1) The number of correct answers - your degree of political knowledge.

2) Your degree of political bias.

The assignment of political bias will be based on the following reasoning. Suppose you're a hard-core environmentalist, and you are consistently right about the questions where hard-core environmentalists like the true answer (e.g. climate change) but consistently wrong about the questions where they don't (e.g. GMOs). This suggests that you have not reviewed these questions impartially, but that you acquire whatever factual beliefs suit your political opinions - i.e. that you're biased. Hence, the higher the ratio between the correct answers you like and the correct answers you dislike, the more biased you are.

(The argument is slightly more complicated, but this should suffice for present purposes. Also, the test shouldn't be taken too seriously - the main purpose is to make people think more about political bias as a problem.)
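
For concreteness, here is a minimal sketch of the scoring idea in code (my own illustration, not the actual Clearer Thinking implementation; the field names are made up, and I use an accuracy difference rather than a ratio so the score stays defined when someone gets nothing wrong):

```python
def score_quiz(responses):
    """
    `responses` is a list of dicts, one per factual question:
      {"correct": bool,           # did the respondent answer correctly?
       "likes_true_answer": bool} # does the true answer suit their politics?
    Returns (knowledge score, bias score).
    """
    knowledge = sum(r["correct"] for r in responses)

    liked = [r for r in responses if r["likes_true_answer"]]
    disliked = [r for r in responses if not r["likes_true_answer"]]

    # Accuracy on questions whose true answer the respondent likes vs. dislikes.
    acc_liked = sum(r["correct"] for r in liked) / max(len(liked), 1)
    acc_disliked = sum(r["correct"] for r in disliked) / max(len(disliked), 1)

    # Crude bias measure: how much better you do when the truth flatters your side.
    bias = acc_liked - acc_disliked
    return knowledge, bias

# Example: right on every congenial question, wrong on every uncongenial one.
demo = ([{"correct": True, "likes_true_answer": True}] * 5 +
        [{"correct": False, "likes_true_answer": False}] * 5)
print(score_quiz(demo))  # (5, 1.0) -- maximally biased under this toy measure
```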

The questions are intended for an American audience. I have come up with the following questions so far:

1) Which of the following statements best describes expert scientists’ views of the claim that global temperatures are rising due to human activities (this question is taken from a great paper by Dan Kahan)? (Most agree it's true, divided, most agree it's false)

2) Which of the following statements best describes expert scientists’ views of the claim that genetically modified foods are safe? (Same possible answers)

3) Which of the following statements best describes expert scientists’ views of the claim that humans are causing mass extinction of species at a rate that is at least 100 times the natural rate? (Same possible answers)

4) Which of the following statements best describes expert scientists’ views of the claim that radioactive wastes from nuclear power can be safely disposed of in deep underground storage facilities? (Same possible answers)

5) Which of the following statements best describes expert scientists’ views of the claim that humankind evolved from other species through natural selection? (Same possible answers)

6) Which of the following statements best describes expert scientists’ views of the claim that the death penalty increases homicide rates? (Same possible answers)

7) Studies show that on spatial reasoning tests, male mean scores are higher than females', whereas the converse is true of emotional intelligence tests. (True/false)

8-10) (These are taken from Bryan Caplan's excellent The Myth of the Rational Voter.) Expert economists were given the following possible explanations for why the economy isn't doing better. For each one, please indicate whether they thought it is a major reason the economy is not doing better than it is, a minor reason, or not a reason at all:

8) “Taxes are too high”

9) “Foreign aid spending is too high"

10) “Top executives are paid too much”

11) How much does the US spend on foreign aid, as a share of GDP? (0-1%, 1-3%, 3+%)

I need perhaps 10-15 additional questions. The questions need to have the following features:

1) The answer needs to be provable. That is why, in many of the questions, I ask what expert scientists believe about P - on which there are surveys I can point to - rather than about P itself. However, you can also have questions about P itself if you can point to reliable sources such as government statistics, as I do in question 11.

2) They should be “baits” for biased people; i.e. such that biased people should be expected to give the wrong answer if they don’t like the true answer, and the true answer if they like it.

3) The questions shouldn’t be very difficult. If you give people questions on, e.g., numbers, you have to give fairly large intervals, as I do in question 11. Also, you cannot ask overly outlandish questions (e.g., questions about small parts of the federal budget).

At present I seem to have more questions where the liberal answer is the true one, so “pro-conservative” questions are particularly welcome.

Any suggestions of questions or other forms of input are highly appreciated! :)

Replies from: ChristianKl, maxikov, ChristianKl, Jiro, MaximumLiberty, CronoDAS, satt, Alejandro1, ChristianKl, cameroncowan
comment by ChristianKl · 2014-11-11T16:20:55.570Z · LW(p) · GW(p)

7) Studies show that on spatial reasoning tests, male mean scores are higher than females', whereas the converse is true of emotional intelligence tests. (True/false)

I see no reason to bundle those claims.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-11-11T18:19:49.172Z · LW(p) · GW(p)

True. I'll split them. Thanks!

comment by maxikov · 2014-11-12T23:24:58.964Z · LW(p) · GW(p)

8) “Taxes are too high”

9) “Foreign aid spending is too high"

10) “Top executives are paid too much”

I would rather rephrase "X is too high" as "X should be reduced" if that's what you want to ask. Otherwise it seems to shift the perspective from policy-making to emotional evaluation.

comment by ChristianKl · 2014-11-10T22:37:43.060Z · LW(p) · GW(p)

At present I seem to have more questions where the liberal answer is the true one, so “pro-conservative” questions are particularly welcome.

How about a question about the average IQ in some sub-Saharan country?

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-11-11T11:33:19.327Z · LW(p) · GW(p)

Hm, good idea. It could be very controversial, though, and I'm not sure whether it would be sufficiently provable. But yes, a question where the true answer is some negative fact about an African country is a good idea. Thanks!

Replies from: ChristianKl
comment by ChristianKl · 2014-11-11T13:37:51.595Z · LW(p) · GW(p)

If you don't want to go into the IQ area, personal values are a good topic.

The World Value Survey seems a good source.

In some African countries more Muslims believe that homosexuality should be punishable by death than most Western liberals would like.

Replies from: fubarobfusco, Stefan_Schubert
comment by fubarobfusco · 2014-11-11T18:26:26.472Z · LW(p) · GW(p)

A number of the above questions are not asking, "Is X true?" but rather "Do group Y believe that X is true?"

But once you get into asking "Do group Y believe that X should be done?" you're not talking about respondents' model of others' factual "is" beliefs, but respondents' model of others' moral "ought" beliefs.

That might be a very different thing.

comment by Stefan_Schubert · 2014-11-11T18:19:24.065Z · LW(p) · GW(p)

Excellent! Yes those sorts of questions are even better.

comment by Jiro · 2014-11-10T22:27:00.690Z · LW(p) · GW(p)

3) Which of the following statements best describes expert scientists’ views of the claim that humans are causing mass extinction of species at a rate that is at least 100 times the natural rate? (Same possible answers)

Without knowing anything about the extinction of species, I could guess that the answer is "most scientists agree".

If the correct number is not 100 but is large, the question would incorrectly conclude that some people who are not biased are biased (since someone who falsely thinks the number is 100 when it's really 75 or 125 is not biased, but would incorrectly answer "scientists agree").

If the correct number is not 100 but is small, the question would incorrectly conclude that some people who are biased are not biased (since someone who falsely thinks the number is 75 or 125 when it's really 1 is biased, but this bias would be undetectable since he would correctly answer the question with "scientists disagree")

Therefore the correct number is 100.

This question should be phrased using words like "many", not using the exact number 100.

For a pro-conservative question, one could be: In a recent poll of 15,000 police officers, did a large majority think assault weapon bans are effective, did a small majority think they are effective, were opinions about evenly split, did a small majority think they are ineffective, or did a large majority think they are ineffective? http://www.policeone.com/Gun-Legislation-Law-Enforcement/articles/6183787-PoliceOnes-Gun-Control-Survey-11-key-lessons-from-officers-perspectives/

Generally, however, people should be suspicious of such questions because in real political discourse, these questions are used to set the terms of the debate.

-- Does it matter whether the death penalty increases homicide rates?

-- Does it matter that humans cause climate change regardless of the size of the change?

-- I would think that any truly expert economist would say "we have no way to know the answers to these questions to the same degree as we know physics or chemistry answers. I could give you my educated opinion, but there's still a lot of disagreement within economics".

Replies from: Stefan_Schubert, ChristianKl
comment by Stefan_Schubert · 2014-11-11T11:48:48.597Z · LW(p) · GW(p)

Hm, you're right that question 3) is not formulated rightly. Great comment!

Thanks for the poll data. One worry is, though, that the police officers might not be seen as proper "experts" in the same sense as climate scientists are. I need to think about that.

The data are very interesting, though. US police officers seem to be very conservative indeed.

Thanks!

comment by ChristianKl · 2014-11-11T13:36:39.452Z · LW(p) · GW(p)

-- Does it matter that humans cause climate change regardless of the size of the change?

The point of the exercise is detecting bias. As such it's not important whether the answer to the question is important.

Replies from: Jiro
comment by Jiro · 2014-11-11T16:11:10.707Z · LW(p) · GW(p)

It doesn't work that way.

Imagine that you don't know much about homeopathy, but you do know that experts oppose it. Then someone asks you the question "The number of homeopathic cures of all types rejected by the FDA for not being effective is (much less than) (less than) (equal to) (greater than) (much greater than) the number of allopathic cancer cures."

If you approached this question out of context, you would think "I know that experts believe homeopathy isn't effective. The FDA uses experts. So experts probably rejected lots of homeopathic remedies."

If you approached this question in context, however, you would reason "I know that experts believe homeopathy isn't effective. But given the way this question is phrased, it's being asked by a homeopath. He's probably asking this question because it makes homeopathy look good, so this must be an unusual situation where experts' belief on homeopathy doesn't affect the answer, and he's falsely trying to imply that it does. So the FDA probably rejected few homeopathic remedies for being ineffective, but for some reason this doesn't reflect the belief of experts."

Replies from: fubarobfusco
comment by fubarobfusco · 2014-11-11T18:27:14.589Z · LW(p) · GW(p)

For instance, if most homeopathic treatments are not submitted to the FDA, they would not have a chance to reject them.

Replies from: Jiro
comment by Jiro · 2014-11-11T19:14:57.902Z · LW(p) · GW(p)

Actually, one of the sponsors of the act that created the FDA was a homeopath and he wrote in an exception for homeopathy, so homeopathic treatments don't have to prove they are safe and effective.

Also, keep note of who this question would falsely mark as biased. Someone who opposes homeopathy and correctly knows that experts also oppose homeopathy, who tries to reason the first way, would be marked down as biased, because he answered in a way favorable to his own position but contrary to the facts. Yet answering the first way doesn't mean bias, it just means he ignored the agenda of the person asking the question.

comment by MaximumLiberty · 2014-11-13T00:29:05.420Z · LW(p) · GW(p)

On your question 1, I would rephrase it to say that human activities tend to cause global temperatures to rise. Or that human activities have caused global temperatures to rise. Otherwise, you get stuck in the whole issue about the "pause," which might show that temperatures are not currently rising for reasons that are not fully understood and are subject to much debate. The paper you cite was from early 2010, and was based on research before that, so the pause had not become much-discussed by then.

One thing that I think will be interesting if you run the quiz is to identify a group who resist polarization and are between the extremes. For example, I think there are plenty of people who agree that carbon dioxide causes temperatures to rise (all else being equal) but believe that the feedback loop is not significantly positive. People from each extreme tend to lump the middle-grounders in with the people at the other extreme: "You're an alarmist!" "You're a denier!" etc.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-11-13T11:40:20.314Z · LW(p) · GW(p)

Good point - I'll change the formulation.

comment by CronoDAS · 2014-11-13T07:59:39.367Z · LW(p) · GW(p)

What do economists think of the American Reinvestment and Recovery Act of 2009?

Replies from: MaximumLiberty, Stefan_Schubert
comment by MaximumLiberty · 2014-11-13T23:54:23.951Z · LW(p) · GW(p)

Or, the following based on http://ew-econ.typepad.fr/articleAEAsurvey.pdf. (I've bolded the answers I think are supported, but you should check my work!)

  • "What do economists think about taxes on imported goods?"Most favor; divided; most disfavor.
  • "What do economists think about laws restricting employers from outsourcing jobs to other countries?" Most favor; divided; most disfavor.
  • "What do economists think about anti-dumping laws, which prohibit foreign manufacturers from selling goods below cost in the US?" Most favor; divided; most disfavor.
  • "What do economists think about subsidizing farming?" Most favor; divided; most disfavor.
  • "What do economists think about proposals to replace public-school financing with vouchers?" Most favor; divided; most disfavor.
  • "What do most economists think about the proposal of raising payroll taxes to close the funding gap for Social Security?" Most favor; divided; most disfavor.
  • "What do economists believe the effect of global warming will be on the US economy by the end of the 21st century?" Most believe it will help significantly; divided; most believe it will hurt significantly.
  • "What do economists think about marijuana legalization?" Most favor; divided; most disfavor.
  • "What do economists believe about legislation for universal health insurance?" Most favor; divided; most disfavor.
  • "Do more economists believe that the minimum wage should be raised by more than $1 or should be abolished?"
comment by Stefan_Schubert · 2014-11-13T11:40:43.821Z · LW(p) · GW(p)

Great!

comment by satt · 2014-11-13T06:52:30.917Z · LW(p) · GW(p)

For the true/false(/divided) questions, it'd be wise to aim for an even split in true/false(/divided) answers to minimize acquiescence bias. At the moment disproportionately few answers for questions 1-7 are "false", so someone who just likes agreeing with things has an unfair advantage there!

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-11-13T11:41:04.560Z · LW(p) · GW(p)

Haha! Good point!

comment by Alejandro1 · 2014-11-10T22:39:47.760Z · LW(p) · GW(p)

You could have a question about the scientific consensus on whether abortion can cause breast cancer (to catch biased pro-lifers). For bias on the other side, perhaps there is some human characteristic the fetus develops earlier than the average uninformed pro-choicer would guess? There seems to be no consensus on fetus pain, but maybe some uncontroversial-yet-surprising fact about nervous system development? I couldn't find anything too surprising on a quick Wiki read, but maybe there is something.

Replies from: gattsuru
comment by gattsuru · 2014-11-10T23:53:31.108Z · LW(p) · GW(p)

I would expect that even as a fairly squishy pro-abortion Westerner (incredibly discomforted with the procedure but even more discomforted by the actions necessary to ban it), I'm likely to underestimate the health risks of even contragestives, and significantly underestimate the health risks of abortion procedures. Discussion in these circles also overstates the effectiveness of conventional contraception and often underestimates the number of abortions performed yearly. The last number is probably the easiest to support through evidence, although I'd weakly expect it to 'fool' smaller numbers of people than qualitative assessments.

I'm also pretty sure that most pro-choice individuals drastically overestimate its support by women in general -- this may not be what you're looking for, but the intervals (40% real versus 20% expected for women who identify as "pro-life") are large enough that they should show up pretty clearly.

Replies from: Stefan_Schubert, NancyLebovitz
comment by Stefan_Schubert · 2014-11-11T11:37:19.664Z · LW(p) · GW(p)

These are good ideas. You've got it quite right - these are exactly the kinds of questions I'm looking for. Possibly the health risks questions are the best ones - I'll see what evidence I can find on those issues. Thanks!

comment by NancyLebovitz · 2014-11-13T06:39:05.456Z · LW(p) · GW(p)

It wouldn't surprise me if people generally overestimate the safety and effectiveness of drugs and medical procedures-- would you want to compare the accuracy of people's evaluation of contraceptives and abortions to their evaluation of medicine in general?

It also wouldn't surprise me if there's a minority who drastically underestimate the safety and effectiveness of medicine.

comment by ChristianKl · 2014-11-11T16:20:36.736Z · LW(p) · GW(p)

If you want to catch the other side in the global warming debate as well, there are a bunch of claims where I suspect the average liberal is overconfident. Maybe something like "Hurricane X wouldn't have happened without global warming". The IPCC report shows its confidence for various claims, and there's likely something there to catch liberals.

Replies from: Stefan_Schubert, army1987
comment by Stefan_Schubert · 2014-11-12T15:18:56.477Z · LW(p) · GW(p)

Yes, something like that could probably catch some liberals, that's true.

comment by A1987dM (army1987) · 2014-11-11T16:55:55.650Z · LW(p) · GW(p)

Maybe something like "Hurricane X wouldn't have happened without global warming".

Not unlikely at all. Try “There would have been many fewer hurricanes in the past 10 years without global warming” instead.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-11T17:04:26.643Z · LW(p) · GW(p)

I said "something like" because I just wanted to illustrate an idea, not propose the suggestion in that exact form. It makes sense to take a claim directly from the IPCC report instead of making up your own claim.

comment by cameroncowan · 2014-11-11T21:57:10.288Z · LW(p) · GW(p)

Some domestic questions would be nice. Opinions about school choice, for example.

Replies from: MaximumLiberty
comment by MaximumLiberty · 2014-11-13T00:12:41.636Z · LW(p) · GW(p)

Or homeschooling. Possibilities:

"Studies show that home-schooled children score worse on tests related to socialization than conventionally educated children." This is false according to the first paragraph under "Socialization" on en.wikipedia.org/wiki/Homeschooling in that always true resource, Wikipedia.

"The most cited reason for parents to choose homeschooling over public schools is the public schools' (a) the lack of religious or moral instruction, (b) social environment, or (c) quality of instruction." The actual answer is (b), with (a) taking second place and (c) taking third. See http://nces.ed.gov/pubs2006/homeschool/parentsreasons.asp.

Replies from: cameroncowan
comment by cameroncowan · 2014-11-13T06:13:47.391Z · LW(p) · GW(p)

I was homeschooled and hated every minute of it! But I think it can be alright in a few cases. I came out pretty good.

Replies from: MaximumLiberty
comment by MaximumLiberty · 2014-11-13T23:14:36.638Z · LW(p) · GW(p)

Did you also attend public school? If so, which did you dislike more? If you didn't, which do you think you would have disliked more?

I'm also curious if you don't mind me asking: what did you hate about it?

Replies from: cameroncowan
comment by cameroncowan · 2014-11-15T01:26:54.188Z · LW(p) · GW(p)

I went to private schools, a Montessori school and a private Christian school. I hated the isolation, the lack of intellectual curiosity, and my browbeating perfectionist mother, who never let me learn and just expected perfection at all times in all subjects. It led to a lot of abuse in my family, especially being an only child.

comment by maxikov · 2014-11-12T09:08:53.102Z · LW(p) · GW(p)

When I was trying to make sense of Peter Watts' Echopraxia, it occurred to me that there may be two vastly different but both viable kinds of epistemology.

First is the classical hypothesis-driven epistemology, promoted by positivists and Popper, and generalized by Bayesian epistemology and Solomonoff induction. In the most general version, you come up with a set of hypotheses with assigned probabilities, and look for the information that would change the entropy of this set the most. It's a good idea. It formalizes what is science and what is not; it provides a framework for research; and, given infinite computing power on a hypercomputer, it extracts the theoretical maximum of utility from sensory information. The main problem is that it doesn't provide an algorithmic way to come up with hypotheses, and the suggestion to test infinitely many of them (aleph-1, as far as I can tell) isn't very helpful either.
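
As a toy illustration of the "change the entropy of this set the most" step (my own sketch, not Solomonoff induction itself; the coin hypotheses and observations are made up), here is how one would score candidate observations by expected information gain:

```python
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

def expected_info_gain(prior, likelihood):
    """
    prior: {hypothesis: P(h)}
    likelihood: {hypothesis: P(observation comes out 'positive' | h)}
    Expected entropy reduction over the hypothesis set from one binary observation.
    """
    h_before = entropy(prior.values())
    gain = 0.0
    for outcome in (True, False):
        p_out = sum(prior[h] * (likelihood[h] if outcome else 1 - likelihood[h])
                    for h in prior)
        if p_out == 0:
            continue
        posterior = {h: prior[h] * (likelihood[h] if outcome else 1 - likelihood[h]) / p_out
                     for h in prior}
        gain += p_out * (h_before - entropy(posterior.values()))
    return gain

# Which test tells us more about whether a coin is fair or two-headed?
prior = {"fair": 0.5, "two-headed": 0.5}
flip_once = {"fair": 0.5, "two-headed": 1.0}    # P(heads | h)
irrelevant = {"fair": 0.9, "two-headed": 0.9}   # same likelihood under both hypotheses
print(expected_info_gain(prior, flip_once))   # ~0.31 bits
print(expected_info_gain(prior, irrelevant))  # 0.0 bits -- not worth observing
```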

On the other hand, you can imagine data-driven epistemology, where you don't really formulate any hypotheses. You just have a lot of pattern-matching power, completely agnostic of the knowledge domain, and you use it to try to find any regularities, predictability, clustering, etc. in the sensory data. Then you just check whether any of the discovered knowledge is useful. It can barely (if at all) distinguish correlation from causation, it does not really distinguish scientific and non-scientific beliefs, and it doesn't even guarantee that the findings will be meaningful. However, it does work algorithmically, even with finite resources.
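
A minimal sketch of the data-driven style (my own toy example: plain k-means finding structure in unlabelled points, with no hypotheses about what that structure means):

```python
import numpy as np

rng = np.random.default_rng(0)
# Unlabelled "sensory data": two lumps of points, though the algorithm is never told that.
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])

def kmeans(points, k=2, steps=20):
    """Plain k-means: look for clusters, i.e. regularities, without a generative model."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(steps):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return centers, labels

centers, labels = kmeans(data)
print(centers)  # ends up near (0, 0) and (6, 6) without being told to look there
```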

They actually go together rather nicely, with data-driven epistemology being the source of hypotheses for hypothesis-driven epistemology. However, Watts seems to be arguing that, given enough computing power, you'd be better off spending it on data-driven pattern matching than on generating and testing hypotheses. And since brains are generally good at pattern matching, System 1, slightly tweaked with yet-to-be-invented technologies, could potentially vastly outperform System 2 running hypothesis-driven epistemology. I wonder to what extent that may actually be true.

Replies from: Kaj_Sotala, Vaniver
comment by Kaj_Sotala · 2014-11-12T10:19:20.222Z · LW(p) · GW(p)

Reminds me of "The Cactus and the Weasel".

The philosopher Isaiah Berlin originally proposed a (tongue-in-cheek) classification of people into "hedgehogs", who have a single big theory that explains everything and view the world in that light, and "foxes", who have a large number of smaller theories that they use to explain parts of the world. Later on, the psychologist Philip Tetlock found that people who were closer to the "fox" end of the spectrum tended to be better at predicting future events than the "hedgehogs".

In "The Cactus and the Weasel", Venkat constructs an elaborate hypothesis of the kinds of belief structures that "foxes" and "hedgehogs" have and how they work, talking about how a belief can be grounded in a small number of fundamental elements (typical for hedgehogs) or in an intricate web of other beliefs (typical for foxes). The whole essay is worth reading, but a few excerpts that are related to what you just wrote:

Where does [the fox prediction advantage], let’s call it the Tetlock edge, come from? I have a speculative answer.

It comes from eschewing abstraction and preferring the unreliable world of System 1 tools: metaphor, analogy and narrative; tools that all depend on pattern recognition of one sort or the other, rather than classification into clean schema. Fox brains are in effect constantly doing meta-analyses with unstructured ensembles, rather than projecting from abstract models.

That’s where the advantage comes from: eschewing abstraction.

Abstraction creates meta-knowledge via inductive generalization, and can grow into doctrinaire world views. The way this happens is that you try to formalize the interdependencies among all your generalized beliefs. Your one big idea as a hedgehog is an idea that covers everything, the whole T-box, so to speak. Abstraction provides you with ways to compute beliefs and actions in domains you haven’t even encountered yet, thereby coloring your judgment of the novel before the fact.

Pattern recognition creates meta-knowledge through linkages among weak views in multiple domains. The many things you know start getting densely connected in a messy web of ad hoc associations. Your collection of little ideas, densely connected, does not cover everything, since there are fewer abstractions. So you can only form beliefs about new domains once you encounter some data about them (which means you have an inclusion bias). And you cannot act decisively in those domains, since you lack strong metanorms. This means pattern recognition leaves you with a fundamentally more open mind (or less strongly colored preconceptions about what you do not yet know).

The way you slowly gain a Tetlock advantage, if you live long enough to collect a lot of examples and a very densely connected mind full of little ideas, is as follows: The more you see instances of a belief in various guises, the better you get at recognizing new instances. This is because the chances that a new instance will be recognizable close to an existing instance in your collection increases, and also because patterns color the unknown less strongly than abstractions.

As you age, your mind becomes a vessel for accumulating a growing global context to aid in the appreciation of novelty.

Abstraction offers you a satisfyingly consistent and clean world view, but since you generally stop collecting new instances (and might even discard ones you have) once you have enough to form an abstract belief through inductive generalization, it is harder to make any real use of new information as it comes in. There is already a strongly colored opinion in place and guides to action that don’t rely on knowing things. Your abstractions also accumulate metanorms, and give you an increasing array of reasons to not include new information in your world view. [...]

Foxes are fundamentally Big Data native people. They operate on the assumption that it is cheaper to store new information than to decide what to do with it. Hedgehogs are fundamentally not Big Data native. If they can’t structure it, they can’t store it, and have to throw it away. If they can structure it with an abstraction, they don’t need to store most of it. Only a few critical details to fit the Procrustean bed of their abstraction.

Because foxes resist the temptation of abstraction (and therefore the temptation to throw away examples of patterns once an inductive generalization and/or metanorm has been arrived at, or stop collecting), they slowly gain an advantage over time, as the data accumulates: the Tetlock edge.

We can restate the Archilocus definition in a geeky way: The fox has one big, unstructured dataset, the hedgehog has many small structured datasets.

But this takes a long time and a lot of stamp collecting, and foxes have to learn to survive in the meantime. Young foxes can be particularly intimidated by old hedgehogs, since the latter are likely to have accumulated more data in absolute terms.

Replies from: maxikov
comment by maxikov · 2014-11-12T21:21:57.858Z · LW(p) · GW(p)

That is very interesting and definitely worth reading. One thing though, it seems to me that a rationalist hedgehog should be capable of discarding their beliefs if the incoming information seems to contradict them.

comment by Vaniver · 2014-11-12T14:28:25.185Z · LW(p) · GW(p)

On the other hand, you can imagine data-driven epistemology, where you don't really formulate any hypotheses. You just have a lot of pattern-matching power, completely agnostic of the knowledge domain

When you say "pattern-matching," what do you mean? Because when I imagine pattern-matching, I imagine that one has a library of patterns, which are matched against sensory data- and those library of patterns are the 'hypotheses.'

But where does this library come from? It seems to be something along the lines of "if you see it once, store it as a pattern, and increase the relevance as you see it more times / decrease or delete if you don't see it enough" which looks like an approximation to "consider all hypotheses, updating their probability upward when you see them and try to keep total probability roughly balanced."
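
A toy version of that view (my own illustration, not a claim about how any particular learner works): a pattern library whose decaying counts behave like an approximate posterior restricted to the patterns seen so far.

```python
from collections import Counter

class PatternLibrary:
    """Store only patterns actually seen; their relative counts play the role of
    probabilities over the library, with decay standing in for down-weighting."""
    def __init__(self, decay=0.99):
        self.counts = Counter()
        self.decay = decay  # slowly forget patterns we stop seeing

    def observe(self, pattern):
        for p in self.counts:
            self.counts[p] *= self.decay
        self.counts[pattern] += 1.0

    def beliefs(self):
        total = sum(self.counts.values())
        return {p: c / total for p, c in self.counts.items()}

lib = PatternLibrary()
for obs in ["rain", "rain", "sun", "rain"]:
    lib.observe(obs)
print(lib.beliefs())  # 'rain' dominates; 'sun' is retained but down-weighted
```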


That is, I think we agree; but I think when we use phrases like "pattern-matching" it helps to be explicit about what we're talking about. Distinguishing between patterns and hypotheses is dangerous!

Replies from: maxikov
comment by maxikov · 2014-11-12T21:50:03.589Z · LW(p) · GW(p)

Probably a better term would be "unsupervised learning". For example, deep learning and various clustering algorithms allow us to figure out whether the data had any sorts of non-temporal regularities. Or we may try to see whether the data predicts itself - if we see X, in Y seconds we'll see Z. That doesn't seem to be equivalent to considering infinitely many hypotheses. In Solomonoff induction, a hypothesis is an algorithm capable of generating the data, and based on the new incoming information, we can decide whether the algorithm fits the data or not. In unsupervised learning, on the other hand, we don't necessarily have an underlying model, or the model may not be generative.

Replies from: Vaniver
comment by Vaniver · 2014-11-13T04:44:04.564Z · LW(p) · GW(p)

For example, deep learning and various clustering algorithms allow us to figure out whether the data had any sorts of non-temporal regularities. ... That doesn't seem to be equivalent to considering infinitely many hypotheses.

I think it's useful to think of the parameter-space for your model as the hypothesis-space. Saying "our parameter-space is R^600" instead of "our parameter-space is all possible algorithms" is way more reasonable and computable, but what it would mean for an unsupervised learning algorithm to have no hypotheses would be that it has no parameters (which would be worthless!). Remember that we need to seed our neural nets with random parameters so that different parts develop differently, and our clustering algorithms need to be seeded with different cluster centers.

Replies from: maxikov
comment by maxikov · 2014-11-13T05:13:42.971Z · LW(p) · GW(p)

Does it mean then that neural networks start with a completely crazy model of the real world, and slowly modify this model to better fit the data, as opposed to jumping between model sets that fit the data perfectly, as Solomonoff induction does?

Replies from: Vaniver
comment by Vaniver · 2014-11-13T15:28:15.868Z · LW(p) · GW(p)

Does it mean then that neural networks start with a completely crazy model of the real world, and slowly modify this model to better fit the data

This seems like a good description to me.

as opposed to jumping between model sets that fit the data perfectly, as Solomonoff induction does?

I'm not an expert in Solomonoff induction, but my impression is that each model set is a subset of the model set from the last step. That is, you consider every possible output string (implicitly) by considering every possible program that could generate those strings, and I assume stochastic programs (like 'flip a coin n times and output 1 for heads and 0 for tails') are expressed by some algorithmic description followed by the random seed (so that the algorithm itself is deterministic, but the set of algorithms for all possible seeds meets the stochastic properties of the definition).

As we get a new piece of the output string--perhaps we see it move from "1100" to "11001"--we rule out any program that would not have output "11001," which includes about half of our surviving coin-flip programs and about 90% of our remaining 10-sided die programs. So the class of models that "fit the data perfectly" is a very broad class of models, and you could imagine neural networks as estimating the mean of that class of models instead of every instance of the class and then taking the mean of them.
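
A toy version of that filtering step (my own illustration, restricted to a tiny program class rather than all programs): treat each possible 5-bit seed as one deterministic "coin-flip program" whose output is just that string, and discard the programs inconsistent with the observed prefix.

```python
import itertools

# Each 5-bit string stands for one seeded coin-flip program.
programs = ["".join(bits) for bits in itertools.product("01", repeat=5)]

def surviving(programs, observed_prefix):
    """Keep only the programs whose output is consistent with what we've seen so far."""
    return [p for p in programs if p.startswith(observed_prefix)]

for prefix in ["1", "11", "110", "1100", "11001"]:
    print(prefix, len(surviving(programs, prefix)), "of", len(programs), "survive")
```

Each new observed bit halves the surviving set, which is the "ruling out" described above; a neural network, by contrast, nudges its parameters toward the surviving region rather than tracking the set explicitly.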

comment by Gunnar_Zarncke · 2014-11-10T09:07:54.870Z · LW(p) · GW(p)

I see a lot of self-help books and posts following the general pattern: don't read all the advice and apply it all at once, but read and master it step by step (mostly really urging you not to continue reading). I think this is a sound approach which could be applied more often. It is a kind of clicker-training advice applied at a high level. I wonder about the best granularity. The examples below use between 3 and about 100 steps, and I'd guess that more is better here - if possible. But it may depend on the topic at hand.

Examples:

Probably you can think of lots more...

Replies from: Vaniver, Viliam_Bur
comment by Vaniver · 2014-11-12T15:08:53.222Z · LW(p) · GW(p)

Carnegie's How to Win Friends and Influence People has a slightly parallel approach: reread the book on a regular schedule, as you'll notice things the fourth time that you didn't notice the third, because your skill growth puts you in a different place relative to the material. (Maybe he also recommends not reading the whole book at once and I just forgot that part; I think he does encourage people to read just the parts they want to.)

It seems to me that rereading is likely to be more effective than staggering the reading, and that rereading enables staggering. One of my childhood English teachers was more forgiving of reading in class than other teachers, and had a small rack of books available; one of them was Watership Down, which I read cover to cover probably ~7 times, and afterwards I could just open to a random page, immediately place myself in the story at that point, and read from there.

This also calls to mind the practice among more serious Christians of reading the Bible once a year- it takes about four pages a day, and does not take many years for much of it to be very familiar. Muslims have the term "Hafiz" for someone who has memorized the Quran, which typically takes several years of focused effort, and I don't think Christians have a comparable term, but I've definitely noticed phrases along the lines of "quote chapter and verse" for when people had sizeable blocks of the Bible memorized.

comment by Viliam_Bur · 2014-11-10T09:28:45.671Z · LW(p) · GW(p)

Now when I think about rationality seminars -- aren't they analogous to reading the whole book at once? So the proper approach would instead be an hour or two, once every week or two (as long as it takes to master the lesson). But that would make travelling really expensive, so instead the lessons would have to be remote. Perhaps explaining the topic in a YouTube video, then having a Skype debate, homework, and a mailing list only for discussing the current homework.

Replies from: Vaniver
comment by Vaniver · 2014-11-12T15:13:07.424Z · LW(p) · GW(p)

Three CFAR tools come to mind that reduce this somewhat:

First is the practice of "delegating to specific future selves." You plan a specific time ("two weeks from now, Sunday, in the morning") to do a specific task ("look through my workshop notes to figure out what things I want to focus on, and again delegate those things to specific future selves"), and they explicitly suggest using this on the seminar materials and notes.

Second is the various alumni connection mechanisms - a few people have set up groups to go through the materials again, there are people who chat regularly on Skype, and so on.

Third is the rationality dojo in the CFAR office (so only applicable for the local / visiting alums) that meets weekly, I believe.

comment by Metus · 2014-11-10T14:42:50.188Z · LW(p) · GW(p)

If you have not yet read Jaynes' Probability Theory I urge you to do so. If you are not willing to read almost a thousand pages, just read the preface.

Started yesterday and I can't keep my eyes off it.

Replies from: MrMind, Risto_Saarelma
comment by MrMind · 2014-11-11T10:01:07.717Z · LW(p) · GW(p)

Seconded. There are a lot of clever ideas that I haven't seen anywhere in other probability books: A_p distributions, group invariance, the derivation of an ignorance prior as a multi-agent problem, etc.

The only chapter lacking (due to obsolescence) is the one about quantum mechanics. Jaynes advocates (although implicitly) a hidden-variables theory, but Bell's and Kochen-Specker's theorems have since imposed heavy constraints on those.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-11-12T03:49:27.572Z · LW(p) · GW(p)

You seem to imply that Jaynes was writing before Bell. That is not true by many decades. I suppose it is possible that the chapter is based on a paper he wrote before Bell, but he had half a century to revise it.

Replies from: army1987
comment by A1987dM (army1987) · 2014-11-12T12:01:22.637Z · LW(p) · GW(p)

Jaynes thought he had found an error in Bell's theorem, but he was wrong. (I wrote a comment somewhere on LW about this before; I'll link to it as soon as I find it.)

I'm under the impression that he was so committed to the idea that probabilities are due to our ignorance rather than to any intrinsic indeterminacy of nature that he got mind-killed. (I wonder whether he had ever heard (and seriously thought) about the MWI.)

Replies from: IlyaShpitser, pragmatist, VAuroch
comment by IlyaShpitser · 2014-11-12T12:55:36.643Z · LW(p) · GW(p)

http://arxiv.org/pdf/physics/0411057.pdf

That is a remarkable error, actually. As far as I can tell, it's basically denying that conditional independence is possible in Nature (!?)


The existence of Bell's inequality is basically a theorem about marginals of Bayesian networks with hidden variables. If you get an independence in the underlying Bayes net, you sometimes get an inequality in the marginal. This is not about causality at all, or about physics at all; it is a logical consequence of a conditional independence structure. It does not matter if it is causal or physical or not. Bell's theorem is about this graph: A -> B <- H -> C <- D, where we marginalize out H. My "friend in Jesus" Robin Evans has some general conditions on graphs for when this sort of thing happens.
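(A minimal sketch, not from the paper above: in the standard CHSH form, any model where each outcome depends only on its local setting plus a shared hidden variable is a mixture of deterministic strategies, so the bound on the marginal can be checked by enumerating those strategies.)

```python
# Sketch: enumerate deterministic local hidden-variable strategies and check
# the CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
from itertools import product

best = 0
for a, a2, b, b2 in product([-1, 1], repeat=4):
    # a, a2: Alice's predetermined outcomes for her two settings;
    # b, b2: Bob's predetermined outcomes for his two settings.
    s = a * b + a * b2 + a2 * b - a2 * b2
    best = max(best, abs(s))

print(best)  # prints 2, the bound on the marginal; quantum mechanics reaches 2*sqrt(2)
```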

comment by pragmatist · 2014-11-15T15:55:45.365Z · LW(p) · GW(p)

Jaynes was aware of MWI. Jaynes and Everett corresponded with one another, and Jaynes read a short version of Everett's Ph.D. dissertation (in which MWI was first proposed and defended) and wrote a letter commenting on it. You can read the letter here. He seems to have been very impressed by the theory, describing it as "the logical completion of quantum mechanics, in exactly the same sense that relativity was the logical completion of classical theory". Not entirely sure what he meant by that.

comment by VAuroch · 2014-11-13T11:10:47.894Z · LW(p) · GW(p)

I don't think it's mind-killed. It's possible to reject the premise of the Bell inequality by rejecting counterfactual definiteness, and this is a small but substantial minority view. MWI then takes this a step further and rejects factual definiteness, but this is not the standard way in which it's presented. So someone who has issues with the notion of "Alice makes a decision 'of her own free choice', unaffected by events in her past light cone", but has never encountered the descriptions of MWI which mention factual and counterfactual definiteness, can justifiably believe that, contrary to appearances, some hidden-variable or superdeterminist theory must be true.

I speak from personal experience, here. Up until about a year ago, I held two beliefs that I recognized were in defiance of the standard scientific conclusions, both on logical grounds. One was belief in hidden variable theories of quantum physics; the other was belief that the Big Crunch theory must be correct, rather than the Big Chill (on counter-anthropic grounds; a Big Chill universe would be the last of all universes, and that we should happen to live in the last universe, which happens to be well-tuned for life, strains credulity). Upon realizing that MWI solved the problems that led me to hidden-variable theories, and also removed the necessity for an infinite succession of universes, thus reconciling the logical non-exceptionalist argument and the Big Chill data, I switched to believing in MWI.

comment by Risto_Saarelma · 2014-11-10T16:59:46.226Z · LW(p) · GW(p)

Started yesterday

So I guess you haven't read very far yet? The first chapter is great, but the demand for mathematical fluency goes up fast from there.

Replies from: Metus
comment by Metus · 2014-11-10T20:56:57.552Z · LW(p) · GW(p)

Working my way through chapter three currently. I have a strong background in mathematics relative to the average physics student. In any case, so far the exact derivations were not important and the substance is repeated in prose, as should be standard in any good mathematics book.

comment by Metus · 2014-11-10T14:39:48.618Z · LW(p) · GW(p)

I've been rereading some of the older threads and urge you to do so too. There might be stuff you didn't see the first time around and there will be stuff you noted but have forgotten.

Replies from: drethelin
comment by drethelin · 2014-11-10T18:47:33.734Z · LW(p) · GW(p)

Seconding this. A good way to find good old threads is to go through the highest ranked comments and click through to the thread, or look at the back comments of good but non-prolific commenters like vassar.

Replies from: None, Gunnar_Zarncke
comment by [deleted] · 2014-11-12T18:48:12.695Z · LW(p) · GW(p)

Wei Dai provided a fantastic tool to read the highest rated comments for a given user. You can find it linked to here.

comment by Gunnar_Zarncke · 2014-11-10T20:16:51.925Z · LW(p) · GW(p)

I also found that browsing through the comments and submissions of the top contributors (karma total or last 30 days) produces a trove of insightful and interesting material. The downside: it eats a lot of time. I needed to limit it. Procrastination warning.

Sidetrack: Maybe we should cultivate a habit of automatically turning away from procrastination tasks if not otherwise mandated.

comment by Kaj_Sotala · 2014-11-10T10:46:07.161Z · LW(p) · GW(p)

I've read Yvain's Meditations on Moloch and some of the stuff from the linked On Gnon, but I'm still unclear on what exactly Gnon is supposed to signify. Does someone have a good explanation?

Replies from: Toggle, jaime2000, shminux, ZankerH
comment by Toggle · 2014-11-10T14:47:47.767Z · LW(p) · GW(p)

IANANRx, but I think the maximally charitable answer is "Nature; especially, the biological, physiological, and game-theoretical constraints within which any society and culture must operate." By extension, a culture neglecting these constraints is necessarily in a state of collapse- a faux perpetual motion machine may move for a few moments because of initial momentum in the system, but it must necessarily halt.

As an additional corollary, homeostatic societies (which were, presumably, not in a state of collapse) must have been acting within these constraints. Therefore, long-running traditional cultures most clearly illustrate the terms of compliance with Gnon.

Replies from: bogus, Kaj_Sotala, fubarobfusco
comment by bogus · 2014-11-10T15:19:11.561Z · LW(p) · GW(p)

Therefore, long-running traditional cultures most clearly illustrate the terms of compliance with Gnon.

This is why deep ecologists and 'Soylent Greens' often advocate tribal-like structures, as found in hunter-gatherer societies. But this clearly raises a question: how do we know which kinds of technological or social evolution are compatible with Gnon? Is greenwashed "natural capitalism" good enough, or do we need to radically simplify our lives in the name of sustainability? Or even forsake all kinds of technology and go primitivist?

comment by Kaj_Sotala · 2014-11-10T15:12:38.001Z · LW(p) · GW(p)

Thanks, your answer together with jaime2000's clarified things considerably.

comment by fubarobfusco · 2014-11-11T18:32:41.127Z · LW(p) · GW(p)

A critique of the general concept: A culture may remain "in a state of collapse" for a long, long time. It's a little like saying "as soon as you're born, you start dying" — it's a statement more about the speaker's attitude toward life or society than about the life or society being described.

(Moreover, homeostasis only works until invaded. That's why there ain't no more moa in old Aotearoa.)

Replies from: Toggle, Lumifer
comment by Toggle · 2014-11-11T18:56:39.617Z · LW(p) · GW(p)

In terms of instrumental goals ('keep society functioning'), I think these are secondary concerns. A person might believe that we are all in a perpetual state of decay; a doctor finds it necessary to understand the kidneys of a high-functioning adult so that later problems may be diagnosed and fixed. Even if decay itself might take a long time- and even if decay is ultimately inevitable- there are reasons to want to understand and replicate the rules that provide access to 'doing okay, for now'.

Departing from my steelman for a moment, I think a more pressing concern with the model might be a poor understanding of the environmental pressures on specific societies. Homeostasis is contextual- gills are a bad organ for somebody like me to have. In the case of human societies, it's not obvious what these environmental pressures might be, or what consequences they might have. Technology is certainly one of them, as are other human societies, as are material resources and so on, but it's just not a well constrained problem. Does internet access alter the most stable implementations of copyright law? Does cheap birth control change the most economically viable praxis of women's education? Would we expect Mars colonization to result from a new cold war? So I think it is not enough to show that a society endured- you have to show that the organs of that society act as solutions to currently existing problems, otherwise they are likely to multiply our miseries.

(Rejoinder to the rejoinder: Chesterton's Fence.)

comment by Lumifer · 2014-11-11T18:50:31.878Z · LW(p) · GW(p)

A critique of the general concept: A culture may remain "in a state of collapse" for a long, long time.

I think the "in a state of collapse" expression is a bit misleading with wrong connotations. A culture neglecting the real-world constraints is not necessarily collapsing. A better analogy would be swimming against the current -- you can do it for a while by spending a lot of energy, but sooner or later you'll run out and the current will sweep you away.

Replies from: Nornagest
comment by Nornagest · 2014-11-11T18:57:46.737Z · LW(p) · GW(p)

What is energy in this analogy, and where does it come from?

Replies from: Lumifer
comment by Lumifer · 2014-11-11T19:04:50.361Z · LW(p) · GW(p)

In the most general approach, negentropy. In the context of human societies, it's population, talent, economic production, power. Things a society needs to survive, grow, and flourish.

Replies from: Nornagest
comment by Nornagest · 2014-11-11T19:21:19.420Z · LW(p) · GW(p)

A lot of that doesn't look like the kind of thing societies consume, more like the substrate they run on. At least aside from a few crazy outliers like the Khmer Rouge.

I'm having a hard time thinking of policy regimes that require governments to trade off future talent, for example, for continued existence. Maybe throwing a third of your male population into a major war would qualify, but wars that major are quite rare.

Replies from: NancyLebovitz, Azathoth123, fubarobfusco
comment by NancyLebovitz · 2014-11-12T14:34:39.500Z · LW(p) · GW(p)

Tentatively-- keeping the society poor and boring. Anyone who can leave, especially the smarter people, does leave. This is called a brain drain.

comment by Azathoth123 · 2014-11-12T01:10:50.133Z · LW(p) · GW(p)

Literally borrowing ever increasing amounts of money against future generations' productivity.

Having social policies that lead to high IQ people reproducing less.

comment by fubarobfusco · 2014-11-11T22:57:05.214Z · LW(p) · GW(p)

Maybe throwing a third of your male population into a major war would qualify, but wars that major are quite rare.

They are now, anyway.

The Ottoman Empire lost 13-15% of its total population in WWI, by far the worst proportional losses of that war, particularly from disease and starvation.

In WWII, Poland lost 16%, the Soviet Union lost 13%, and Germany 8-10%.

In the U.S. Civil War, the U.S. as a whole lost 3% of its population, including 6% of white Northern males and 18% of white Southern males.

Replies from: Nornagest
comment by Nornagest · 2014-11-11T23:17:14.077Z · LW(p) · GW(p)

Rare, not nonexistent. The World Wars are the main recent exception I was gesturing towards, although more extreme examples exist on a smaller scale: the Napoleonic Wars killed somewhere on the order of a third of French men eligible for recruitment, for example. And they were rarer before modern mass conscription, although exceptions did exist.

comment by jaime2000 · 2014-11-10T15:03:28.230Z · LW(p) · GW(p)

Gnon is reality, with an emphasis towards the aspects of reality which have important social consequences. When you build an airplane and fuck up the wing design, Gnon is the guy who swats it down. When you adopt a pacifist philosophy and abolish your military, Gnon is the guy who invades your country. When you are a crustacean struggling to survive on the ocean floor, Gnon is the guy who turns you into a crab.

Basically, reality has mathematical, physical, biological, economical, sociological, and game-theoretical laws. We anthropomorphize those laws as Gnon.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-11-10T15:12:20.941Z · LW(p) · GW(p)

Thanks, your answer together with Toggle's clarified things considerably.

(Also, that crab thing is fascinating.)

Replies from: jaime2000, army1987
comment by jaime2000 · 2014-11-10T16:12:02.930Z · LW(p) · GW(p)

(Also, that crab thing is fascinating.)

Oh, definitely. It's a really good analogy for the NRx view of civilization, too. That's why Gnon's symbol is a crab.

If you want to read another non-obscurantist explanation of Gnon, try Nyan Sandwich's "Natural Law and Natural Religion".

Replies from: None
comment by [deleted] · 2014-11-12T03:35:41.646Z · LW(p) · GW(p)

Gnon's symbol is a crab because someone had to slip subtle pro-Maryland propaganda into the memeplex.

comment by A1987dM (army1987) · 2014-11-11T12:11:10.780Z · LW(p) · GW(p)

(Also, that crab thing is fascinating.)

See also: List of examples of convergent evolution

comment by Shmi (shminux) · 2014-11-10T16:16:52.328Z · LW(p) · GW(p)

Seems like they take Feynman's "reality must take precedence over public relations, for nature cannot be fooled" and add a bit of mysticism, resulting in "Nature is out to get you, constant vigilance, citizen!". Certainly makes the message easier to internalize for those who already think they live in a hostile environment.

Replies from: Lumifer, NancyLebovitz
comment by Lumifer · 2014-11-10T16:22:37.434Z · LW(p) · GW(p)

"Nature is out to get you, constant vigilance, citizen!"

That's just a less-pithy version of "The perversity of the Universe tends towards a maximum", one of the formulations of Finagle's Law.

comment by NancyLebovitz · 2014-11-10T17:01:30.631Z · LW(p) · GW(p)

And then we get to the hard question-- how do we decide what is true about nature?

Replies from: shminux
comment by Shmi (shminux) · 2014-11-10T17:16:04.427Z · LW(p) · GW(p)

The usual way, make models (and metamodels) and refine them to explain and predict better.

comment by ZankerH · 2014-11-10T11:29:34.260Z · LW(p) · GW(p)

As I understand it, the apatheist statement of "the laws of nature as they justify traditional societal hierarchy".

comment by Vaniver · 2014-11-14T04:10:07.414Z · LW(p) · GW(p)

Dewey believed that the [Alexander] Technique was a method of enlightening the emotions. At the intellectual level, according to Jones:

[Dewey] found it much easier, after he had studied the Technique, to hold a philosophical position calmly once he had taken it or to change it if new evidence came up warranting a change. He contrasted his own attitude with the rigidity of other academic thinkers who adopt a position early in their career and then use their intellects to defend it indefinitely.

From Body Learning by Michael Gelb, while he's quoting another book in the middle. I know there are a handful of other LWers out there who do Alexander, but it's a very interesting technique because it seems useful to know (I wouldn't be surprised if it's the inspiration for a lot of the skillful movement stuff in Dune, which made an impression on me as a child) but useless to discuss: there's not really a way to teach it except by touch, which doesn't scale very well.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-14T14:39:21.553Z · LW(p) · GW(p)

Alexander Technique is one somatic technique of many. All of them lead to skillful movement, and without background knowledge there is no reason to assume that this specific technique inspired Dune.

I don't think that discussing it is inherently impossible. It's just hard. It's even harder when the people you are talking with don't have the background to understand the basic claims and then want peer-reviewed research for every claim.

If you want to discuss issues around shifting your center of gravity, then the people you talk to have to know what you mean by shifting one's center of gravity, and that's not something that can be easily done via a blog post.

Replies from: Vaniver
comment by Vaniver · 2014-11-14T16:17:32.731Z · LW(p) · GW(p)

Alexander Technique is one somatic technique of many. All of them lead to skillful movement and without background knowledge there no reason to assume that a specific technique inspired Dune.

I was guessing based on timing, but looking into Herbert I'm not seeing any obvious influence. It's more a statement of Dune's impact on how I think about movement than it is about Herbert, it looks like.

I don't think that discussing it is inherently impossible. It just hard.

Well, you can certainly elaborate the basic intellectual edifice, as done for Zen here. My stab at it:

Humans are 'psychophysical systems' (read this as a rejection of dualism in practice rather than just philosophy). Most people don't use themselves skillfully. One of the key skills in using yourself skillfully is learning the skill of not doing habitual wasteful or harmful actions, and this entails unlearning habits and defaults.

The sort of person who reads LW probably has significant experience entering strange new conceptual territory, to wrap their mind around beliefs or opinions that seem totally alien to them; they probably don't have significant experience entering strange new physical territory, in the sense of moving or keeping their body in a manner that seems totally alien to them. And just as concepts that start off seeming alien can turn out to be helpful and grow familiar, so can physical mannerisms.

Communicating concepts in words is difficult but mostly doable. Communicating mannerisms in words is many times more difficult, and illusions of transparency even worse. Communicating mannerisms by touch is difficult but mostly doable. The communication difficulty is increased by the fact that the 'mannerism' involves the level of tension in the muscles and resistance to movement as well as the position of the joints. (I can show you a picture of how my shoulder is oriented; can I show you a picture of how readily it moves when you push or pull my hand?) Note also that many people spend years of focused effort in learning how to better communicate with words (both listening/reading and speaking/writing), and very little focused effort in learning how to better communicate with mannerisms (observing with sight or touch and demonstrating with example or touch).

It's even more hard when the people you are talking with don't have the background to understand the basic claims and then want peer reviewed research for every claim.

There seems to be a general heuristic of "if you can't articulate how you know X, I don't believe that you know X" that I am deeply ambivalent about using. On the one hand, it serves as an impetus to abstract and formalize knowledge and is useful as a cautionary principle against trickery. On the other hand, much (if not most!) knowledge cannot be easily articulated because it is stored in the form of muscle memory or network associations rather than clear logical links. I don't seem to rely on that heuristic very much, for reasons I haven't fully unpacked.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-15T13:33:20.325Z · LW(p) · GW(p)

Last Sunday I went to a 1 1/2 hour Grinberg Method presentation by a Grinberg teacher. At the end I innocently asked a deep question. After a bit of back and forth the teacher understood my question. On the other hand, someone practicing Grinberg professionally, with 1 year of professional training, didn't even understand my question.

On the other hand, much (if not most!) knowledge cannot be easily articulated because it is stored in the form of muscle memory or network associations rather than clear logical links.

Not only that. If a specific concept has withstood 100 separate attempts at falsifying it, I can be pretty confident in the concept. On the other hand, summarizing those 100 separate attempts at falsifying it can't be done in a LW post. Of course the concepts for which that's true are also quite central to the way I view the world.

There seems to be a general heuristic of "if you can't articulate how you know X, I don't believe that you know X" that I am deeply ambivalent about using.

When talking about somatics, it is often also useful to think "if you don't articulate how you know X, then I have no good idea what you mean when you say that you know X". Unfortunately that's quite unavoidable in this topic.

"If I claim that lowering my center of gravity will ground me and make it harder for someone to push me" then, the average person on LW likely does not have a concept of what that sentence means.

The communication difficulty is increased by the fact that the 'mannerism' involves the level of tension in the muscles and resistance to movement as well as the position of the joints.

Not only that. It also involves movement intentions. Movement intentions are not something trivial to explain.

At the beginning of the year I was at a Bachata congress taking a workshop. The teacher announced to the group that he would do something and the audience was supposed to tell him what he did. He did the basic step and changed his movement intention from up to down and back a few times. I was the only person who noticed that. He said that nobody had ever noticed it before in his workshops, and he teaches at a different congress most weeks. The kind of people who go to dance congresses are not totally incompetent at human movement, and still he usually does this and nobody can tell him what he's doing. To me it looks quite obvious, but then I've spent a lot of time with somatics (though I still have no professional training).

Concepts like tension, muscles, resistance to movement, and position of joints are all ideas for which I assume that most people on LW have phenomenological primitives. Movement intention isn't like that. It's not something somebody in school told you about.

As far as the concept of muscles goes, I'm currently reading Anatomy Trains, which includes the nice passage:

If the elimination of the muscle as a physiological unit is too radical a notion for most of us to accept, we can tone it down in this way: In order to progress, contemporary therapists need to think 'outside the box' of this isolated muscle concept.

That's where it gets conceptually interesting, and unfortunately that's not ground that's easy to discuss on LW, or for that matter on any online forum I know of.

comment by CAE_Jones · 2014-11-14T04:04:46.601Z · LW(p) · GW(p)

I don't know how to be less boring. It does not help that most people aren't the least bit interesting.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2014-11-14T14:41:29.906Z · LW(p) · GW(p)

Boring means different things to different people. I personally like a deep intellectual discussion.

Other people value other characteristics. Many people are not boring when they are unstifled and just follow their impulses.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-11-14T16:21:18.782Z · LW(p) · GW(p)

I personally like a deep intellectual discussion.

Even two erudites can find each other boring if their views are irreconcilable.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-15T16:12:06.629Z · LW(p) · GW(p)

That assumes the goal of discussion is to reconcile views. If I meet someone who thinks radically differently from me, that's an interesting opportunity for me to understand a new perspective.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-11-20T14:37:43.898Z · LW(p) · GW(p)

Do you set apart a specific mix of circumstances for the pursuit of Aumann agreement?

Replies from: ChristianKl
comment by ChristianKl · 2014-11-20T15:01:35.891Z · LW(p) · GW(p)

Agreement is seldom a goal for me in intellectual discussion. The goal is rather to learn something new or to let the person I'm discussing with learn something new. It's about exploration of ideas.

comment by Lumifer · 2014-11-14T15:59:26.292Z · LW(p) · GW(p)

"I drink to make other people more interesting" -- Ernest Hemingway

But don't forget that this didn't work out well for him.

Otherwise, the solution is to find interesting people. Internet helps a LOT.

comment by pinyaka · 2014-11-11T18:14:01.874Z · LW(p) · GW(p)

My spouse has agreed to give up either chicken or beef. Beef is significantly worse than chicken from an environmental standpoint, but more chickens die (possibly after suffering) to feed us. How can I compare the two different ethical dimensions and decide which to eliminate?

Which would you eliminate?

Replies from: pcm, None, Douglas_Knight, DanielFilan, pragmatist, Baughn, Eliezer_Yudkowsky
comment by pcm · 2014-11-13T01:54:11.357Z · LW(p) · GW(p)

If you can afford pasture-raised chicken or grass-fed beef, the animal suffering consideration becomes less important than if you're eating factory-farmed animals.

Replies from: pinyaka
comment by pinyaka · 2014-11-13T19:33:25.141Z · LW(p) · GW(p)

That is a fair point. We already do that, with our chicken, beef, and pork all sourced locally and the animals treated about as well as you could expect. In fact, there may even be a Hansonian argument that these animals generally have lives that are worth living, even taking into account that their last day may be fairly awful. I'm not sure how to factor that in either. I guess if the crappy lives of many factory-farmed chickens outweigh the crappy life of one factory-farmed cow, the nice lives of many hippy-farmed chickens should still outweigh the nice life of one hippy-farmed cow.

comment by [deleted] · 2014-11-12T18:52:26.607Z · LW(p) · GW(p)

Also consider which choice is more likely to stick, or make future ethical choices seem reasonable.

comment by Douglas_Knight · 2014-11-12T04:00:28.892Z · LW(p) · GW(p)

Why choose one? If you aren't sure which is worse, maybe you should assume that they are about equal. Then you should reduce total consumption. Is eliminating one option going to help you do that? Or will the other grow to fill the void?

Replies from: gothgirl420666, pinyaka, polymathwannabe
comment by gothgirl420666 · 2014-11-13T17:17:19.527Z · LW(p) · GW(p)

It's easier to follow a hard-and-fast rule than it is to promise yourself you'll do less of something.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-11-14T03:34:17.971Z · LW(p) · GW(p)

Yes, but it's just a commitment to an instrumental goal. To repeat myself: will total consumption actually change, or will the one fill the void of the other? If you go to a wedding where you are offered a choice between fish and beef, the right ban does force the choice of fish, but most menus are longer than that; in particular, cooking at home offers the longest menu.

comment by pinyaka · 2014-11-13T19:35:44.754Z · LW(p) · GW(p)

My goal is to get to neither. My partner is willing to eliminate one and I think that showing that we can substitute veggies for one form of meat will make an emotionally stronger case later that we can make the same substitution for a different meat.

Replies from: Douglas_Knight, eeuuah
comment by Douglas_Knight · 2014-11-14T03:31:14.179Z · LW(p) · GW(p)

If you think it is going to be a temporary phase, then it is even less important which one you choose.

But, again, flesh and fowl are fungible. Will eliminating one actually reduce your consumption? Perhaps setting a quota for how much meat to buy on weekly grocery store trips, or going by days of the week (the most popular method in the world!) would be more effective.

Replies from: pinyaka
comment by pinyaka · 2014-11-16T02:03:44.002Z · LW(p) · GW(p)

The current plan is to eliminate meat from lunch and substitute veggie soups and some other kinds of sandwiches (we have a panini grill and I know a few good vegetarian options). Also, we're going to swap in a veggie pizza once per week. It may be a temporary phase, but that is not the goal for either of us.

comment by eeuuah · 2014-11-21T03:56:05.190Z · LW(p) · GW(p)

If your goal is for this to be a temporary step, pick whichever one will make a stronger argument. I.e. if one has much better substitutes available, get rid of it now.

comment by polymathwannabe · 2014-11-12T15:34:23.788Z · LW(p) · GW(p)

I thought the same. From the way the choice is framed, animal suffering is not a factor to consider. It should be, but if you really were considering it, you'd give up both.

Replies from: pinyaka
comment by pinyaka · 2014-11-13T19:25:51.104Z · LW(p) · GW(p)

Animal suffering and environmental impact are the primary factors for me, but I'm weakly motivated and don't think I'll be able to change my habits without my partner changing her eating habits as well (she prepares most of our meals because she likes cooking and I do not). Animal suffering is not important to her, and she's had some health problems on a vegetarian diet before, so she's only willing to cut one form of meat and see how that goes before cutting further. I'd like to cut the one that generates the most problems, replace it with vegetable products, and establish a new, better equilibrium first. I do think that I'll be better at planning vegetarian replacements than she was, so I'm optimistic that eventually we'll get to pescetarian at least, but I wanted to get input on how to think about the first step.

comment by DanielFilan · 2014-11-16T04:08:30.557Z · LW(p) · GW(p)

Chicken. Far more chickens die per amount of meat, and I suspect that they have worse lives, since it is probably easier to keep a whole bunch of chickens in a small space and cut off bits of them without anaesthesia. Brian Tomasik writes about this question here, although be warned that there are some pretty nasty pictures that you will have to scroll past to get to his estimate of the numbers.

Replies from: pinyaka
comment by pinyaka · 2014-11-17T19:35:09.607Z · LW(p) · GW(p)

I don't worry too much about their living conditions. We already eschew factory-farmed meats, so the chicken is free range and the cattle are raised and butchered by people with a religious obligation to treat the animals relatively well. These are definitely good things to consider, though.

comment by pragmatist · 2014-11-12T07:15:08.798Z · LW(p) · GW(p)

I eat chicken but I don't eat mammals. This is partly for environmental reasons, but it is also because my ethics are not cosmopolitan. I think beings that are more cognitively similar to me are owed more moral concern (by me, not everyone else), not merely because they are more likely to be sentient or sapient or whatever, but because they are more likely to share my interests and outlook on the world, have emotions that I can identify with, etc. So I believe that I have greater moral obligations to my family and friends than to strangers, greater moral obligations to humans than to great apes, and so on. In the absence of contrary evidence, I use distance on the evolutionary tree as a proxy for cognitive distance. On those grounds, I am pretty uncomfortable with the suffering that cattle (and other mammals) undergo in the factory farming industry. I am significantly less uncomfortable about the suffering that chickens undergo.

So I guess my point is that you shouldn't be weighing chicken suffering against cattle suffering on a one-to-one scale, because completely cosmopolitan ethical systems are wrong. Our sphere of moral concern shouldn't work like an absolute threshold, where we have equal concern for all entities within the sphere and no concern for any entity outside it. Instead, it should gradually attenuate with distance. I probably can't convince you of all this in a single comment, but perhaps you should at least consider it as a morally relevant possibility.

comment by Baughn · 2014-11-12T02:48:22.220Z · LW(p) · GW(p)

I would add that (vague recollection upcoming:) chicken might be healthier than beef, if you're just going to eat one of them.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-11-12T06:58:04.485Z · LW(p) · GW(p)

Beef. Chickens are even less likely to be sentient by a considerable margin.

Replies from: VAuroch, pinyaka
comment by VAuroch · 2014-11-13T10:39:36.108Z · LW(p) · GW(p)

Related to this and your recent poll on Facebook: Do you distinguish between sentience and sapience?

comment by pinyaka · 2014-11-13T19:27:48.718Z · LW(p) · GW(p)

I agree that chickens are less likely to be sentient, but is killing an animal with 50 sentience units worse than killing 10 animals with 5 sentience units? How is suffering likely to scale with sentience?

comment by Shmi (shminux) · 2014-11-13T05:47:12.513Z · LW(p) · GW(p)

Odds on the Philae comet lander finding a prevalence of the "right" kind of amino acids (conditional on the lander surviving long enough to run the experiment and transmit the results)?

I would guess < 1%.

Replies from: Nornagest
comment by Nornagest · 2014-11-14T19:04:21.991Z · LW(p) · GW(p)

I don't know much about organic chemistry, but amino acids are fairly simple molecules, and if memory serves they've been found free-floating in molecular clouds. I'd tentatively bet at 100:1 odds that the right isomers of at least the simpler ones will be found conditional on the experiment being run, since a comet's basically a giant slushball of compounds left over from those molecular clouds.

On the other hand, I'd put the odds at well under 1% that we'll see terrestrial-like isomer ratios. That would be tantamount to finding a biosphere -- an active one, not just the remains of one, since amino acids are photosensitive and tend toward an equilibrium isomer ratio unless biological processes are leaning on it.

Replies from: shminux
comment by Shmi (shminux) · 2014-11-14T21:17:55.017Z · LW(p) · GW(p)

since amino acids are photosensitive and tend toward an equilibrium isomer ratio

Well, the amino acid racemization rate is high (a timescale under 1M years) at room temperature, but probably much lower at 3K or whatever the comet's ambient temperature is, and the warm layers of the comet are regularly blown off anyway. Not sure about the effect of X- and gamma rays on the racemization rate, but if it is anything significant, then yeah, little chance of finding anything but 50/50.

EDIT: actually, having looked a bit more, a deviation from equal ratio is quite common in extraterrestrial objects, and is an open problem, so the odds of finding something non-racemized are much higher than 1%. D'oh.

comment by hydkyll · 2014-11-10T12:43:13.230Z · LW(p) · GW(p)

Is there actually good AI research somewhere in Europe? (Apart from what the FHI is doing.) Or: can the mission for FAI benefit at all from me doing my PhD at the AI lab of some university? (Which is my plan currently.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-11-10T22:38:07.310Z · LW(p) · GW(p)

Do you mean AI research or FAI research? (The FHI does not do AI research.)

Replies from: lfghjkl
comment by lfghjkl · 2014-11-11T00:19:30.810Z · LW(p) · GW(p)

Maybe he uses good as a synonym for friendly?

comment by SodaPopinski · 2014-11-10T20:50:10.789Z · LW(p) · GW(p)

Suppose we believe that stock market prices are very good aggregators of information about companies' future returns. What would be the signs that the "big money" is predicting (a) a positive postscarcity-type singularity event or (b) an apocalypse scenario, AI-induced or otherwise?

Replies from: Ander
comment by Ander · 2014-11-10T21:43:18.889Z · LW(p) · GW(p)

For (a), it would probably look like the late-90's dot-com bubble runup, except that it wouldn't end with a bubble burst and most of the companies going under; instead it would just keep going while the world dramatically changed.

For (b), I don't think we would really know until it had started, at which point things would go bad very, very quickly. I doubt that you could use price movements far in advance to predict it coming.

In general, markets can go down in price much faster than they went up. Scenario (a) would look like a continual parabolic rise, while (b) would just be a massive crash.

Replies from: Lumifer
comment by Lumifer · 2014-11-10T22:06:35.974Z · LW(p) · GW(p)

For (a), it would probably look like the late 90's dot com bubble runup

Why? In both cases money becomes meaningless post-singularity.

If you expect a happy singularity in the near future, you should actually pull your money out of investments and spend it all on consumption (or risk mitigation).

Replies from: Ander
comment by Ander · 2014-11-11T23:20:37.071Z · LW(p) · GW(p)

My idea was that for (a), money was becoming worthless but ownership of the companies driving the singularity was not. In that case, the price of shares in those companies would skyrocket towards infinity as everyone piled all of that soon-to-be-worthless money into them.

Of course, if the ownership of those companies was not going to matter either, then what you said would be true.

Replies from: Capla
comment by Capla · 2014-11-12T18:45:16.101Z · LW(p) · GW(p)

if the ownership of those companies was not going to matter

I think this is something that gets neglected (in part because it's not the relevant problem yet) in thinking about friendly AI. Even if we had solved all of the problems of stable goal systems, there could still be trouble, depending on whose goals are implemented. If it's a fast take-off, whoever cracks recursive self-improvement first basically gets godlike powers (in the form of a genie that reshapes the world according to your wish). They define the whole future of the expanding visible universe. There are a lot of institutions that I do not trust to have the foresight to think "we can create utopia beyond anyone's wildest dreams" rather than defaulting to "we'll skewer the competition in the next quarter."

comment by polymathwannabe · 2014-11-11T05:24:22.565Z · LW(p) · GW(p)

I have a hypothesis based on systems theory, but I don't know how much sense it makes.

A system can only simulate a less complex system, not one at least as complex as itself. Therefore, human neurologists will never come up with a complete theory of the human mind, because they'll not be able to think of it, i.e. the human brain cannot contain a complete model of itself. Even if collectively they get to understand all the parts, no single brain will be able to see the complete picture.

Am I missing some crucial detail?

Replies from: Illano, None, ChristianKl, Kaj_Sotala, MrMind, Adele_L, cameroncowan
comment by Illano · 2014-11-11T15:44:42.252Z · LW(p) · GW(p)

I think you may be missing a time factor. I'd agree with your statement if it was "A system can only simulate a less complex system in real-time." As an example, designing the next generation of microprocessors can be done on current microprocessors, but simulation time often takes minutes or even hours to run a simulation of microseconds.

comment by [deleted] · 2014-11-12T03:37:16.137Z · LW(p) · GW(p)

Institutions are bigger than humans.

Also the time thing.

comment by ChristianKl · 2014-11-11T16:13:34.884Z · LW(p) · GW(p)

The whole point of a theory is that it's less complex than the system you want to model. You are always making some simplifications.

comment by Kaj_Sotala · 2014-11-11T06:29:15.914Z · LW(p) · GW(p)

It's my understanding that nobody understands every part of major modern-day engineering projects (e.g. the space shuttle, large operating systems) completely, and the brain seems more complex than those, so this is probably right. That said, we still have high-level theories describing those, so we'll likely have high-level theories of the brain as well, allowing one to understand it in broad strokes if not in every detail.

comment by MrMind · 2014-11-11T09:21:17.348Z · LW(p) · GW(p)

A system can only simulate a less complex system, not one at least as complex as itself

It probably depends on what you mean by complexity. Surely a universal Turing machine can emulate any other universal Turing machine, given enough resources.

On the other side, neurological models of the brain need not be as complex as the brain itself, since much of the complexity is probably accidental.

comment by Adele_L · 2014-11-11T05:41:34.208Z · LW(p) · GW(p)

Seems unlikely, given the existence of things like quines, and the fact that self-reference comes pretty easily. I recommend reading Gödel, Escher, Bach; it discusses your original question in the context of this sort of self-referential mathematics, and is also very entertaining.
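(For concreteness, a minimal Python quine; this is the sense in which a program, or a formal system, can refer to and reproduce itself.)

```python
# The two lines below reproduce themselves exactly when run.
s = 's = %r\nprint(s %% s)'
print(s % s)
```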

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-11-11T06:45:20.543Z · LW(p) · GW(p)

Quines don't say anything about human working memory limitations or the amount of time a human would require for learning to understand the whole system, and furthermore only talk about printing the source code not understanding it, so I'm not sure how they're relevant for this.

Replies from: Adele_L
comment by Adele_L · 2014-11-11T15:17:05.487Z · LW(p) · GW(p)

I wouldn't be too surprised if the hypothesis is true for unmodified humans, but for systems in general I expect it to be untrue. Whatever 'understanding' is, the diagonal lemma should be able to find a fixed point for it (or at the very least, an arbitrarily close approximation) - it would be very surprising if it didn't hold. Quines are just an instance of this general principle that you can actually play with and poke around and see how they work - which helps demystify the core idea and gives you a picture of how this could be possible.

comment by cameroncowan · 2014-11-13T21:24:47.352Z · LW(p) · GW(p)

Unless from the beginning you create the system to accomplish a certain number of tasks and then build subsystems to complete them. That can mean creating systems and subroutines in order to accomplish a larger goal. Take stocking a store, for example:

There are a few tasks to consider:

  1. Price Changes
  2. Presentation
  3. Stocking product
  4. Taking away old product (or excess)

A large store like Target has 8 different, loosely connected teams that accomplish these tasks. That is a store system within a building: 8 different subroutines creating a system that, if it works at its best, makes sure that the store is perfectly stocked with the right presentation, amount of product, and the correct prices. That system of 8 subroutines is backed up by the 3 backroom subroutines that create the backroom system, which takes in product and makes it available for stocking; that system is backed up by the distribution center system, which is backed up by the transportation system (each truck and contractor working as a subroutine).

These systems and subroutines are created to accomplish one goal: to make sure that customers can find what they are looking for and buy it. I think using this idea we can start to create systems and subroutines that make it possible to replicate very complicated systems without losing anything.

comment by polymathwannabe · 2014-11-15T18:15:10.817Z · LW(p) · GW(p)

io9 popularizing Bayes' theorem.

comment by JQuinton · 2014-11-10T17:37:42.322Z · LW(p) · GW(p)

A friend of mine recently succumbed to using the base rate fallacy in a line of argumentation. I tried to explain that it was a base rate fallacy, but he just replied that the base rate is actually pretty high. The argument was him basically saying something equivalent to "If I had a disease that had a 1 in a million chance of survival and I survived it, it's not because I was the 1 in a million, it's because it was due to god's intervention". So I tried to point out that either his (subjective) base rate is wrong or his (subjective) conditional probability is wrong. Here's the math that I used, let me know if I did anything wrong:

Let's assume (swapping in aliens for god, to keep the example neutral) that the prior probability for aliens is 99%. The probability of surviving the disease given that aliens cured it is 100%. And of course, the probability of surviving the disease at all is 1 out of a million, or 0.0001%.

  • Pr(Aliens | Survived) = Pr(Survived | Aliens) x Pr(Aliens) / Pr(Survived)
  • Pr(Aliens | Survived) = 100% x 99% / 0.0001%
  • Pr(Aliens | Survived) = 1.00 * .99 / .000001
  • Pr(Aliens | Survived) = .99 / .000001
  • Pr(Aliens | Survived) = 990,000 or 99,000,000%

There's a 99,000,000% chance that aliens exist!! But... this is probability theory, and a probability can be at most 100%. If we end up with a result that is over 100% or under 0%, something in our numbers is wrong.

The Total Probability Theorem gives the denominator for Bayes' Theorem. In this aliens instance, that is the overall probability of surviving the disease, which is 1 out of a million. By the Total Probability Theorem, that 1 in a million must also equal Pr(Survived | Aliens) x Pr(Aliens) + Pr(Survived | Some Other Cause) x Pr(Some Other Cause):

  • 1 in a million = Pr(Survived | Aliens) x Pr(Aliens) + Pr(Survived | Some Other Cause) x Pr(Some Other Cause)
  • 0.0001% = 100% x 99% + ??? x 1%
  • 0.0001% = 99% + 1%*???

If we want to find ??? (in this case, Pr(Survived | Some Other Cause)), we solve for it just as we would solve for x in any basic algebra course. Here, our formula is 0.000001 = 0.99 + 0.01x.

If we solve for x, it is -98.999, or -9899%. Meaning that Pr(Survived | Some Other Cause) is -9899%. Again, a number that is outside the range of allowable probabilistic values. This means that there is something wrong with our input. Either the 1 in a million is wrong, the base rate of alien existence being 99% is wrong, or the 100% conditional probability that you would survive your 1 in a million disease due to alien intervention is wrong. The 1 in a million is already set, so either the base rate or conditional probabilities are wrong. And this is why that sort of "I could only have beaten the odds on this disease due to aliens" (or magic, or alternative medicine, or homeopathy, or Chthulu, or...) reasoning is wrong.

Again, remember the base rate. And you can't cheat by trying to jack up the base rate or you'll skew some other data unintentionally. Probability is like mass; it has to be conserved.
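(A quick numeric version of the same check, as a Python sketch using the hypothetical numbers above; coherent inputs must yield probabilities between 0 and 1.)

```python
# Hypothetical inputs from the argument above.
p_aliens = 0.99            # prior probability of aliens
p_surv_given_aliens = 1.0  # probability of surviving given aliens cured it
p_surv = 1e-6              # "1 in a million" overall survival probability

# Bayes' theorem with these inputs:
print(p_surv_given_aliens * p_aliens / p_surv)  # about 990000, i.e. "99,000,000%"

# Law of total probability, solved for Pr(Survived | Some Other Cause):
p_surv_given_other = (p_surv - p_surv_given_aliens * p_aliens) / (1 - p_aliens)
print(p_surv_given_other)  # about -99, also not a valid probability
```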

Replies from: MrMind, Osho, ChristianKl
comment by MrMind · 2014-11-11T09:56:08.210Z · LW(p) · GW(p)

Either the 1 in a million is wrong

This.

Since P(S) = P(S|A)P(A) + P(S|-A)P(-A), and P(S|A)P(A) is already .99, P(S) cannot be .000001.
Those two assertions contradict each other: you cannot coherently believe an event (surviving the illness) less than you believe one particular way it can happen (surviving the illness with the aid of magic).

If you believe that God will cure everyone who gets the disease (P(S|A) = 1) and God is already a near-certainty (P(A) = .99), then why do so few people survive the illness?

One possibility is that it's P(S|-A) that is one in a million (surviving without God is extremely rare). In this case:

P(A|S) = P(S|A) P(A) / P(S) -->
P(A|S) = P(S|A) P(A) / (P(S|A) P(A) + P(S|-A) P(-A)) -->
P(A|S) = 1 * .99 / (1 * .99 + .000001 * .01) -->
P(A|S) = .99 / (.99 + .00000001) -->
P(A|S) = .99 / .99000001 -->
P(A|S) = .9999999...

If you already believe that the curing aliens are all but a certainty, then surviving an illness that otherwise has only a one-in-a-million chance will for sure bring your belief up to almost a certainty.
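(A quick numeric check of this reading, as a Python sketch with the numbers above.)

```python
# One in a million is taken here as P(S|-A), not as P(S).
p_a = 0.99
p_s_given_a = 1.0
p_s_given_not_a = 1e-6

p_s = p_s_given_a * p_a + p_s_given_not_a * (1 - p_a)  # about 0.99000001
print(p_a * p_s_given_a / p_s)  # about 0.99999999, matching the .9999999... above
```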

Another possible interpretation, which keeps P(S) = .000001, is that P(S|A) is not a certainty. Possibly God will not cure everyone who gets the disease, but only those who deserve it, and this explains why so few survive.

In this case:

P(S) = x * .99 + y * .01 = .000001 (where x = P(S|A) and y = P(S|-A)) -->
P(A|S) = x * .99 / .000001

a number that depends on how many people God considers worthy of surviving.

comment by Osho · 2014-11-10T19:51:35.505Z · LW(p) · GW(p)

I think your denominator in your original equation is missing a second term. That is why you get a non-probability for your answer. See here: http://foxholeatheism.com/wp-content/uploads/2011/12/Bayes.jpg

comment by ChristianKl · 2014-11-11T17:01:46.807Z · LW(p) · GW(p)

The argument was him basically saying something equivalent to "If I had a disease that had a 1 in a million chance of survival and I survived it, it's not because I was the 1 in a million, it's because it was due to god's intervention". So I tried to point out that either his (subjective) base rate is wrong or his (subjective) conditional probability is wrong.

There is no reason why God should, in principle, be unable to choose which person out of the million survives. If you don't have a model of how the one in a million gets cured, you don't know that it wasn't the God of the gaps.

In medicine you do find people with theories according to which nobody should recover from cancer. The fact that there are cases in which the human immune system manages to kick out cancer does suggest that the orthodox view, according to which cancer develops when a single cell mutates and the immune system has no way to kill mutated cells, is wrong.

Today we have sessions with a psychologist as part of the standard of care for cancer patients, and we have pushed back breast cancer screening because a lot of the "cancers" that the screening found just disappear on their own, and it doesn't make sense to operate on them.

comment by Metus · 2014-11-10T14:41:15.445Z · LW(p) · GW(p)

I am reposting my question from the last open thread: where can I find reading groups for arbitrary books? I saw the Superintelligence reading group and realised that I am currently in a reading group for another book. Since my reading list is huge, I could use the mild social incentive of a reading club. Also, the comments are usually enlightening, drawing my attention to points I had not considered before.

I prefer discussion board based groups as I can skip the meaningless discussions.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-11-10T14:57:05.386Z · LW(p) · GW(p)

Have you tried Goodreads?

Replies from: Metus
comment by Metus · 2014-11-10T15:37:08.623Z · LW(p) · GW(p)

Embarrassingly, I am a member of that site and have not seen that kind of option. Where is it, and can you point me to relevant groups?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-11-10T15:43:52.968Z · LW(p) · GW(p)

The About page links to the Groups section:

http://www.goodreads.com/group

Replies from: Metus
comment by Metus · 2014-11-10T16:10:19.759Z · LW(p) · GW(p)

Joined the LessWrong group. Thanks.