2013 Less Wrong Census/Survey
post by Scott Alexander (Yvain) · 2013-11-22T09:26:38.606Z · LW · GW · Legacy · 620 comments
It's that time of year again.
If you are reading this post, and have not been sent here by some sort of conspiracy trying to throw off the survey results, then you are the target population for the Less Wrong Census/Survey. Please take it. Doesn't matter if you don't post much. Doesn't matter if you're a lurker. Take the survey.
This year's census contains a "main survey" that should take about ten or fifteen minutes, as well as a bunch of "extra credit questions". You may do the extra credit questions if you want. You may skip all the extra credit questions if you want. They're pretty long and not all of them are very interesting. But it is very important that you not put off doing the survey or not do the survey at all because you're intimidated by the extra credit questions.
It also contains a chance at winning a MONETARY REWARD at the bottom. You do not need to fill in all the extra credit questions to get the MONETARY REWARD, just make an honest stab at as much of the survey as you can.
Please make things easier for my computer and by extension me by reading all the instructions and by answering any text questions in the simplest and most obvious possible way. For example, if it asks you "What language do you speak?" please answer "English" instead of "I speak English" or "It's English" or "English since I live in Canada" or "English (US)" or anything else. This will help me sort responses quickly and easily. Likewise, if a question asks for a number, please answer with a number such as "4", rather than "four".
Last year there was some concern that the survey period was too short, or too uncertain. This year the survey will remain open until 23:59 PST December 31st 2013, so as long as you make time to take it sometime this year, you should be fine. Many people put it off last year and then forgot about it, so why not take it right now while you are reading this post?
Okay! Enough preliminaries! Time to take the...
***
***
Thanks to everyone who suggested questions and ideas for the 2013 Less Wrong Census/Survey. I regret I was unable to take all of your suggestions into account, because of some limitations in Google Docs, concern about survey length, and contradictions/duplications among suggestions. I think I got most of them in, and others can wait until next year.
By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.
620 comments
Comments sorted by top scores.
comment by timujin · 2013-11-22T15:43:44.502Z · LW(p) · GW(p)
Surveyed. Having everyone participate in a Prisoner's Dilemma is extremely ingenious.
Edit: Hey, guys, stop upvoting this! You have already falsified my answer to the survey's karma question by an order of magnitude!
Edit much later: The lesswrong community is now proved evil.
Edit much more later: Bwahaha, I expected that... Thanks for the karma and stuff...
comment by Viliam_Bur · 2013-11-22T09:11:27.212Z · LW(p) · GW(p)
Taken. It was relatively quick; the questions were easy. Thanks for improving the survey!
Two notes: The question about mental illness has no "None" answers; thus you cannot distinguish between people who had none, and people who didn't answer the question. The question about income did not make clear whether it's pre-tax or post-tax.
comment by Nominull · 2013-11-22T05:56:12.862Z · LW(p) · GW(p)
Are you planning to do any analysis on what traits are associated with defection? That could get ugly fast.
(I took the survey)
Replies from: Kinsei↑ comment by Kinsei · 2013-11-22T15:13:40.752Z · LW(p) · GW(p)
Well, remember that that's a zero-sum game within the community, since it's coming out of Yvain's pocket. I was going to reflexively cooperate, then I remembered that I was cooperating in transferring money from someone who was nice enough to create this survey to people who were only nice enough to answer.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2013-11-22T15:23:10.128Z · LW(p) · GW(p)
This was my initial thought, too. But then it occurred to me that Yvain wants to incentivize people to take the survey, and more people will be so incentivized if the reward is larger. Thus, I can acausally help Yvain achieve his goal by cooperating.
This will only influence people who know something about how the reward works before they decide to take the survey, but it still seemed worth it, so I cooperated.
Replies from: ThrustVectoring↑ comment by ThrustVectoring · 2013-11-22T15:43:57.604Z · LW(p) · GW(p)
Cooperating for reasons other than "I expect cooperating to make other people cooperate" gives people a reason to defect and make the total (and your expected) reward lower.
I've done the math elsewhere in this thread, and if at least a third of all respondents decide to cooperate no matter what, the optimal solution is to just defect and take their money.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2013-11-22T15:57:07.314Z · LW(p) · GW(p)
Cooperating for reasons other than "I expect cooperating to make other people cooperate" gives people a reason to defect and make the total (and your expected) reward lower.
Yes. And I did cooperate because I expected that it would make other people cooperate (acausally). I was explaining why I wanted more people to cooperate, even though it would mean that Yvain would lose more money.
I've done the math elsewhere in this thread, and if at least a third of all respondents decide to cooperate no matter what, the optimal solution is to just defect and take their money.
Good. Then a defector has been enticed to take the survey.
comment by gjm · 2013-11-22T02:54:03.712Z · LW(p) · GW(p)
I have taken the survey (and answered, to a good approximation, all the questions).
Note that if you take the survey and comment here immediately after, Yvain can probably identify which survey is yours. If this possibility troubles you, you may wish to delay. On the other hand, empirically it seems that earlier comments get more karma.
I conjecture that more than 5% of entrants will experience a substantial temptation to give SQUEAMISH OSSIFRAGE as their passphrase at the end. The purpose of this paragraph is to remark that (1) if you, the reader, are so tempted then that is evidence that I am right, and (2) if so then giving in to the temptation is probably a bad idea.
Replies from: Error, beoShaffer, lmm, Manfred↑ comment by Error · 2013-11-22T15:24:18.200Z · LW(p) · GW(p)
I conjecture that more than 5% of entrants will experience a substantial temptation to give SQUEAMISH OSSIFRAGE as their passphrase at the end.
I have taken the survey and done exactly this. I have also chosen COOPERATE. I figure doing so is cooperating in two ways; assuming a large number of people give SQUEAMISH OSSIFRAGE, Yvain will either discard those tickets or split the prize between them. If it is split, then the squeamish people are cooperating with each other by making it more likely that all of us will receive something, albeit a smaller amount. If the tickets are discarded, then we are cooperating with non-squeamish people. Gifting them, really; they are more likely to win a prize because we have opted out, and it will be marginally larger because I chose COOPERATE.
Of course this procedure is probably defection against Yvain, who will have to deal with his system being subverted. Oops.
Replies from: gjm↑ comment by gjm · 2013-11-22T15:28:52.304Z · LW(p) · GW(p)
My guess is that if lots of people give the same passphrase and one of them wins the draw, Yvain will simply hold another draw among the people who claim to have won.
Also, for the sums we're talking about I bet your utility is close enough to linear that the difference between (say) "certainly $5" and "$60 with probability 1/12" is very small. (Perhaps it feels larger on account of some cognitive bias, though introspecting I think the two really feel basically equivalent to me.)
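The arithmetic behind gjm's point can be checked directly. A minimal sketch — the log-utility model and the $10,000 wealth figure are illustrative assumptions of mine, not anything stated in the thread:

```python
import math

# gjm's comparison: "certainly $5" vs. "$60 with probability 1/12".
sure_thing, prize, p_win = 5.0, 60.0, 1 / 12

# The two options have the same expected dollar value:
print(prize * p_win)  # ~5.0

# With log utility over total wealth (a standard risk-averse model),
# the gap between the two options is negligible, because the stakes
# are tiny relative to wealth. $10,000 is an arbitrary illustrative figure.
wealth = 10_000
u_sure = math.log(wealth + sure_thing)
u_gamble = p_win * math.log(wealth + prize) + (1 - p_win) * math.log(wealth)
print(abs(u_sure - u_gamble))  # on the order of 1e-6
```

At stakes this small, utility is locally near-linear, so the certain $5 and the 1/12 shot at $60 come out effectively interchangeable — which is gjm's point.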
Replies from: Error↑ comment by Error · 2013-11-22T15:40:31.210Z · LW(p) · GW(p)
Hrm. Damn, that would be a sane solution and obviates both my mucking about and your own.
My net utility for winning is as close to zero as makes no difference; I make enough that it's unimportant, so the marginal value of the money is probably worth less than the time it would take to arrange the exchange. My utility for playing amusing games with systems of this sort is rather higher, however.
↑ comment by beoShaffer · 2013-11-22T06:11:47.093Z · LW(p) · GW(p)
I was tempted, but didn't, for the obvious reasons.
comment by roystgnr · 2013-11-22T06:02:25.200Z · LW(p) · GW(p)
I took the survey. My apologies for not doing so in every previous year I've been here, and for not finding time for the extra questions this year.
The race question should probably use checkboxes (2^N answers) rather than radio boxes (N answers). Biracial people aren't that uncommon.
Living "with family" is slightly ambiguous; I almost selected it instead of "with partner/spouse" since our kids are living with us, but I suspected that wasn't the intended meaning.
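roystgnr's checkbox point is simple combinatorics; a toy illustration (the option list here is hypothetical, not the survey's actual one):

```python
# N radio buttons admit only N possible answers; N checkboxes admit
# 2**N possible subsets, which is what multiracial respondents need.
options = ["White", "Black", "Asian", "Hispanic", "Other"]  # hypothetical list
n = len(options)

print(n)       # 5 single-choice (radio) answers
print(2 ** n)  # 32 multi-select (checkbox) combinations
```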
Replies from: tut, army1987↑ comment by tut · 2013-11-22T13:17:45.150Z · LW(p) · GW(p)
The race question should probably use checkboxes (2^N answers) rather than radio boxes (N answers)
Same with the diagnoses question. But I don't think that Yvain's software deals well with checkboxes. There seem to be many more radio buttons this year.
↑ comment by A1987dM (army1987) · 2013-11-25T12:55:48.118Z · LW(p) · GW(p)
Living "with family" is slightly ambiguous; I almost selected it instead of "with partner/spouse" since our kids are living with us, but I suspected that wasn't the intended meaning.
Yes. I, who proposed the question, had worded those answers “with parents (and/or siblings)” and “with partner/spouse (and/or children)” respectively.
comment by TheOtherDave · 2013-11-22T04:23:51.279Z · LW(p) · GW(p)
Surveyed. Left several questions blank.
Incidentally, while I answered the "akrasia" questions about mental illnesses, therapy, etc. as best I could, it's perhaps worth noting that most of my answers related to a period of my life after suffering a traumatic brain injury that significantly impaired my cognitive function, and therefore might be skewing the results... or maybe not, depending on what the questions were trying to get at.
comment by Said Achmiz (SaidAchmiz) · 2013-11-22T03:55:42.991Z · LW(p) · GW(p)
I took the survey.
However, this question confused me:
Time in Community: How long, in years, have you been in the Overcoming Bias/Less Wrong community? Enter periods less than 1 year in decimal, eg "0.5" for six months (hint: if you've been here since the start of the community in November 2007, put 6 years)
(emphasis mine)
The wording confused me; I almost put "6 years" instead of "6" because of it.
Also, I was sorely tempted to respond that I do not read instructions and am going to ruin everything, and then answer the rest of that section, including the test question, correctly. I successfully resisted that temptation, of which fact I am proud.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-22T04:28:10.727Z · LW(p) · GW(p)
Also, I was sorely tempted to respond that I do not read instructions and am going to ruin everything, and then answer the rest of that section, including the test question, correctly. I successfully resisted that temptation, of which fact I am proud.
This.
comment by [deleted] · 2013-11-22T03:19:01.161Z · LW(p) · GW(p)
Surveyed.
Nice to see the reactionaries got their bone thrown to them on the politics section.
comment by Vivificient · 2013-11-22T03:05:23.138Z · LW(p) · GW(p)
I have never posted on LW before, but this seems like a fine first time to do so.
I am really very curious to see the results of the real world cooperate/defect choice at the bottom of the test.
comment by Bayeslisk · 2013-11-22T04:02:50.344Z · LW(p) · GW(p)
Surveyed. Put a humorous pair of Lojban lujvo as a passphrase. I cooperated, knowing that regardless, I was unlikely to win no matter what strategy I pursued, and that priming myself by forcing myself to cooperate now would possibly make me unknowingly want to cooperate in the future to my benefit.
comment by Joshua_Blaine · 2013-11-22T13:48:25.914Z · LW(p) · GW(p)
Survey taken.
I found the Europe question awesome because I, incredibly luckily, had checked Europe's total population for a Fermi estimate just yesterday, so I got to feel like a high-accuracy, highly calibrated badass. Of course, that also means it's not good data for things that I learned more than ~1 day ago.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-11-25T13:10:14.398Z · LW(p) · GW(p)
Having seen this map a couple months ago hugely helped me with that question, BTW.
comment by Dr_Manhattan · 2013-11-22T03:33:52.700Z · LW(p) · GW(p)
dude, no "jewish" religious background? seems like a serious omission unless my priors are all screwed up.
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2013-11-22T03:37:33.779Z · LW(p) · GW(p)
I'm sorry. I'm not sure how that happened. Must have accidentally gotten deleted when I was adding in the Eastern Orthodox stuff. The question has been fixed and "Jewish" is now an option.
Replies from: Dr_Manhattan, SaidAchmiz↑ comment by Dr_Manhattan · 2013-11-22T11:32:43.148Z · LW(p) · GW(p)
Sorry I blew the conspiracy :-p
↑ comment by Said Achmiz (SaidAchmiz) · 2013-11-22T14:12:52.939Z · LW(p) · GW(p)
I assume having written in "Jewish" under "Other" will properly place my response in the correct bucket?
Replies from: tgb↑ comment by tgb · 2013-11-24T02:53:41.621Z · LW(p) · GW(p)
Probably not for the 'official' analysis that Yvain runs - it's an awful lot of work to go through and clean up hundreds (thousands?) of long survey results, so IIRC in past years all "Other" fill-in-the-blanks have been essentially discarded. However, the data (or that which was not asked to be kept private) is released after the fact, so you'd be welcome to do the analysis yourself.
comment by radical_negative_one · 2013-11-22T10:43:40.871Z · LW(p) · GW(p)
Survey completed in full. Begging for karma as per ancient custom.
I choose DEFECT because presumably the money is coming out of CFAR's pocket and I assume they can use the money better than whichever random person wins the raffle. If I win, I commit to requesting it be given as an anonymous donation to CFAR.
EDIT: Having been persuaded by Yvain and Vaniver, I reverse my position and intend to spend the prize on myself. Unfortunately I've already defected, and now it's too late to not be an asshole! Sorry about that. Only the slightly higher chance of winning can soothe my feelings of guilt.
Replies from: Yvain, Vaniver, Benquo, owencb↑ comment by Scott Alexander (Yvain) · 2013-11-22T22:06:13.417Z · LW(p) · GW(p)
The money is coming out of my pocket, it is not funging against any other charitable donations, and I am in favor of someone claiming the prize and using it to buy something nice that they like.
Replies from: Salivanth, radical_negative_one↑ comment by Salivanth · 2013-11-23T15:10:27.346Z · LW(p) · GW(p)
In that case, I pre-commit that if I win, I'll spend it on something leisure-related or some treat that I otherwise wouldn't be able to justify the money to purchase.
I co-operated; I'd already committed myself to co-operating on any Prisoner's Dilemma involving people I believed to be rational. I'd like to say it was easy, but I did have to think about it. However, I stuck to my guns and obeyed the original logic that got me to pre-commit in the first place.
If I assume other people are about as rational as me, then a substantial majority of people should think similarly to me. That means that if I decide that everyone else will co-operate and thus I can defect, there's a good chance other people will come to the same conclusion as well. The best way to go about it is to pre-commit to co-operation, and hope that other rational people will do the same.
Thanks for the chance to test my beliefs with actual stakes on the line :)
Replies from: alicey, FourFire, Calvin, Oscar_Cunningham, pgbh↑ comment by Calvin · 2013-11-30T00:33:56.348Z · LW(p) · GW(p)
I am not sure I follow.
If you predict that a majority of 'rational' people (say more than 50%) would pre-commit to cooperation, then you had a great opportunity to shaft them by defecting and running off with their money.
Personally, I decided to defect as to ensure that other people who also defected won't take advantage of me.
↑ comment by Oscar_Cunningham · 2013-11-29T22:16:24.320Z · LW(p) · GW(p)
That's the correct response when playing against rational players who are also trying to win, but if you actually look at the comments you'll see that most people are deciding to cooperate or defect for a variety of reasons. So I think in this case cooperation is (sadly) not the best move.
↑ comment by radical_negative_one · 2013-11-25T03:02:21.578Z · LW(p) · GW(p)
Well, I can't argue with that. I'm editing my previous comment to reverse my previous position.
↑ comment by Vaniver · 2013-11-22T17:04:46.155Z · LW(p) · GW(p)
presumably the money is coming out of CFAR's pocket
I think the money is coming out of Yvain's pocket, actually.
Replies from: DanArmak↑ comment by DanArmak · 2013-11-22T19:22:22.680Z · LW(p) · GW(p)
I cooperated, and I precommit to waiving my prize if I win.
Replies from: Vaniver↑ comment by Vaniver · 2013-11-23T02:41:31.581Z · LW(p) · GW(p)
I believe there is a strong argument for taking the prize, even if you don't need it, and not donating the prize, even if you would like to, so that people who are actually motivated by prizes do not feel they are obligated to waive or donate their prize. (A prime example of this is George Washington, one of the richest men in America at the time, who thought it was silly that he was getting a salary as president, and that it would be more public-minded of him to not collect his salary. He was convinced that if he did so, he might set a precedent, and this would prevent anyone but the independently wealthy from seeking the presidency.)
↑ comment by owencb · 2013-11-25T11:50:17.264Z · LW(p) · GW(p)
I defected, for similar reasons (without having read the comments, I just assumed that I'd be likely to prefer funds going to whoever volunteered to fund this over a random survey-taker, particularly a pool weighted towards survey-takers who defected). I'm afraid Yvain's answer here would not be enough to get me to switch.
If the rest of the $60 prize was to be burned -- effectively a wealth redistribution among capital holders -- I'd cooperate.
comment by lalaithion · 2013-11-22T03:50:33.474Z · LW(p) · GW(p)
I can't wait to see the Cooperate/Defect ratio. I, for one, chose to cooperate.
comment by jefftk (jkaufman) · 2013-11-22T13:50:52.882Z · LW(p) · GW(p)
Surveyed.
The IQ question should, like with the SAT/ACT, make it clear you should leave it blank if you've not been tested. And the same with the follow-up in calibration.
comment by Antti_Yli-Krekola · 2013-11-22T12:26:11.883Z · LW(p) · GW(p)
Survey taken.
comment by Kaj_Sotala · 2013-11-22T09:01:33.491Z · LW(p) · GW(p)
Surveyed.
The occupation thing could have been a checkbox, for those of us who are, e.g., both students and doing for-profit work.
The income question could have used a clarification of whether it was pre- or post-tax. (I assumed pre-.)
Replies from: Jayson_Virissimo, FourFire, SaidAchmiz, arundelo↑ comment by Jayson_Virissimo · 2013-11-23T01:26:57.440Z · LW(p) · GW(p)
Yeah, I'm both a student and am self-employed. I guessed pre-tax, but the number is going to be very different otherwise (for me anyway).
↑ comment by Said Achmiz (SaidAchmiz) · 2013-11-22T18:03:10.487Z · LW(p) · GW(p)
Seconded on "occupation should be checkboxes" thing.
↑ comment by arundelo · 2013-11-24T15:39:31.010Z · LW(p) · GW(p)
When I run into a radio button group that I have multiple answers for I select one randomly. (While taking this survey I literally flipped a coin.)
Edit: I'm not particularly arguing that occupation shouldn't have been checkboxes, but for something where most people will have a single answer, radio buttons do make the data a bit simpler to deal with.
comment by Shmi (shminux) · 2013-11-22T05:52:18.216Z · LW(p) · GW(p)
Done. I'm glad there was nothing about Schrodinger this time around.
comment by alexgieg · 2013-11-22T16:59:29.196Z · LW(p) · GW(p)
I've taken the survey.
By the way, nice game at the end. I didn't do the math but it seemed evident that defecting was the logical choice (and by reading the comments below I was right). I cooperated anyway, it just felt right. So, defectors, I probably just made one of you a few hundredths of a cent richer! Lucky you! ;-)
comment by dankane · 2013-11-22T08:31:16.027Z · LW(p) · GW(p)
Took the survey. Note: "average" is not a very precise term. For one, "average person" is probably a mediocre stand-in for "typical person" (since there isn't actually a commonly accepted way to take averages of people). Furthermore, questions like "How long, in approximate number of minutes, do you spend on Less Wrong in the average day?" are actually highly ambiguous. The arithmetic mean of times that I spend on Less Wrong over days is substantially different from the median time.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-11-24T18:41:09.453Z · LW(p) · GW(p)
I think it was supposed to mean arithmetic mean.
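The gap dankane describes is easy to see with toy numbers (the data below are hypothetical, not from the survey):

```python
import statistics

# A hypothetical week of minutes spent on the site: mostly short
# visits plus one long binge -- a typical heavy-tailed usage pattern.
minutes_per_day = [5, 5, 10, 10, 15, 20, 180]

print(sum(minutes_per_day) / len(minutes_per_day))  # 35.0 (arithmetic mean)
print(statistics.median(minutes_per_day))           # 10   (median)
```

So "time spent on an average day" can honestly be answered with very different numbers depending on which statistic the respondent has in mind.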
comment by Josh You (scrafty) · 2013-11-22T17:51:06.478Z · LW(p) · GW(p)
Survey taken. Defected since I'm neutral as to whether the money goes to Yvain or a random survey-taker, but would prefer the money going to me over either of those two.
Replies from: christopherj↑ comment by christopherj · 2013-12-03T01:09:25.745Z · LW(p) · GW(p)
It seems that the fate of the prize money is having a huge effect on people's choice to cooperate or defect. Yvain could modify the numbers by some potentially large percentage by offering either to donate the remainder of the prize to a charity, or to do something near-equivalent to burning it.
I chose to cooperate because the good feelings are worth more to me than a fraction of a cent, and I expect people to prefer cooperation even if it is the anti-game theory response.
comment by DanArmak · 2013-11-22T09:50:22.313Z · LW(p) · GW(p)
Notes taken while I answered.
What is your family's religious background, as of the last time your family practiced a religion?
We're Ashkenazi Jews, but AFAIK the last time any ancestor of mine practiced a religion was in my great-grandparents' generation. (And then only because I knew only one of them personally, so it's reasonable to assume at least one of the others could have been religious.) I get that every human is descended from religious ones, but conflating this datapoint with someone whose actual parents practiced a religion once seems wrong.
Probability
For some of these my confidence was so low that I didn't answer. For some questions, there are also semantic quibbles that would affect the answer:
- Supernatural: AFAIK there is no agreed-on definition of "supernatural" events other than "physically impossible" ones which of course have a probability of 0 (epsilon). OTOH, if you specify "events that the average human observer would use the word 'supernatural' to describe", the probability is very high.
- Anti-Agathics: what counts as reaching an age of 1000 years? Humans with a few patched organs and genes? Cyborgs? Uploads with 1000 subjective years of experience?
- Simulation: this is complicated by ontological differences: whether, when universe A is simulated in universe B, this somehow contributes to B's "realness" measure, or actually creates B. Is existence of a universe a binary predicate? I answered as if it is.
Type of global catastrophic risk: although I chose the most probable, there wasn't a large difference in estimated probability for the top few leading dangers.
about how often do you read or hear about another plausible-seeming technique
At first I thought "every few days". But then I realized these techniques almost never work out or are unsupported by evidence, and so it would be wrong to call them plausible-seeming. So I recalibrated and answered much more rarely.
Then I saw that the next questions asked how often I tried the technique and how often it actually worked. But I already chose not to try them most of the time because I expect not to succeed. So I let my previous answer stand. I hope this was as intended.
CFAR bonus questions:
You are a certain kind of person
Are these questions claiming that I, DanArmak, am this kind of person who can change; or that everyone can change? The answers would be very different. I assumed the latter, but it would be nice to have confirmation.
Other nitpicks: a certain kind on which dimension? Some aspects of personality are much harder to change than others.
What is the measure of "true" change? By the means available to us today, we can't change into truly nonhuman intelligences, so does that mean our "kind" cannot be changed? And the answers to the questions will change over time as technology creates new more effective interventions.
And: does "basic things" mean "fundamental things" or "minor insignificant things"? Normally I would assume "fundamental things", but then it seems identical to the previous question.
On a personal note, this set of questions struck me as incompatible after answering the previous sets. They seem to deliberately probe my irrational biases and cached beliefs, and I felt I couldn't answer them while I was deliberately thinking reflectively and asking myself why I believed the answers I was giving.
How would you describe your opinion on immigration?
The politics of immigration in Israel are totally different from those of the US (and I expect this holds for many other countries too in their different ways). I didn't answer because I was afraid of biasing the poll, and it would have been nice to get more guidance in the question.
Replies from: Yvain, EGI, SaidAchmiz, Jiro, ThrustVectoring, army1987↑ comment by Scott Alexander (Yvain) · 2013-11-22T22:03:57.707Z · LW(p) · GW(p)
I endorse you still putting your background as Ashkenazi Jewish, as this gives interesting ethnic information beyond that in the race question.
Replies from: army1987, DanArmak↑ comment by A1987dM (army1987) · 2013-11-25T13:15:19.236Z · LW(p) · GW(p)
Maybe you could have split “White (non-Hispanic)” into “White (Jewish)” and “White (other)”.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-12-15T16:49:42.143Z · LW(p) · GW(p)
(Then again, it would be unclear which one a Sephardi Jew from Argentina currently living in the US would pick.)
↑ comment by EGI · 2013-11-25T12:25:05.714Z · LW(p) · GW(p)
Supernatural: AFAIK there is no agreed-on definition of "supernatural" events other than "physically impossible" ones which of course have a probability of 0 (epsilon). OTOH, if you specify "events that the average human observer would use the word 'supernatural' to describe", the probability is very high.
Somewhere on Less Wrong I have seen supernatural defined as "involving ontologically basic mental entities". This is IMHO the best definition of supernatural I have ever seen and should probably be included in this question in the future. Other definitions do not really make sense with this question, as you already pointed out.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-11-26T04:29:42.250Z · LW(p) · GW(p)
I don't think the concept of "ontologically basic" is coherent.
Replies from: hyporational, EGI↑ comment by hyporational · 2013-11-26T13:17:01.706Z · LW(p) · GW(p)
I personally think it's a strawman, but I don't see why it's necessarily incoherent for people who reject reductionism.
Can you expand?
Replies from: EGI↑ comment by EGI · 2013-11-26T22:20:23.678Z · LW(p) · GW(p)
Here I understand "ontologically basic" to mean "having no Kolmogorov complexity / not amenable to reductionist explanations / does not possess an internal mechanism". Why do you think this is not coherent?
Replies from: Eugine_Nier, JoshuaZ↑ comment by Eugine_Nier · 2013-11-27T21:22:09.606Z · LW(p) · GW(p)
Assuming the standard model of quantum mechanics is more or less correct, which entities are ontologically basic?
1) Leptons and quarks
2) The quantum fields
3) The universal wave function
4) The Hilbert space where said wave function lives
5) The mathematics used to describe the wave function
Replies from: TheAncientGeek, EGI↑ comment by TheAncientGeek · 2014-06-21T13:27:54.046Z · LW(p) · GW(p)
Interesting, but this does not exactly mean the concrete is incoherent, more that QM isn't playing ball.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-06-21T16:17:05.879Z · LW(p) · GW(p)
I could do this with any other theory of physics just as easily, e.g., in Newtonian mechanics are particles ontologically basic, or are points in the universal phase space?
Edit: Also, I never said the concrete was incoherent, I said the concept of "ontologically basic" was incoherent.
Replies from: None, TheAncientGeek↑ comment by [deleted] · 2014-06-21T18:03:36.296Z · LW(p) · GW(p)
You're arguing issues of cartography, not geography.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-06-21T19:13:56.909Z · LW(p) · GW(p)
No, I'm saying that the people asking whether something is "ontologically basic" are arguing cartography. Also it's funny how they only ask the question of things they don't believe exist.
Replies from: None↑ comment by TheAncientGeek · 2014-06-21T18:14:29.616Z · LW(p) · GW(p)
I don't think that is clear-cut, because space and points have often been denied any reality.
"Concrete" was my tablet's version of "concept".
↑ comment by EGI · 2013-11-28T08:58:15.696Z · LW(p) · GW(p)
Before I knew of Hilbert space and the universal wave function, I would have said 1, now I am somewhat confused about that.
Replies from: pragmatist↑ comment by pragmatist · 2013-11-28T10:09:26.113Z · LW(p) · GW(p)
There are good reasons not to consider particles ontologically basic. For instance, particle number is not relativistically invariant in quantum field theory. What looks like a vacuum to an inertial observer will not look like a vacuum to an accelerating observer (see here). If the existence of particles depends on something as trivial as an observer's state of motion, it is hard to maintain that they are the basic constituents of the universe.
Replies from: EGI↑ comment by JoshuaZ · 2013-11-26T22:27:36.637Z · LW(p) · GW(p)
So, I understand what it would mean for something to not be amenable to reductionist explanations and maybe what it would mean to not have internal mechanisms. What does it mean to not have Kolmogorov complexity? Do you mean that the entity is capable of engaging in non-computable computations? That doesn't seem like a standard part of the supernatural notion, especially because many common supernatural entities aren't any smarter than humans.
Replies from: EGI↑ comment by EGI · 2013-11-28T09:10:15.608Z · LW(p) · GW(p)
What does it mean to not have Kolmogorov complexity?
What I meant is that (apart from positional information) you can only give one bit of information about the thing in question: it is there or not. There is no internal complexity to be described. Perhaps I overstretched the meaning of Kolmogorov complexity slightly. Sorry for that.
Do you mean that the entity is capable of engaging in non-computable computations?
No.
Replies from: pragmatist, JoshuaZ↑ comment by pragmatist · 2013-11-28T10:01:36.908Z · LW(p) · GW(p)
What I meant is that (apart from positional information) you can only give one bit of information about the thing in question: it is there or not. There is no internal complexity to be described. Perhaps I overstretched the meaning of Kolmogorov complexity slightly. Sorry for that.
There's a quite popular view hereabouts according to which the universal wave function is ontologically basic. If that view is correct, or even possibly correct, your construal of "ontologically basic" cannot be, since wave functions do have internal complexity.
Replies from: EGI↑ comment by JoshuaZ · 2013-11-28T20:28:21.592Z · LW(p) · GW(p)
I don't think that's a slight overstretch: how many bits you can give about something doesn't have much to do with its K-complexity. Moreover, I'm not sure what it means to say that you can only talk about something being somewhere and its existence. How then do you distinguish it from other objects?
↑ comment by Said Achmiz (SaidAchmiz) · 2013-11-22T15:17:10.593Z · LW(p) · GW(p)
We're Ashkenazi Jews, but AFAIK the last time any ancestor of mine practiced a religion was in my great-grandparents' generation. (And then only because I knew only one of them personally, so it's reasonable to assume at least one of the others could have been religious.) I get that every human is descended from religious ones, but conflating this datapoint with someone whose actual parents once practiced a religion seems wrong.
Likewise here, the last time my family practiced a religion was when my grandparents were children (my family is also Ashkenazi Jewish). I wasn't raised religious at all, but there was certainly a good deal of cultural effect.
↑ comment by Jiro · 2013-11-22T15:21:16.380Z · LW(p) · GW(p)
OTOH, if you specify "events that the average human observer would use the word 'supernatural' to describe", the probability is very high.
How about "events that the average human observer would use the word 'supernatural' to describe, even given some knowledge about their nature (regardless of whether that knowledge would be available to the average human observer)"?
So a ghost that is a spirit is supernatural while a ghost that is a hallucination is not, even if an average human observer would be unable to tell them apart.
Replies from: fubarobfusco, DanArmak↑ comment by fubarobfusco · 2013-11-22T16:41:28.164Z · LW(p) · GW(p)
How about messages from outside the simulation? The simulation itself may be running in an orderly material universe (we could call this "exonatural"), and may run according to fixed orderly rules most of the time ("usually endonatural"), but still allow the simulators to tweak it. As an analogy, consider what happens in Conway's Life when you pause it and draw or erase a glider.
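The Conway's Life analogy can be made concrete with a minimal sketch (standard B3/S23 rules assumed): a glider evolves lawfully for four generations, ending up shifted one cell down-right, and then the "simulator" erases it, a change nothing inside the rules accounts for.

```ruby
require 'set'

# Count each live cell's contribution to its eight neighbours, then apply
# the Life rules: a cell is alive next tick with exactly 3 live neighbours,
# or 2 if it is already alive.
def step(cells)
  counts = Hash.new(0)
  cells.each do |x, y|
    (-1..1).each do |dx|
      (-1..1).each do |dy|
        counts[[x + dx, y + dy]] += 1 unless dx.zero? && dy.zero?
      end
    end
  end
  counts.select { |cell, n| n == 3 || (n == 2 && cells.include?(cell)) }
        .keys.to_set
end

start = Set[[1, 0], [2, 1], [0, 2], [1, 2], [2, 2]] # the standard glider
world = start
4.times { world = step(world) }

# Four orderly generations later: the same glider, translated down-right.
moved = start.map { |x, y| [x + 1, y + 1] }.to_set
puts world == moved # true

# The simulators' intervention is just as easy -- and lawless from inside:
world = Set[]
```

From within the grid, the erasure is indistinguishable from a miracle; from outside, it's one line of perfectly ordinary code.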
↑ comment by DanArmak · 2013-11-22T19:10:41.953Z · LW(p) · GW(p)
We can discuss it and maybe agree on an interesting meaning that we could ask people about. The problem is that I don't think all participants in this poll interpreted the question in the same way.
As for your example, it doesn't illuminate a general rule for me. If supernatural things can actually happen, what is the definition of "supernatural"?
↑ comment by ThrustVectoring · 2013-11-27T21:34:42.766Z · LW(p) · GW(p)
What is your family's religious background, as of the last time your family practiced a religion?
We're Ashkenazi Jews, but AFAIK the last time any ancestor of mine practiced a religion was in my great-grandparents' generation.
I just realized that I parsed the quoted question wrong in the survey - I assumed that it meant the last time your immediate family practiced religion, not the most recent ancestral practice of religion.
↑ comment by A1987dM (army1987) · 2013-11-25T12:46:16.400Z · LW(p) · GW(p)
The politics of immigration in Israel are totally different from those of the US (and I expect this holds for many other countries too in their different ways). I didn't answer because I was afraid of biasing the poll, and it would have been nice to get more guidance in the question.
I answered about the politics of immigration in my country, for consistency with the other questions.
comment by JoachimSchipper · 2013-11-22T07:44:09.423Z · LW(p) · GW(p)
Surveyed.
Also, spoiler: the reward is too small and unlikely for me to bother thinking through the ethics of defecting; in particular, I'm fairly insensitive to the multiplier for defecting at this price point. (Morality through indecisiveness?)
comment by Antisuji · 2013-11-22T06:16:43.929Z · LW(p) · GW(p)
I took the survey. Thanks for putting this together, Yvain!
I chose DEFECT: CFAR/MIRI can keep their money. Furthermore, if I win I precommit to refusing payment and donating $120 * (1 - X) to MIRI, where X is the proportion of people who answer COOPERATE. I humbly suggest that others do the same.
comment by Dreaded_Anomaly · 2013-11-22T04:04:37.902Z · LW(p) · GW(p)
Taken, answering all of the questions I was capable of answering. I will be very interested to see the results on some of the new questions. (The shifts on existing questions could also be interesting, but I don't expect much to change.)
comment by JakeArgent · 2013-11-22T16:47:59.588Z · LW(p) · GW(p)
First survey and comment, and I liked it too! (Including the bonuses, especially the reward question :)
comment by Emily · 2013-11-22T11:23:30.027Z · LW(p) · GW(p)
I took the survey. Also just realised that my choice of pass phrase was really silly... I was trying to make it easy for myself to remember what the second word would be, but failed to observe that the first word could become public and therefore it would be sensible to choose something that wouldn't be obvious to just about anybody from knowing the first word! Ah well, in the unlikely event that I win the draw, whoever gets in first can have the prize, I guess...
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2013-11-25T23:33:06.394Z · LW(p) · GW(p)
Um, I might be being stupid but I think you can just announce your pair of words now in order to "lay claim to them".
Replies from: Emily↑ comment by Emily · 2013-11-26T16:48:30.257Z · LW(p) · GW(p)
The only reason I wouldn't want to do that would be if Yvain is going to publish the pass phrases along with the data, for the curious. Then it would be publicly obvious which were my responses (not that I said anything I particularly care about keeping private on there, I suppose...).
comment by So8res · 2013-11-22T16:39:40.899Z · LW(p) · GW(p)
Survey taken, answered all questions I could. This excluded the IQ question set. I've never taken an IQ test. I've never been offered an IQ test, nor considered taking one. Is that strange? The survey seemed pretty confident that I'd have measured my IQ.
Replies from: Vaniver, Lumifer↑ comment by Vaniver · 2013-11-22T17:47:44.440Z · LW(p) · GW(p)
A previous incarnation of the test just asked what your IQ was. We got both people who had taken official tests responding, and people who were just estimating their IQ. The second group is really noisy, and made it difficult to meaningfully talk about the IQ of LWers.
I suggested the current question as a way to get high-quality information out of survey-takers, but I also wanted a question where people estimated their IQ (maybe as two questions, for the lower and upper bound of a 50% CI) so that we could still get the low-quality information.
Replies from: Alexander↑ comment by Alexander · 2013-11-24T00:44:24.354Z · LW(p) · GW(p)
I consider giqtest.com also professional/scientific, despite being taken online.
(I understand the general aversion towards online tests, and don't mind the current wording.)
Respondents with high IQ seem more likely to have taken official tests, though; doesn't this overestimate LW's mean?
Replies from: Vaniver↑ comment by Vaniver · 2013-11-24T05:59:28.177Z · LW(p) · GW(p)
Respondents with high IQ seem more likely to have taken official tests, though; doesn't this overestimate LW's mean?
Any self-report will overestimate LW's mean, even if there is no disproportionality among test-takers. I've taken this into account with various assumed population means in the analysis of previous surveys, but there's fudging involved (if the average IQ of responders is 130, is it really sensible to expect non-responders have an average IQ of 100?).
↑ comment by Lumifer · 2013-11-22T17:07:34.828Z · LW(p) · GW(p)
I've never taken an IQ test either.
However in the US the usual standardized tests (SAT, GRE, GMAT, LSAT, MCAT) are highly correlated with IQ and going by percentiles you can get a reasonable IQ estimate easily enough.
Replies from: Vaniver, JQuinton↑ comment by Vaniver · 2013-11-22T17:48:29.317Z · LW(p) · GW(p)
This is no longer true for high IQs, and most of the conversion tables are only for the old SAT. A 1600 just ain't what it used to be.
Replies from: Lumifer↑ comment by Lumifer · 2013-11-22T18:38:30.244Z · LW(p) · GW(p)
Measuring high IQs is difficult in general, but a rough estimate on the basis of, say, SAT scores is better than no data at all.
Replies from: Vaniver↑ comment by Vaniver · 2013-11-23T02:36:42.698Z · LW(p) · GW(p)
My point is that the renorming in the 1990s (if I remember correctly) chopped off the right tail of the SAT distribution. It used to be that about 1 in 4000 people got SATs of 1600, and so that implied a commensurately high IQ, but now about 1 in 300 do (only looking at M+CR), so the highest IQ level that the SAT is sensitive to has dropped significantly.
If I remember correctly from the last year's survey, the mean SAT score of LWers who reported it implied that the mean LWer was about 98th percentile, which seemed about right to me (and suggests that the SAT is a decent tool at discriminating between most LWers).
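The rarity-to-IQ reasoning in the parent comments can be sketched numerically. This is a toy conversion assuming scores are normally distributed (mean 100, SD 15) and that score rarity maps directly onto IQ rarity -- a rough assumption, since the SAT/IQ correlation is imperfect, especially at the tails.

```ruby
def phi(z) # standard normal CDF, via the erf in Ruby's Math module
  0.5 * (1 + Math.erf(z / Math.sqrt(2)))
end

def z_for_tail(p) # find z such that P(Z > z) = p, by bisection
  lo, hi = 0.0, 10.0
  60.times do
    mid = (lo + hi) / 2
    if (1 - phi(mid)) > p
      lo = mid # tail still too fat: need a larger z
    else
      hi = mid
    end
  end
  (lo + hi) / 2
end

def implied_iq(rarity) # rarity = fraction of test-takers at or above the score
  100 + 15 * z_for_tail(rarity)
end

puts implied_iq(1.0 / 4000).round # old-SAT 1600: roughly 152
puts implied_iq(1.0 / 300).round  # current 1600 (M+CR): roughly 141
```

This shows the size of the effect being discussed: the same perfect score that once implied an IQ around 152 now tops out around 141.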
↑ comment by JQuinton · 2013-11-22T20:30:38.900Z · LW(p) · GW(p)
I haven't taken any official IQ test nor have I taken any standardized tests. The only sort of official intelligence test I took was the ASVAB, though I forgot what my score was. I did score high enough to take the DLAB though (I was originally tasked to be a Turkish linguist in the Air Force).
comment by tzok · 2013-11-22T12:27:59.469Z · LW(p) · GW(p)
I have taken the survey, also the extra part. Although I was never tested for IQ in a professional way, since it was a question in the non-extra part I assume that most LW readers were. Interesting observation (if true). Maybe it is a nationally dependent thing? This ad-hoc hypothesis can be validated by the survey if only enough people from enough countries take it.
comment by David_Gerard · 2013-11-22T08:37:55.276Z · LW(p) · GW(p)
taken!
comment by [deleted] · 2013-11-22T08:15:15.071Z · LW(p) · GW(p)
Surveyed. Is it okay to answer committed theist/pastafarian? :)
comment by hyporational · 2013-11-22T07:48:59.897Z · LW(p) · GW(p)
Surveyed. Thank you.
comment by Watercressed · 2013-11-22T06:59:30.498Z · LW(p) · GW(p)
Survey Taken
comment by Kawoomba · 2013-11-22T15:29:15.271Z · LW(p) · GW(p)
It is done.
Short comments:
(Calibration Question) Without checking a source, please give your best guess for the current population of Europe in millions (according to Wikipedia's "Europe" article)
This is ambiguous! While strictly speaking "Europe" defaults to "the continent of Europe" spanning to the Ural, in common parlance "Europe" is used interchangeably with "European Union", similar to how you interpret "American student" in your very survey, a totum pro parte. Stahp with the totums pro parte for calibration questions, I beseech thee! (Of course I wouldn't have minded had I not given the correct answer for the European Union...)
(Akrasia: Elsewhat 1) Have you ever other things to improve your mental functioning?
Has Anyone Really Been Far Even as Decided to Use Even Go Want to do Look More Like?
(Human Biodiversity) (...) are in fact scientiically justified
comment by SteveReilly · 2013-11-22T14:59:27.813Z · LW(p) · GW(p)
I took the survey as well
comment by Paul Crowley (ciphergoth) · 2013-11-22T12:32:28.151Z · LW(p) · GW(p)
I surveyed.
COMPLAIN! I have one partner but I'm definitely not monogamous. Sorry :)
Replies from: Emily
comment by JackV · 2013-11-22T09:57:30.004Z · LW(p) · GW(p)
I took the survey.
I think most of my answers were the same as last year, although I think my estimates have improved a little, and my hours of internet have gone down, both of which I like.
Many of the questions are considerably cleaned up -- much thanks to Yvain and everyone else who helped. It's very good it has sensible responses for gender. And IIRC, the "family's religious background" was tidied up a bit. I wonder if anyone can answer "atheist" as religious background? I hesitated over the response, since the last religious observance I know of for sure was G being brought up catholic, but I honestly think living in a protestant (or at least, anglican) culture is a bigger influence on my parents cultural background, so I answered like that.
I have no idea what's going to happen in the raffle. I answered "cooperate" because I want to encourage cooperating in as many situations as possible, and don't really care about a slightly-increased chance of < $60.
Replies from: VAuroch↑ comment by VAuroch · 2013-11-22T22:39:04.928Z · LW(p) · GW(p)
I could and did answer atheist as background. My parents are both inspoken* nonbelievers, though they attended a Unitarian Universalist church for two years when their kids (me included) were young, for the express purpose (explained well after the fact) of exposing us to religion and allowing us to make our own choices.
*The opposite of outspoken.
comment by luminosity · 2013-11-22T08:07:45.389Z · LW(p) · GW(p)
Taken the survey. Thanks for doing this, Yvain.
comment by Adele_L · 2013-11-22T05:40:47.048Z · LW(p) · GW(p)
Took the survey.
I'm interested in seeing what sort of interventions ended up working for people with akrasia.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-11-25T12:57:26.106Z · LW(p) · GW(p)
I'm interested in seeing what sort of interventions ended up working for people with akrasia.
Have you seen the akrasia tactics review threads?
comment by NancyLebovitz · 2013-11-22T18:32:34.088Z · LW(p) · GW(p)
I took the survey. Thanks for running it.
Should Muslim be divided into types?
I'm not sure what supernatural means for the more arcane simulation possibilities. I consider it likely that if we're simulated, it's from a universe with different physics.
I would rather see checkboxes for global catastrophe, since it's hard to judge likelihood and I think the more interesting question is whether a person thinks any global catastrophe is likely.
Would it be worth having a text box for questions people would like to see on a future survey? I'm guessing that you wouldn't need to tabulate it -- if you posted all the questions, I bet people here would identify the similar questions and sort them into topics.
Replies from: Yvain, TheOtherDave↑ comment by Scott Alexander (Yvain) · 2013-11-22T22:00:57.017Z · LW(p) · GW(p)
So far not one of several hundred people has identified as Muslim, so I think finer gradations there would be overkill.
I can't do checkboxes.
I ask every year what questions people want in a future survey on this site. That way the good ones can get updated and people can hold discussions about them.
↑ comment by TheOtherDave · 2013-11-22T18:37:54.477Z · LW(p) · GW(p)
I consider it likely that if we're simulated, it's from a universe with different physics.
I'm curious: why? (Not necessarily disagreeing, just wondering.)
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-11-22T19:36:20.246Z · LW(p) · GW(p)
Because the simulations we make have simpler physics than we do.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-22T20:02:31.081Z · LW(p) · GW(p)
Sensible.
On the face of it, I would expect that if a physics P1 is the result of some agent A that lives under some other physics P2 constructing a simplified physics for simulation purposes, it would have characteristically different properties from a physics P3 that is not the result of such a process. Put differently... if our physics is P1, it should be more likely to be easily understood by A's cognitive processes than if it's P3.
That said, I don't understand the general constraints on either physicses or cognitive processes well enough to even begin to theorize about what specific properties I would expect to differentially find in P1 and P3.
Still, I wonder whether someone a lot smarter and better informed than me could use that as a starting point for trying to answer that question.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-11-23T19:33:12.740Z · LW(p) · GW(p)
I agree that it seems more likely that if we're in a simulation, it's got a simplified version of our simulator's physics rather than some drastically different physics. On the other hand, this is very much guesswork.
And on yet another hand, if you assume that our simulators have huge amounts of computational power, they might be exploring universes with possible laws of physics thoroughly enough that the proportion of simulations with simplifications of the home physics isn't very high.
I'm faintly horrified at the idea of physics which is much more complicated than ours-- ours is complicated enough.
comment by [deleted] · 2013-11-22T14:32:00.603Z · LW(p) · GW(p)
I took the survey.
comment by Ben Pace (Benito) · 2013-11-22T08:24:51.571Z · LW(p) · GW(p)
Answered them all as best I could :^)
I left the 'Singularity' question blank because it was ill-defined - I treated it like a question specifically on the IE, but anyhow, my priors on that are totally wacky. I expect it to happen, but I have no knowledge of the time at all really.
comment by beoShaffer · 2013-11-22T06:08:18.635Z · LW(p) · GW(p)
Took the survey and cooperated.
comment by BenLowell · 2013-11-22T11:27:09.968Z · LW(p) · GW(p)
If possible, I'm interested in how unique the passwords were.
Replies from: DanArmak, Nornagest, ChrisHallquist, handoflixue, christopherj, ChrisHallquist↑ comment by DanArmak · 2013-11-22T19:20:01.521Z · LW(p) · GW(p)
I used a random password generator (set to 'readable', because the survey asked for 'words' or some such). Why would you do anything else?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2013-11-23T09:05:48.197Z · LW(p) · GW(p)
Hassle? I couldn't be arsed to make a record anywhere, so I just used some "name of first pet" type information I wouldn't forget.
↑ comment by Nornagest · 2013-11-22T19:49:50.952Z · LW(p) · GW(p)
I was sorely tempted to use "squeamish ossifrage". But with more than a thousand regulars, many of whom are interested in computing trivia, I figure it's likely that someone else thought that would be clever.
↑ comment by ChrisHallquist · 2013-11-23T05:43:52.245Z · LW(p) · GW(p)
I'm pretty sure mine was unique - I went into my Ruby interpreter, loaded the dictionary I'd been using for class projects and used "sample" twice.
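The two-word sampling approach is a one-liner; this sketch uses a small stand-in word list in place of the class-project dictionary the commenter loaded (and `sample(2)`, which draws two distinct words, rather than calling `sample` twice):

```ruby
# Stand-in dictionary; any decent word list makes collisions unlikely.
DICTIONARY = %w[squeamish ossifrage lurker calibration akrasia conspiracy]

# Draw two distinct random words and join them into a passphrase.
passphrase = DICTIONARY.sample(2).join(" ")
puts passphrase
```

With a real dictionary of tens of thousands of words, two random words give well over a billion combinations, which is plenty for a survey raffle.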
↑ comment by handoflixue · 2013-11-22T17:44:34.232Z · LW(p) · GW(p)
Second that :)
↑ comment by christopherj · 2013-12-03T01:20:53.638Z · LW(p) · GW(p)
I used a random number generator for mine. Not so much because I think someone else could claim my prize, but on general principles that it is the correct choice.
comment by CAE_Jones · 2013-11-22T08:01:43.011Z · LW(p) · GW(p)
I meant to skip some of the extra credit questions (the ones about the changeability of personality in particular), but wound up stuck answering one of them due to a software glitch on my computer (I couldn't uncheck it entirely, but at least tried to keep it from being noise).
comment by NoisyEmpire · 2013-11-22T17:47:13.065Z · LW(p) · GW(p)
Surveyed.
comment by JQuinton · 2013-11-22T16:47:49.137Z · LW(p) · GW(p)
I took the survey. I didn't really know how to answer the "relationship" part since I'm not really poly right now, but have a number of "friends with benefits". So I answered it zero.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2013-11-23T09:02:57.350Z · LW(p) · GW(p)
That seems like the "right" answer to me. I didn't count FWB when answering.
comment by Tyrrell_McAllister · 2013-11-22T15:27:28.736Z · LW(p) · GW(p)
Suggestion: If you are upvoting people who took the survey, sort comments by "New" first so that late takers get their upvote.
comment by PedroCarvalho · 2013-11-22T16:35:23.777Z · LW(p) · GW(p)
Cool. Survey taken.
comment by ailyr · 2013-11-22T15:55:23.029Z · LW(p) · GW(p)
Surveyed.
Minor nitpick: I think it is better to clarify definition of Europe in calibration question. Because if you go to Wikipedia to check which definition of Europe survey authors had in mind, you will immediately see Europe population on the same page.
Replies from: Mestroyer
comment by [deleted] · 2013-11-22T07:45:57.569Z · LW(p) · GW(p)
I finished and had fun even if parts of it made me feel dumb (I never thought about that calibration question before and am pretty sure I got it wildly wrong). The monetary reward at the end looks interesting but even in the unlikely case that I won I might have too much trouble claiming any kind of prize right now...
comment by Irgy · 2013-11-22T05:24:40.294Z · LW(p) · GW(p)
I found myself genuinely confused by the question "You are a certain kind of person, and there's not much that can be done either way to really change that" - not by the general vagueness of the statement (which I assume is all part of the fun) but by a very specific issue, the word "you". Is it "you" as in me? Or "you" as in "one", i.e. a hypothetical person essentially referring to everyone? I interpreted it the first way, then changed my mind after reading the subsequent questions, which seemed to be more clearly using it the second way.
Replies from: Unnamed, selylindi↑ comment by Unnamed · 2013-11-22T08:58:17.072Z · LW(p) · GW(p)
(Dan from CFAR here) - That question (and the 3 similar ones) came from a standard psychology scale. I think the question is intentionally ambiguous between "you in particular" and "people in general" - the longer version of the scale includes some questions that are explicitly about each, and some others that are vaguely in the middle. They're meant to capture people's relatively intuitive impressions.
You can find more information about the questions by googling, although (as with the calibration question) it's better if that information doesn't show up in the recent comments feed, since scales like this one are often less valid measures for people who know what they're intended to measure.
comment by Benquo · 2013-11-22T14:25:02.698Z · LW(p) · GW(p)
I took the survey.
I was within a factor of 2 on the Europe question, which is pretty good, I think.
As a general rule I "cooperate" on prisoner's dilemmas where the prize is of a trivial size, regardless of my opinion about the incentives and people involved. An interesting experiment might be to take people familiar with the prisoner's dilemma, flip the "cooperate" and "defect" incentives, and see if it makes a difference.
comment by Tuxedage · 2013-11-23T21:35:24.288Z · LW(p) · GW(p)
I have taken the survey, as I have done for the last two years! Free karma now?
Also, I chose to cooperate rather than defect because even though the money technically would stay within the community, I am willing to pay a very small amount of expected value in order to ensure that LW has a reputation for cooperation. I don't expect to lose more than a few cents worth of expected value, since I expect 1000+ people to do the survey.
comment by MichaelAnissimov · 2013-11-22T18:22:40.024Z · LW(p) · GW(p)
done
comment by Steven_Bukal · 2013-11-22T15:26:29.489Z · LW(p) · GW(p)
Did the survey. Thanks, Yvain.
comment by Tyrrell_McAllister · 2013-11-22T15:16:11.425Z · LW(p) · GW(p)
Survey taken. Nearly all questions answered, except for the Akrasia ones, since I haven't implemented many formal practices to fight akrasia.
comment by polymathwannabe · 2013-11-22T17:56:24.456Z · LW(p) · GW(p)
Answered the entire survey (except questions for U.S. residents). I can't see why Newcomb's problem is a problem. Getting $1,001,000 by two-boxing is an outcome that just never happens, given Omega's perfect prediction abilities. You should one-box.
Replies from: polymathwannabe, JQuinton, DanArmak, jdgalt↑ comment by polymathwannabe · 2013-11-22T18:29:26.293Z · LW(p) · GW(p)
What's the method for submitting proposals for next surveys?
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-12-14T20:33:20.891Z · LW(p) · GW(p)
Yvain usually posts a post in Discussion about a month before the survey asking for such proposals.
↑ comment by JQuinton · 2013-11-22T22:49:39.273Z · LW(p) · GW(p)
I asked a question about this in a previous open thread but no one responded.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2013-11-22T23:01:06.516Z · LW(p) · GW(p)
The conditions of the problem state that Omega is a failproof predictor. If that's the case, the paradox vanishes. Attempts to second-guess Omega's choices only make sense if there's a reason to doubt Omega's powers.
↑ comment by DanArmak · 2013-11-22T19:14:36.111Z · LW(p) · GW(p)
If one outcome never happens (i.e. it is known that it will not happen in the future), then saying what you "should" do is a type error. There is only what you will do. One-boxing becomes a description, not a prescription.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2013-11-22T19:54:20.653Z · LW(p) · GW(p)
One-boxing is not necessarily what you will do. You can still judge incorrectly, and choose to two-box, and end up with $1,000. That's something you can still choose to do, but not what you should do.
comment by Keller · 2013-11-22T17:39:51.478Z · LW(p) · GW(p)
I worry that I harmed the results by mentioning that I have meditated for cognitive benefit reasons, without a way to note that it wasn't to deal with Akrasia. I wanted to answer truthfully, but at the same time the truthful answer was misleading.
Replies from: JenniferRM, Vaniver↑ comment by JenniferRM · 2013-11-24T04:43:00.577Z · LW(p) · GW(p)
Searched for a comment on this, found yours, and upvoted because I share the test design concern. In my case I ended up saying "No" to all technique questions other than "Other", despite having dealt in the past with something that might be called "akrasia" and also despite having taken vitamins, and tried therapy and meditation in the past.
I assumed, because of each "How well did X help with akrasia?" followup question that there was an implicit "Have you done X for akrasia?" whenever it asked about "doing X", and I've never thought vitamins or therapy or meditation would help with akrasia and didn't do them for that and didn't track how they interacted.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-11-24T18:09:43.870Z · LW(p) · GW(p)
Likewise, I've done "other things to improve your mental functioning" that have nothing to do with akrasia, and, conversely, other things about akrasia which have nothing to do with mental functioning (e.g. Beeminder and LeechBlock).
↑ comment by Vaniver · 2013-11-22T17:45:11.210Z · LW(p) · GW(p)
I worry that I harmed the results by mentioning that I have meditated for cognitive benefit reasons, without a way to note that it wasn't to deal with Akrasia.
If you didn't record yourself as having akrasia, this seems like it's still useful information. It can be interesting to compare "these are the things akratics try for cognitive self-improvement" and "these are the things non-akratics try for cognitive self-improvement," and the survey didn't specify to skip that section if you don't consider yourself as having serious akrasia.
If you do consider yourself as having had serious akrasia, and meditated for unrelated reasons, then I'm not sure what I would respond there, although it seems like you might have some information about whether or not meditation helps with akrasia.
comment by komponisto · 2013-11-22T20:35:30.242Z · LW(p) · GW(p)
Taken.
comment by David Althaus (wallowinmaya) · 2013-11-22T19:14:52.433Z · LW(p) · GW(p)
Took the survey.
comment by covaithe · 2013-11-22T19:10:18.885Z · LW(p) · GW(p)
Survey taken. I defected, because I am normally a staunch advocate of cooperation and the stakes were low enough that it seemed like a fun opportunity to go against my usual inclinations. If I had read the comments first, I would likely have been convinced by some of the cooperation arguments advanced here.
Replies from: Vanivercomment by [deleted] · 2013-11-22T18:46:58.900Z · LW(p) · GW(p)
Took the survey.
comment by DubiousTwizzler · 2013-11-23T16:32:49.097Z · LW(p) · GW(p)
Survey taken
comment by Zaq · 2013-11-22T22:23:02.704Z · LW(p) · GW(p)
Took the survey. I definitely did have an IQ test when I was a kid, but I don't think anyone ever told me the results and if they did I sure don't remember it.
Also, as a scientist I counted my various research techniques as new methods that help make my beliefs more accurate, which means I put something like 2/day for trying them and 1/week for them working. In hindsight I'm guessing this interpretation is not what you meant, and that science in general might count as ONE method altogether.
comment by mcallisterjp · 2013-11-22T20:31:25.884Z · LW(p) · GW(p)
Surveyed. Looking forward to the data and analysis, as per every year.
comment by LoganStrohl (BrienneYudkowsky) · 2013-11-24T06:03:13.649Z · LW(p) · GW(p)
Survey complete! I answered ALL the questions. ^_^
comment by Stabilizer · 2013-11-23T03:21:46.149Z · LW(p) · GW(p)
Took it.
I definitely gave a finite probability for "God" if "God" defined as a super-intelligent being that created the universe. This is of course quite different from an intervening god who is interested in say, human affairs.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-11-23T22:12:39.879Z · LW(p) · GW(p)
I definitely gave a finite probability for "God" if "God" defined as a super-intelligent being that created the universe. This is of course quite different from an intervening god who is interested in say, human affairs.
Why? Since human brains are probably the most complicated, hence interesting, systems in the universe (with the possible exception of the brains of any intelligent aliens), a super-intelligent creator would probably be very interested in human affairs.
Replies from: Dentin↑ comment by Dentin · 2013-11-25T03:43:52.579Z · LW(p) · GW(p)
Since human brains are probably the most complicated, hence interesting, systems in the universe (with the possible exception of the brains of any intelligent aliens)
I see no reason whatsoever to believe that brains are in fact that complicated, and further, not everything complicated is interesting.
comment by VAuroch · 2013-11-22T22:27:07.159Z · LW(p) · GW(p)
Took survey. Reminded me that I've never had an IQ test; is it worthwhile?
Replies from: hylleddin, eurg, Vaniver↑ comment by Vaniver · 2013-11-23T18:48:34.429Z · LW(p) · GW(p)
Reminded me that I've never had an IQ test; is it worthwhile?
IQ tests are mostly useful for other people evaluating you. Compare to, say, taking the SAT without wanting to go to college. It's mildly useful for you to know how your SAT score compares to other peoples', but it's very useful for colleges deciding whether or not to admit you.
Most tests will give you your subtest results, so you can see "okay, I'm better at doing visual processing than auditory processing" or related things, but this will rarely be a surprise; in that case, you probably already preferred reading books to listening to audiobooks, and in the reverse case you probably preferred the reverse.
comment by Sigmaleph · 2013-11-22T22:08:44.123Z · LW(p) · GW(p)
Took the survey. I was unusually confident of an incorrect number for the population of Europe because I looked it up recently, but remembered it wrong.
Guess I learned something, in that I should adjust down my confidence in recalled figures after a few weeks.
comment by AndekN · 2013-11-25T19:24:35.796Z · LW(p) · GW(p)
I took the survey.
This is, incidentally, my first comment on LessWrong. I've lurked for years, and pretty much thought I'd stay a lurker for good. For some reason taking the survey made me want to break my silence. So that's a bonus, I guess.
comment by Alexander · 2013-11-24T00:12:02.207Z · LW(p) · GW(p)
Another lurker that took the (full) survey and signed up...
I discovered LW last year through gwern.net.
My biggest barrier to registration was the risk of more procrastination. So, thanks in advance for any encouragement!
Replies from: christopherj↑ comment by christopherj · 2013-12-03T01:37:59.842Z · LW(p) · GW(p)
I recommend not procrastinating! (I'm one to talk)
comment by michaelsullivan · 2013-11-23T04:01:42.139Z · LW(p) · GW(p)
taken.
comment by Sabiola (bbleeker) · 2013-11-25T17:35:33.075Z · LW(p) · GW(p)
Took the survey.
comment by Username · 2013-11-25T06:14:23.045Z · LW(p) · GW(p)
Surveyed! I noticed that someone said that they cooperated on the prisoner's dilemma problem, so I'll balance the odds and tell you all that I defected. Am curious to see if this will reflect in the karma people give this comment.
Also, I wouldn't do this, but you leave the option open for someone to stuff the ballot box by taking the survey a bunch of times to improve their chance of winning the money. Are you screening for duplicate IP addresses?
comment by Fivehundred · 2013-11-25T03:18:09.000Z · LW(p) · GW(p)
I took it, and even did the bonus questions. Yay me!
comment by Larks · 2013-11-23T22:34:10.131Z · LW(p) · GW(p)
Survey completed! Also, everyone, please cooperate!
Yvain, will you reveal who won the money? Whether they cooperated or defected?
Replies from: lmm↑ comment by lmm · 2013-11-26T12:20:31.095Z · LW(p) · GW(p)
That would be rather unfair to defectors, I think.
Replies from: William_Quixote↑ comment by William_Quixote · 2013-11-26T14:08:59.306Z · LW(p) · GW(p)
As a wise man once said, "Not fair? Who's the *uc&ing nihilist around here?" And by nihilist I mean defector.
Replies from: lmm
comment by CaptainBooshi · 2013-11-23T20:37:18.651Z · LW(p) · GW(p)
Took the survey yesterday and forgot to comment here afterwards. I chose to cooperate since the small chance of winning a little money mattered less to me than the pleasure I would get through even such a minor show of benevolence. I also have never taken an IQ test, and am glad to see at least a fair number of other people in the comments who have not either.
Replies from: gwern↑ comment by gwern · 2013-11-23T22:19:13.510Z · LW(p) · GW(p)
You wouldn't necessarily have known you were taking an IQ test. I learned I was administered IQ tests in elementary school only by accident, when I found a summary in my parents' papers. 'So', I thought, 'that's why my speech therapist kept asking me questions any fool would know, like the meaning of the word "gyp".'
Replies from: JoshuaZ
comment by [deleted] · 2013-11-23T10:46:19.681Z · LW(p) · GW(p)
I took the survey.
comment by free_rip · 2013-11-23T09:30:53.570Z · LW(p) · GW(p)
Took the survey. The prisoner's dilemma was a nice addition. It would be interesting next year to ask 'would you co-operate in a prisoner's dilemma situation?' earlier in the survey, before the for-stakes version, and compare how often people co-operate in the for-stakes version then as compared to this year (also compare across who has taken a LW census before, since this one might bias that a bit).
comment by dthunt · 2013-11-24T00:45:01.917Z · LW(p) · GW(p)
Took the survey.
Would probably not have defected a year ago, and it would not have been an easy decision for me at that time.
I appear to be getting better at estimating.
I think the IQ questions should probably just be dropped from future surveys. A number of people get tested as kids, get crazy numbers, and never get tested again (since there's no real point, and because people are generally afraid of seeing that number dive, people who get a crazy number are probably less likely to retest than others). That's a charitable explanation for the results in last year's survey, which I didn't take.
Replies from: simplicio
comment by Sophronius · 2013-11-23T22:53:51.843Z · LW(p) · GW(p)
I just took the survey. Thanks for spending time on making and evaluating it! A few questions/comments:
When you asked for time spent on Less Wrong, did you mean the mean time or the median time? I assumed the mean, which resulted in a higher number, since I occasionally come here to procrastinate and spend way too much time in a single sitting...
Am I interpreting the agathics question correctly in that a person dying, getting frozen cryonically, and then being unfrozen and living for 1000 years would count?
The Singularity question, which starts by asking when the Singularity (with a capital S) will occur, seems a bit leading to me. I'd expect that if you asked "Do you think a singularity will occur, and if so, when?" people would give lower probabilities.
comment by faul_sname · 2013-11-23T05:19:54.171Z · LW(p) · GW(p)
Took the survey.
Got the Europe question right, unless Yvain rounds -- I was off by 9.90%.
Replies from: knb
comment by [deleted] · 2013-11-23T04:00:49.858Z · LW(p) · GW(p)
Survey (mostly) done. My answers about the future were based on this comment
http://lesswrong.com/lw/iyc/new_vs_businessasusual_future/a13w
and I assigned equal probabilities to the five listed outcomes over the next few centuries.
comment by [deleted] · 2013-11-23T01:34:13.813Z · LW(p) · GW(p)
Having completed the survey, I took this as an opportunity to register an account.
comment by Zubon · 2013-11-23T01:17:02.532Z · LW(p) · GW(p)
I hereby take part in the tradition and note that the tradition makes the following moot for relatively low levels of karma. You may round off your karma score if you want to be less identifiable. If your karma score is 15000 or above, you may put 15000 if you want to be less identifiable.
Income question: needs to specify individual or household. You may also want to specify sources, such as whether to include government aid, only include income from wages, or separate boxes for different categories of income.
I have done professional survey design and am available to assist with reviewing the phrasing of questions for surveys, here or on other projects.
Replies from: PeterisP↑ comment by PeterisP · 2013-11-23T07:12:53.749Z · LW(p) · GW(p)
The income question needs to be explicit about whether it's pre-tax or post-tax, since there's a huge difference, and the "default measurement" differs between cultures: in some places "I earn X" means pre-tax, and in some places it means post-tax.
Replies from: eurg↑ comment by eurg · 2013-11-23T16:51:13.831Z · LW(p) · GW(p)
Also, in many European countries it means "pre-tax for some taxes and post-tax for others", because one part is paid by the employer and the other by the employee. Populism, politics and economics. Good results guaranteed.
Replies from: hyporational, kalium↑ comment by hyporational · 2013-11-27T13:46:17.985Z · LW(p) · GW(p)
Yeah, and don't forget VAT and similar taxes.
comment by ArisKatsaris · 2013-11-22T22:35:54.232Z · LW(p) · GW(p)
Took the survey. Cooperated.
comment by Zack_M_Davis · 2013-11-22T03:34:59.212Z · LW(p) · GW(p)
For the Prize Question, you should use a random number generator and cooperate with probability 0.8. Why? Suppose that the fraction of survey-takers that cooperate is p. Then the value of the prize will be proportional to p and there will be p + 4(1 - p) raffle entries. The expected value of Cooperating is p/(p + 4(1-p)) and the expected value of Defecting is 4(1-p)/(p + 4(1-p)). In equilibrium, these must be the same: if one choice were more profitable than the other, then people would switch until this was no longer the case. Thus p = 4(1 - p) and thus p = 4/5.
Addendum 29 November: Actually, this is wrong; see ensuing discussion.
Replies from: ThrustVectoring, stevko↑ comment by ThrustVectoring · 2013-11-22T04:44:20.817Z · LW(p) · GW(p)
The expected value of defecting is 4p/(p + 4(1 - p)), to within one part in the number of survey takers. Whether or not you defect makes no difference to the proportion of people who defect.
The solution is to determine how likely it is that a random participant is going to defect, conditional on your choice of cooperate or defect. If you're playing with a total of N copies of yourself, you cooperate and get the maximal payout ($60/N). If you're playing against cooperate bots, you defect and get $60*4N/(N-1).
We can generalize this to partial levels. If you play with D defectors and C cooperators whose opinion you can't change, and X people who will cooperate when you cooperate (and defect when you defect), then the payouts are as thus:
C: (C + X)/(C + D + X), D: 4C/(C + D + X)
You can solve for the break-even point by setting C + X = 4C.
So the answer is that you should defect, unless you think that for every person who is going to cooperate no matter what, there are at least three people who are thinking with similar enough reasoning to come up with the same answer you come up with (regardless of what answer that is).
Replies from: Oscar_Cunningham, hylleddin↑ comment by Oscar_Cunningham · 2013-11-25T22:56:33.703Z · LW(p) · GW(p)
I think you've got the denominators of your fractions wrong. There are 4 raffle tickets for everyone who defects. I get the values
C: (C + X)/(C + 4D + X), D: 4C/(C + 4D + 4X)
which solves to a horrible quadratic surd.
If we wanted to we could combine your method with Zack's and assume that C people cooperate, D defect and X make the same choice I do, which is to cooperate with probability p. I think this gets kinda ugly though.
Replies from: ThrustVectoring↑ comment by ThrustVectoring · 2013-11-26T00:44:15.408Z · LW(p) · GW(p)
The fractions I wrote are payout * number of tickets, not the chance of winning. But you do have a point: changing many people from cooperate to defect does dilute the total pool of tickets, and not by a negligible amount.
The corrected answer is Payout * Chance to win, which is:
C: ((C + X)/(C + D + X)) * (1/(C + 4D + X))
D: (C/(C + D + X)) * (4/(C + 4D + 4X))
And you don't want to combine my method with Zack's. You don't want a probabilistic strategy - you want to figure out what your beliefs are as far as "how many people do I expect to be in categories C, X, and D". Given your beliefs about how your choices affect others, there's exactly one right choice.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2013-11-26T16:05:59.391Z · LW(p) · GW(p)
(By the way, the numbers I gave are the same as the ones you gave, only I cancelled a common factor of (C+D+X))
And you don't want to combine my method with Zack's. You don't want a probabilistic strategy - you want to figure out what your beliefs are as far as "how many people do I expect to be in categories C, X, and D". Given your beliefs about how your choices affect others, there's exactly one right choice.
I think that your "one right choice" might sometimes be a probabilistic one. To make this more obvious, consider a game where the value of the prize is maximal when exactly half of the participants choose C, and the value goes down as the proportion gets further from a half (and any of the participants is equally likely to win the prize). Then I think it's obvious that the correct strategy is to estimate C, D, and X as before, and then cooperate with probability p so that C+pX=D+(1-p)X. Then because everyone else in X acts as you do you'll end up with exactly half the people choosing C, which is what you want.
Note that even some of the people in X who you are "acausally controlling" still end up choosing a different option from you (assuming that your random number generators are independent). This allows you to exactly optimise the proportion of people who choose C, which is what makes the strategy work.
I think the same thing applies in Yvain's game. In particular, if we thought that C=D=0 then I think that Zack's analysis is exactly correct (although I wouldn't have used exactly the same words as he does).
EDIT: I retract the last sentence. Zack's calculation isn't what you want to do even in the C=D=0 case. In that case I endorse cooperating with p=1. But I still think that mixed strategies are best in some of the cases with C or D non-zero. In particular what about the case with D=0 but C=X? Then I reckon you should pick C with p=0.724.
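The "exactly half cooperate" variant described above has a closed form: solving C + pX = D + (1 - p)X for p gives p = (D + X - C)/(2X). A minimal sketch, with the function name being mine rather than anything from the thread:

```python
# Mixed strategy for the "exactly half cooperate" game: pick p so that the
# expected cooperators, C + p*X, equal the expected defectors, D + (1-p)*X.
def balance_p(C, D, X):
    p = (D + X - C) / (2 * X)
    return min(1.0, max(0.0, p))  # clamp to a valid probability

print(balance_p(10, 10, 20))  # symmetric case: 0.5
```

When C already exceeds D + X the balance point is unreachable, which is why the clamp is there.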
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2013-11-30T05:57:35.503Z · LW(p) · GW(p)
I think this is it. Suppose there are C CooperateBots, D DefectBots, and X players who Cooperate with probability p. The expected utility of the probabilistic strategy is (proportional to) (p(C + pX) + 4(1-p)(C + pX))/(C + 4D + pX + 4(1-p)X). Then (he said, consulting his computer algebra system) if C/X < 1/3 then p = 1 (Cooperate), if C/X > 3 then p = 0 (Defect), and p assumes intermediate values if 1/3 < C/X < 3 (including 0.7239 if C/X = 1, as you mention).
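The expected-utility expression above is easy to check numerically. A rough sketch via grid search; the names eu and best_p are mine, not Zack's, and the setup uses the same expected-cooperator approximation as the comment:

```python
# Expected utility of cooperating with probability p, given C CooperateBots,
# D DefectBots, and X players who mirror your mixed strategy.
def eu(p, C, D, X):
    prize = C + p * X                                    # expected cooperators
    total_tickets = C + 4 * D + p * X + 4 * (1 - p) * X  # expected raffle pool
    my_tickets = p * 1 + (1 - p) * 4
    return prize * my_tickets / total_tickets            # proportional to winnings

def best_p(C, D, X, steps=100000):
    # crude grid search for the p that maximises expected utility
    return max((i / steps for i in range(steps + 1)),
               key=lambda p: eu(p, C, D, X))

print(best_p(1, 0, 1))  # about 0.724 when C = X and D = 0
print(best_p(1, 0, 4))  # small C/X: pure cooperation
print(best_p(4, 0, 1))  # large C/X: pure defection
```

With C = X and D = 0 the search lands near p ≈ 0.724, matching the value quoted in the comment.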
↑ comment by hylleddin · 2013-11-22T09:45:22.307Z · LW(p) · GW(p)
The expected value of defecting is 4p/(p + 4(1 - p)), to within one part in the number of survey takers. Whether or not you defect makes no difference to the proportion of people who defect.
Unless you're using timeless decision theory, if I understand TDT correctly (which I very well might not). In that case, the calculations by Zack show the amount of causal entanglement for which cooperation is a good choice. That is, P(others cooperate | I cooperate) and P(others defect | I defect) should be more than 0.8 for cooperation to be a good idea.
I do not think my decisions have that level of causal entanglement with other humans, so I defected.
Though, I just realized, I should have been basing my decision on my entanglement with lesswrong survey takers, which is probably substantially higher. Oh well.
Replies from: Oscar_Cunningham, hylleddin↑ comment by Oscar_Cunningham · 2013-11-26T16:40:25.426Z · LW(p) · GW(p)
Though, I just realized, I should have been basing my decision on my entanglement with lesswrong survey takers, which is probably substantially higher. Oh well.
I defected for the same reasons as you. We're entangled! Reading the responses of the other survey takers I think it's clear that very few people are entangled with us, so we did indeed make the right choice!
↑ comment by hylleddin · 2013-11-22T09:55:19.360Z · LW(p) · GW(p)
Nevermind, you already covered this, though in a different fashion.
Replies from: ThrustVectoring↑ comment by ThrustVectoring · 2013-11-22T15:30:25.039Z · LW(p) · GW(p)
Yeah, and the math is a little different: you need three entangled decision-makers for each cooperate-bot you can defect against (the number of defectors doesn't matter, surprisingly). By defecting you get three extra chances at the money generously donated to the pool by the cooperate-bots, compared to causing a certain number of people to help you make the pool even larger.
comment by [deleted] · 2013-11-26T02:14:04.403Z · LW(p) · GW(p)
Well that was the most interesting survey I have taken in a long time - looking forward to seeing the results. I was a little concerned at the start, as it seemed like some sort of dating service so the comment 'hang in there - this bit is almost over' was well placed.
comment by Jennifer_H · 2013-11-25T07:01:37.264Z · LW(p) · GW(p)
One survey (and bonus questions!) completed.
comment by Yaakov T (jazmt) · 2013-11-24T05:18:39.018Z · LW(p) · GW(p)
I took the survey.
Thank you for putting this together. Some of the questions were unclear to me. For example:
- Does "living with family" mean my parents, or my spouse and children? (I guessed the former, but was unsure.)
- For the politics question, there should be an option for not identifying with any label (or, if that would lead to everyone declining to be labeled, an option for disinterest in politics could be an alternative).
- Should an atheist who practices a religion (e.g. Buddhism) skip the question on religion?
- P(Aliens): this question leaves out the time dimension, which seems important to establishing a probability for aliens. E.g., if aliens live 5 billion light years away, are we asked the probability that there were aliens there 5 billion years ago, such that we could receive a message from them now, or the probability that there are aliens now, whom we will not be able to discover for another few billion years?
- P(Supernatural): it's not clear what counts as a supernatural event. E.g., God is included, even though most would not define God as an event, nor as occurring since the beginning of the universe (since if God created the universe, he is either non-temporal or prior to the universe).
- For the CFAR questions, I wasn't sure what qualified as a "plausible-seeming technique or approach for being more rational / more productive / happier / having better social relationships / having more accurate beliefs / etc." Does it have to be a brand-new technique, or can it be a modification of one already known? Is it asking about generic techniques, or also domain-specific ones? Also, most techniques I try are not ones I hear about but ones I come up with on my own; I don't know if others here are similar. All of the change questions seemed poorly defined and unclear to me.
comment by [deleted] · 2013-11-23T00:36:18.214Z · LW(p) · GW(p)
I took the survey.
comment by linkhyrule5 · 2013-11-22T08:36:42.406Z · LW(p) · GW(p)
No, I don't read instructions and am going to ruin the survey results for everyone.
snicker
Also, wow, the population of Europe is wildly lower than I thought it was, it's outside my 90% range...
Random math: one way of deciding whether or not to cooperate on the reward question is to plot reward versus the percentage of UDT users in the LW community (under the assumption that everyone in that set will do the same thing you do, and everyone else splits 50-50). If that percentage is larger than about 65% (which I'm 70% sure it is), cooperating is superior to defecting, but defection actually has the higher maximum expected value - if the entire community chooses randomly, anyway.
...
blink blink
Aw, darn it, I should've flipped a coin...
Edit: No, wait, nevermind, that would halve my expected reward.
comment by [deleted] · 2013-11-26T04:18:16.901Z · LW(p) · GW(p)
I did the survey, mostly.
comment by MugaSofer · 2013-11-24T19:52:31.375Z · LW(p) · GW(p)
Surveyed, including bonus. Only just remembered to comment.
I see the logic, but I did think that the Prisoner's Dilemma question was overly complicated, possibly leading to some participants not making the connection to their beliefs about How To Behave In Prisoner's Dilemmas (well, I see now from below that it led to at least one).
I have no idea if this is a good or bad thing.
comment by Emile · 2013-11-24T09:31:58.677Z · LW(p) · GW(p)
I have taken the survey, thanks a lot Yvain!
I wouldn't have minded if it was shorter.
One minor nitpick for next time: there were a couple of questions where the title was the opposite of what the question was about: P(Global catastrophic risk) was actually about P(no global catastrophic risk), and the Defect calibration question was about how many people cooperated.
I suspect a couple people might not read the questions and answer the opposite of what they meant.
comment by ialdabaoth · 2013-11-24T02:18:35.317Z · LW(p) · GW(p)
Took the survey.
A few observations:
The family's religious background question should probably include an 'Atheist/Agnostic' answer, rather than just lumping it in with 'Other'. At the very least, it would be interesting to see what kinds of patterns the 'Other' box breaks down into.
I computed P(Supernatural) as dependent on P(Simulation), based on my understanding of the two concepts. Would anyone be interested in a Discussion page on whether those probabilities can be logically separated?
↑ comment by hyporational · 2013-11-24T07:03:32.742Z · LW(p) · GW(p)
I computed P(Supernatural) as dependent on P(Simulation)
I did the same with god first, but then realized that god was already lumped in with ghosts and fairies and stuff as supernatural and didn't want to make that group look more probable.
Replies from: ialdabaoth↑ comment by ialdabaoth · 2013-11-24T16:59:00.981Z · LW(p) · GW(p)
Why not? Once we've established that 'Simulation' allows 'Supernatural', why limit the allowed Supernatural agents to only come from Superuser/root accounts?
Replies from: hyporational↑ comment by hyporational · 2013-11-24T17:33:37.550Z · LW(p) · GW(p)
I would have envisioned that anything outside the simulation is supernatural, not that simulations allow supernatural things as they're usually understood. I don't remember whether the god question meant just a creator or also an intervener. The usual simulation hypothesis is sufficient to establish that a supernatural creator exists. For an intervening god, or for unicorns and goblins you'd need extra evidence as it seems we live in a universe where empiricism works well.
Replies from: Kurros↑ comment by Kurros · 2013-11-26T00:40:29.502Z · LW(p) · GW(p)
To me, the simulation hypothesis definitely does not imply a supernatural creator. 'Supernatural' implies 'unconstrained by natural laws', at least to me, and I see no reason to expect that the simulation creators are free from such constraints. Sure, it means that supernatural-seeming events can in principle occur inside the simulation, and the creators need not be constrained by the laws of the simulation since they are outside of it, but I fully expect that some laws or other would govern their behaviour.
Replies from: ialdabaoth, Lion↑ comment by ialdabaoth · 2013-11-26T00:50:59.737Z · LW(p) · GW(p)
To me, "Supernatural" needs to be evaluated from within the framework of the speaker's reality. Otherwise, the term loses all possible semantic meaning.
Replies from: Kurros, Lion↑ comment by Kurros · 2013-11-27T09:02:16.244Z · LW(p) · GW(p)
But don't you think there is an important distinction between events that defy logical description of any kind, and those that merely require an outlandish multi-layered reality to explain? I admit I can't think of anything that could occur in our world that cannot be explained by the simulation hypothesis, but assuming that some world DOES exist outside the layers of nested simulation, I can (loosely speaking) imagine that some things really are logically impossible there. And that if the inhabitants of that world observe such impossible events, they will wrongly conclude that they are in a simulation, when actually there will be truly supernatural happenings afoot.
I mention this somewhat pointless story just because religious philosophers would generally not accept that God is merely supernatural in your sense; I think they would insist on something closer to my sense, nonsense though it may be.
comment by gwillen · 2013-11-24T01:03:16.334Z · LW(p) · GW(p)
Surveyed!
Thanks for putting this together.
Perceived flaws:
Percentages are probably not the best way to elicit well-calibrated guesses about very probable or very improbable events. (The difference between 1/1,000 and 1/1,000,000 is a lot bigger in reality than it looks, when you put them both between 0 and 1 on a scale of 0 to 100.)
Computing P(Many Worlds) requires assuming that the phrase "Many Worlds" refers to a specific set of concrete predictions about the nature of the universe, which admit the possibility of truth or falsity. I tend to disagree with that presumption.
P(Anti-Agathics) seems, from the name, not to be intended to include cryonics, but does seem to include it in the actual text. I predict paradoxical answers in which people give P(Cryonics) > P(Anti-Agathics), even though cryonics is a way of allowing a person alive today to reach the age of 1000 years.
P(Simulation) may or may not actually be a well-defined question. If, as some people are surely visualizing while answering it, there are aliens somewhere hovering over a computer terminal with us running on it, certainly the answer is 'yes'. Whatever the reality, it seems likely to be a lot stranger than that. Eliezer's own "Finale of the Ultimate Meta Mega Crossover" describes a scenario (admittedly fanciful) in which one would be hard pressed to answer the "simulation" question with a simple yes or no.
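The first point in the list above can be made concrete with log-odds, a scale on which differences between extreme probabilities are easier to see. A small illustration (base-10 log-odds; the function name is mine, and nothing here is from the survey itself):

```python
import math

def log_odds(p):
    # base-10 log-odds: 0 at 50%, roughly -3 at 1/1,000, roughly -6 at 1/1,000,000
    return math.log10(p / (1 - p))

for p in (0.5, 1e-3, 1e-6):
    print(p, round(log_odds(p), 2))
```

On this scale 1/1,000 and 1/1,000,000 sit about three units apart, even though both are squeezed next to zero on a 0-100% axis.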
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-11-24T17:29:03.940Z · LW(p) · GW(p)
Percentages are probably not the best way to elicit well-calibrated guesses about very probable or very improbable events. (The difference between 1/1,000 and 1/1,000,000 is a lot bigger in reality than it looks, when you put them both between 0 and 1 on a scale of 0 to 100.)
None of the questions in the survey sound to me like ones where one could easily become more than 99% sure (outside an argument).
Eliezer's own "Finale of the Ultimate Meta Mega Crossover" describes a scenario (admittedly fanciful) in which one would be hard pressed to answer the "simulation" question with a simple yes or no.
Thanks for the spoiler. ;-|
comment by Jayson_Virissimo · 2013-11-22T21:49:58.861Z · LW(p) · GW(p)
Done. There were a few questions that were iffy, but overall I think this year's survey was a significant improvement from previous versions. Thanks Yvain for doing this.
comment by Eneasz · 2013-11-22T21:19:11.468Z · LW(p) · GW(p)
I'm seconding the request for next year to include a Monogamish option. I'm in a basically monogamous relationship, but we both sometimes sleep with friends.
(also I took the survey)
Replies from: MixedNuts↑ comment by MixedNuts · 2013-11-23T10:49:52.427Z · LW(p) · GW(p)
Why do you want this to be a separate option, rather than "other"?
Replies from: Eneasz↑ comment by Eneasz · 2013-11-26T16:35:59.529Z · LW(p) · GW(p)
Because I think it's one of the three major relationship models. Pure Monogamy is traditional, and Polyamory is the reaction against it, but Monogamish is how a lot of relationships actually work (while operating under the cloak of monogamy). It's like a worldwide religion survey allowing only "Christian" and "Muslim", and lumping Hinduism under "Other". There's another major option here that should be broken out.
Replies from: MixedNuts↑ comment by MixedNuts · 2013-12-01T20:08:25.186Z · LW(p) · GW(p)
Last year there were 2% "other" answers, versus 13% "polyamorous" and 30% "uncertain/no preference" ones. This suggests there is no need to break down "other" any further, unless people in relationship models like yours pick "uncertain" rather than "other" and would switch if "monogamish" was an option.
Replies from: Eneasz
comment by [deleted] · 2013-11-22T16:59:47.781Z · LW(p) · GW(p)
Ok, went and took the survey.
And I only lied about one question!
comment by Dan_Moore · 2013-11-25T15:33:15.797Z · LW(p) · GW(p)
I completed the survey & had to look up the normative ethics choices (again). Also cisgender. I cooperated with the prisoner's dilemma puzzle & estimated that a majority of respondents would also do so, given the modest prize amount.
Also, based on my estimate of a year in Newton's life in last year's survey, I widened my confidence intervals.
comment by sketerpot · 2013-11-25T04:52:44.664Z · LW(p) · GW(p)
Took the survey. Cooperated because most puzzles which explicitly use the words "cooperate" and "defect" have been created in such a way as to make cooperation the better choice.
(Considering my fairly low chances of winning, a deep analysis would have had only recreational value, and there were other fun things to do.)
comment by NoSuchPlace · 2013-11-24T17:35:09.037Z · LW(p) · GW(p)
Completed survey.
comment by Aharon · 2013-11-22T18:56:18.238Z · LW(p) · GW(p)
I'm a European, and the thought that geographical Europe might be meant didn't even occur to me, since in most of my daily interactions (media consumed, small talk, etc.) "Europe" is used interchangeably with "European Union". Teaches me to read such survey questions more thoroughly.
I want to congratulate you on how well you integrated the many suggestions you got; I see many improvements compared to the 2012 survey (for example, the introductory text convinced me to take the survey right away, when I was one of those who put it off last year).
Replies from: None↑ comment by [deleted] · 2013-11-25T10:58:08.533Z · LW(p) · GW(p)
I'm European (EU member state) and it didn't occur to me anyone would be interested in the combined population of EU member states.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-11-26T09:26:15.794Z · LW(p) · GW(p)
Might have something to do with the fact that your country hadn't entered the EU until relatively recently. (Mine was one of the founding members of the EEC, but I'm quite familiar with the Wikipedia article naming conventions so I correctly guessed what the question was about.)
comment by [deleted] · 2013-11-26T05:59:58.302Z · LW(p) · GW(p)
Done. Loved the prisoner's dilemma.
comment by Peter Wildeford (peter_hurford) · 2013-11-26T05:09:18.189Z · LW(p) · GW(p)
Survey'd!
comment by BronecianFlyreme · 2013-11-26T04:32:18.899Z · LW(p) · GW(p)
Surveyed! And for the first time, too. This survey was pretty interesting and definitely not what I expected.
comment by Suryc11 · 2013-11-25T04:09:46.715Z · LW(p) · GW(p)
Took the survey. Very interesting questions overall, especially the site-wide Prisoner's Dilemma.
I'd like to note that I was very confused by the (vague and similar) CFAR questions regarding the possibility of people changing, but I'm assuming that was intentional and look forward to an explanation.
comment by moridinamael · 2013-11-25T01:00:11.963Z · LW(p) · GW(p)
Mission complete.
comment by TheMajor · 2013-11-24T22:39:12.551Z · LW(p) · GW(p)
Took the survey, and continued to finally make an account. Some questions were ambiguous though (as some other people partially pointed out). I had most problems with:
- Having more children. Over which period of time? As an adolescent I'm not really keen on having children just yet, but I might be in 15 or 20 years.
- Time on LW: I've recently finished reading almost all posts on LW, which meant I spent several hours a day on LW. But now that I have finished reading all those I am only reading new posts, which takes no more than 5-10 minutes a day on average. So there is a large difference between a best estimate of the amount of time I spent on LW any previous day and the best estimate of the time I will spend tomorrow. Which of these is the average day?
- Hear about: I had problems interpreting the question. Taking the wording literally the category specified is extremely broad, including even casual comments by colleagues along the lines of: 'Try checking the batteries more frequently.' (which is a technique to improve your productivity, provided batteries are important in your line of work).
- Akrasia: meditation. I've meditated after sporting frequently in the past, which had nothing to do with akrasia. I decided not to mention the meditation (contrary to Keller, whose comment I only noticed after filling in the survey).
comment by EGI · 2013-11-24T18:20:12.792Z · LW(p) · GW(p)
Surveyed, including bonus.
I really liked the monetary-reward prisoner's dilemma. I am really curious how this turns out. Given the demographic here, I would predict ~85% cooperate.
The free-text options were rendered in German ("Sonstige"). Was that a bug, or does it serve some hidden purpose?
Replies from: aspera, army1987↑ comment by aspera · 2013-11-25T18:05:49.030Z · LW(p) · GW(p)
My confidence bounds were 75% and 98% for defect, so my estimate was diametrically opposed to yours. If the admittedly low sample size of these comments is any indication, we were both way off.
Why do you think most would cooperate? I would expect this demographic to do a consequentialist calculation, and find that an isolated cooperation has almost no effect on expected value, whereas an isolated defection almost quadruples expected value.
Replies from: EGI, Kurros↑ comment by EGI · 2013-11-26T22:11:53.657Z · LW(p) · GW(p)
My confidence bounds were 75% and 98% for defect, so my estimate was diametrically opposed to yours. If the admittedly low sample size of these comments is any indication, we were both way off.
I expected most of the LessWrong community to cooperate for two reasons:
- I model them as altruistic, as in Kurros's comment.
- I model them as one-boxing in Newcomb's problem.
One consideration I did not factor into my prediction is that, judging from the comments, many people refuse to cooperate in transferring money from CFAR/Yvain to a random community member.
↑ comment by Kurros · 2013-11-26T00:27:48.415Z · LW(p) · GW(p)
You don't think people here have a term for their survey-completing comrades in their cost function? Since I probably won't win either way this term dominated my own cost function, so I cooperated. An isolated defection can help only me, whereas an isolated cooperation helps everyone else and so gets a large numerical boost for that reason.
Replies from: aspera↑ comment by aspera · 2013-11-26T04:08:28.351Z · LW(p) · GW(p)
It's true: if you're optimizing for altruism, cooperation is clearly better.
I guess it's not really a "dilemma" as such, since the optimal solution doesn't depend at all on what anyone else does. If you're trying to maximize EV, defect. If you're trying to maximize other people's EV, cooperate.
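The dominance argument in this subthread can be made concrete with a toy calculation. This is a minimal sketch under assumed payoff rules (chosen purely for illustration, not taken from the survey's exact wording): one respondent is drawn uniformly at random to win; a cooperating winner's prize is $60 scaled by the overall cooperation rate; a defecting winner takes the full $60.

```python
# Toy expected-value comparison for the survey's cooperate/defect question.
# The payoff rules encoded here are assumptions for this sketch, not the
# survey's actual rules.

def expected_value(n_respondents, coop_rate, my_choice):
    """Expected prize for one respondent, under the assumed rules."""
    p_win = 1.0 / n_respondents          # winner drawn uniformly at random
    if my_choice == "cooperate":
        prize = 60.0 * coop_rate         # cooperator's prize scales with cooperation
    else:
        prize = 60.0                     # defector takes the full pot
    return p_win * prize

n = 1000
coop_rate = 0.25                         # e.g. if ~25% of respondents cooperate
ev_coop = expected_value(n, coop_rate, "cooperate")
ev_defect = expected_value(n, coop_rate, "defect")
print(ev_defect / ev_coop)               # prints 4.0
```

At a 25% cooperation rate this reproduces the rough "quadruples expected value" figure above. Note also that with a thousand respondents, one person switching choices moves `coop_rate` by only 0.001, which is why an isolated cooperation has almost no effect on anyone's expected value.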
↑ comment by A1987dM (army1987) · 2013-11-24T18:34:39.791Z · LW(p) · GW(p)
The free text options were rendered in german (Sonstige). Was that a bug or does it serve some hidden purpose?
I think it's Google Docs's fault -- they were in Italian for me.
comment by jpet · 2013-11-25T20:03:52.555Z · LW(p) · GW(p)
Took it. Comments:
Hopefully you have a way to filter out accidental duplicates (i.e. a hidden random ID field or some such), because I submitted the form by accident several times while filling it out. (I was doing it from my phone, and basically any slightly missed touch on the UI resulted in accidental submission).
Multiple choice questions should always have a "none" option of some kind, because once you select a radio button option there's no way to deselect it. Most of them did but not all.
I answered "God" with a significant probability because, the way the definition is phrased, I would say it includes whoever is running the simulation if the simulation hypothesis is true. I'm sure many people interpreted it differently. I'd suggest making this distinction explicit one way or the other next time.
↑ comment by Kurros · 2013-11-26T00:02:32.457Z · LW(p) · GW(p)
It defined "God" as supernatural, didn't it? In what sense is someone running a simulation supernatural? Unless you think, for some reason, that the real external world is not constrained by natural laws?
Replies from: Lion, jazmt, scav, hyporational↑ comment by Lion · 2013-11-26T00:21:29.492Z · LW(p) · GW(p)
Maybe my definition of "supernatural" isn't the correct definition, but I often think of the word as describing certain things which we do not (currently) understand. And if we do eventually come to understand them, then we will need to augment our understanding of the natural laws...Assuming this "supernatural" stuff actually exists.
I suppose a programmer could defy the laws he made for his virtual world when he intervenes from outside the system... But earthly programmers obey the natural physical laws when they mess with the hardware, which also runs based on these same laws. I understand this is what you mean by "constrained by natural laws".
Replies from: NNOTM↑ comment by Nnotm (NNOTM) · 2013-11-26T14:49:18.924Z · LW(p) · GW(p)
There are no "correct" or "incorrect" definitions, though, are there? Definitions are subjective, it's only important that participants of a discussion can agree on one.
Replies from: Lumifer, hyporational↑ comment by Lumifer · 2013-11-26T15:42:19.229Z · LW(p) · GW(p)
There are no "correct" or "incorrect" definitions, though, are there?
Well... Definitions that map badly onto the underlying reality are inconvenient at best and actively misleading at worst.
Besides, definitions do not exist in a vacuum. They can be evaluated by their fitness to a purpose which means that if you specify a context you can speak of correct and incorrect definitions.
Replies from: NNOTM↑ comment by Nnotm (NNOTM) · 2013-11-26T23:13:11.322Z · LW(p) · GW(p)
That's true, though I think "optimal" would be a better word for that than "correct".
↑ comment by hyporational · 2013-11-26T14:53:06.533Z · LW(p) · GW(p)
Even agreement isn't necessary, but successful communication would be nice.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-26T15:36:20.935Z · LW(p) · GW(p)
When A says X to B, it helps if A and B agree on what X refers to at that time, even if X refers to something different when B says X.
Replies from: hyporational↑ comment by hyporational · 2013-11-26T16:27:34.496Z · LW(p) · GW(p)
True. There's also the option "B implicitly understands what A means by X although it usually means something else to B" which is different from "A and B explicitly agree on what X refers to at that time".
Consider also the possibility that A says X to B correctly predicting that it means something else to B. This would also be sufficient for successful communication, no explicit agreement needed.
Perhaps you meant these to be contained in your statement, and NNOTM did too. In that case we both failed to understand each other without explicit agreement :)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-26T16:46:24.373Z · LW(p) · GW(p)
Yes, I agree that (case 1) A and B explicitly agreeing on what X means is different from (case 2) B implicitly understanding what X means to A, or (case 3) A implicitly understanding what X will mean to B.
And, yes, I meant "A and B agree on what X refers to [when A says X to B]" to include all three cases, as well as several others.
And yes, if you understood me to be referring only to case 1, then we failed to understand each other.
Replies from: hyporational↑ comment by hyporational · 2013-11-26T17:08:01.813Z · LW(p) · GW(p)
Could be a language issue. The Finnish word for agreement pretty much always refers to explicit agreement, whereas there is no simple word for implicit agreement in Finnish that isn't directly translatable to "mutual understanding" or something like that.
Replies from: komponisto↑ comment by komponisto · 2013-11-26T17:40:04.482Z · LW(p) · GW(p)
In English, "agree" often means something like "coincide". (And Romance languages sometimes say "coincide" for "agree", as in opinions coinciding.)
↑ comment by Yaakov T (jazmt) · 2013-11-26T16:07:09.737Z · LW(p) · GW(p)
For a discussion of the meaning of supernatural see here: http://onlinelibrary.wiley.com/doi/10.1525/eth.1977.5.1.02a00040/pdf
↑ comment by scav · 2013-11-26T14:46:05.165Z · LW(p) · GW(p)
If everything in your universe is a simulation, then the external implementation of it is at least extra-natural from your point of view, not constrained by any of the simulated natural laws. So you might as well call it supernatural if you like.
If you include all layers of simulation all the way out to base reality as part of the one huge natural system, then everything is natural, even if most of it is unknowable.
Replies from: Kurros, jazmt↑ comment by Kurros · 2013-11-27T08:49:36.298Z · LW(p) · GW(p)
I'm no theologian, but it seems to me that this view of the supernatural does not conform to the usual picture of God philosophers put forward, in terms of being the "prime mover" and so on. They are usually trying to solve the "first cause" problem, among other things, which doesn't really mesh with God as the super-scientist, since one is still left wondering about where the world external to the simulation comes from.
I agree that my definition of the supernatural is not very useful in practice, but I think it is necessary if one is talking about God at all :p. What other word should we use? I quite like your suggested "extra-natural" for things not of this world, which leaves supernatural for things that indeed transcend the constraints of logic.
Replies from: scav, hyporational↑ comment by scav · 2013-11-27T12:38:59.856Z · LW(p) · GW(p)
Well, I can't find any use for the word supernatural myself, even in connection with God. It doesn't seem to mean anything. I can imagine discussing God as a hypothetical natural phenomenon that a universe containing sentient life might have, for example, without the s word making any useful contribution.
Maybe anything in mathematics that doesn't correspond to something in physics is supernatural? Octonions perhaps, or the Monster Group. (AFAIK, not being a physicist or mathematician)
Replies from: Kurros↑ comment by Kurros · 2013-11-27T22:43:52.407Z · LW(p) · GW(p)
Hmm, I couldn't agree with that later definition. Physics is just the "map" after all, and we are always improving it. Mathematics (or some future "completed" mathematics) seems to me the space of things that are possible. I am not certain, but this might be along the lines of what Wittgenstein means when he says things like
"In logic nothing is accidental: if a thing can occur in an atomic fact the possibility of that atomic fact must already be prejudged in the thing.
If things can occur in atomic facts, this possibility must already lie in them.
(A logical entity cannot be merely possible. Logic treats of every possibility, and all possibilities are its facts.)" (from the Tractatus - possibly he undoes all this in his later work, which I have yet to read...)
This is a tricky nest of definitions to unravel of course. I prefer not to call anything supernatural unless it lies outside the "true" order of reality, not just if it isn't on our map yet. I am a physicist though, and it is hard for me to see the logical possibility of anything outside the "true" order of the universe. Nevertheless, it seems to me like this is what people intend when they talk about God. But then they also try to prove that He must exist from logical arguments. These goals seem contradictory to me, but I guess that's why I'm an atheist :p.
I don't know where less "transcendent" supernatural entities fit into this scheme of course. Magic powers and vampires etc. need not necessarily defy logical description; they just don't seem to exist.
I agree that in the end, banishing the word supernatural is probably the easiest way to go :p.
↑ comment by hyporational · 2013-11-27T12:25:00.537Z · LW(p) · GW(p)
I'd like to keep the word supernatural in my (inner?) vocabulary, but "unconstrained by physics" makes absolutely no sense to me, so I tried to choose a definition that doesn't make my brain hurt. If we inspect the roots of the word, we can see it roughly means "above nature", nature here being the observable universe whether it's a simulation or not. I find this definition suits the situation pretty well.
Replies from: Kurros↑ comment by Yaakov T (jazmt) · 2013-11-26T16:05:03.101Z · LW(p) · GW(p)
↑ comment by hyporational · 2013-11-26T13:33:39.030Z · LW(p) · GW(p)
We had some discussion of this here.
comment by Sithlord_Bayesian · 2013-11-23T06:01:12.940Z · LW(p) · GW(p)
Taken. Thanks for putting in the effort to do the surveys. I noticed that the question on IQ calibration asked about "the probability that the IQ you gave earlier in the survey is greater than the IQ of over 50% of survey respondents", and I wondered if you meant to ask instead about (the probability that the IQ given earlier is greater than the reported IQ of over 50% of survey respondents). I recall that people tended to report absurdly high IQs in earlier surveys.
comment by undermind · 2013-11-26T20:56:35.477Z · LW(p) · GW(p)
Did the survey.
Results: I'm better at estimating continental populations than I had thought; I am frustrated by single-option questions in many cases (e.g. domain of study, nothing for significantly-reduced-meat-intake-but-not-strict-vegetarian, interdependent causes of global catastrophe) and questions that are too huge to be well-formulated, let alone reasonably answer (supernatural/simulation/God).
Also the question about aliens made me unaccountably sad: even if I retroactively adjust my estimates of intelligent alien life upwards (which I would never do), I have to face the incredibly low probability that they're in the Milky Way.
comment by bramflakes · 2013-11-22T10:45:07.615Z · LW(p) · GW(p)
Huh, I put svir uhaqerq zvyyvba sbe Rhebcr'f cbchyngvba. Turns out I was thinking of the Rhebcrna Havba, (svir uhaqerq naq frira zvyyvba) engure guna Rhebcr vgfrys, which is substantially higher.
Replies from: None, NancyLebovitz, kilobug↑ comment by [deleted] · 2013-11-22T14:23:43.956Z · LW(p) · GW(p)
Please rot13 this (and spell out the numbers)!
ETA I have not taken the survey yet - skimmed through it yesterday - but when I do, I'll skip the calibration question.
Replies from: bramflakes↑ comment by bramflakes · 2013-11-22T14:56:43.075Z · LW(p) · GW(p)
Oops, sorry! Fixed.
↑ comment by NancyLebovitz · 2013-11-22T18:28:53.199Z · LW(p) · GW(p)
I wish you hadn't posted that-- I read the comments before taking the survey.
Replies from: bramflakes↑ comment by bramflakes · 2013-11-22T21:15:42.140Z · LW(p) · GW(p)
Sorry, I thought it would be buried near the bottom ;c
comment by bgaesop · 2013-11-22T09:31:29.392Z · LW(p) · GW(p)
Several of these questions are poorly phrased. For instance, the supernatural and god questions, as phrased, imply that the god chance should be less than the chance of anything supernatural existing. However, I think (and would like to be able to express) that there is a very small (0) chance of ghosts or wizards, but only a small (1) chance of there being some sort of intelligent being which created the universe - for instance, the simulation hypothesis, which I would consider a subset of the god hypothesis.
Replies from: VAuroch, Jayson_Virissimo↑ comment by Jayson_Virissimo · 2013-11-23T01:39:39.609Z · LW(p) · GW(p)
I interpret a (steel-manned) supernatural (above or outside of nature) event to be something like the Simulator changing program variables from outside the simulation in contradiction with its normal rules of operation. But my priors said that there are more simulations without interference from the Simulator (besides "natural laws", named constants in the source code, initial condition values passed in before run-time, etc...) than with interference, so I assigned a higher probability to the God Hypothesis than to supernatural events having occurred (in our world).
Although, having written this down, I'm not sure my priors made as much sense as it felt like they did beforehand.
comment by Nnotm (NNOTM) · 2013-11-26T14:42:55.366Z · LW(p) · GW(p)
I took it. I was surprised how far I was off with Europe.
comment by Lion · 2013-11-26T07:17:29.554Z · LW(p) · GW(p)
I already commented on other people's comments and got karma while not stating that I took it. Am I still supposed to just say "I took it" and get more karma without commenting anything more of value? Well, I took it. All of it. And I chose to "cooperate" because it seemed more ethical. $30-$60 isn't enough to arouse my greed anyway.
Oh, btw. Hi everybody, I'm new here even though I created this account years ago when I was lurking. I knew I'd come back.
Replies from: witzvo
comment by Ander · 2013-11-25T22:56:37.711Z · LW(p) · GW(p)
Took the survey, and finally registered after lurking for 6 months.
I liked the defect/cooperate question. I defected because it was the rational way to try to 'win' the contest. However, if one had a different goal such as "make Less Wrong look cooperative" rather than "win this contest", then cooperating would be the rational choice. I suppose that if I win, I'll use the money to make my first donation to CFAR and/or MIRI.
Now that I have finished it, I wish I had taken more time on a couple of the questions. I answered the Newcomb's Box problem the opposite of my intent, because I mixed up what 2-box and 1-box mean in the problem (been years since I thought about that problem). I would 1-box, but I answered 2-box in the survey because I misremembered how the problem worked.
Replies from: scav, Kurros, Eneasz, None↑ comment by scav · 2013-11-26T14:52:26.203Z · LW(p) · GW(p)
Heh. I also didn't care about the $60, and realised that taking the time to work out an optimal strategy would cost more of my time than the expected value of doing so.
So I fell back on a character-ethics heuristic and cooperated. Bounded rationality at work. Whoever wins can thank me later for my sloth.
Replies from: RussellThor↑ comment by RussellThor · 2013-12-01T08:34:39.508Z · LW(p) · GW(p)
Same, that's pretty much why I chose cooperate.
↑ comment by Kurros · 2013-11-26T00:08:34.567Z · LW(p) · GW(p)
Lol, I cooperated because $60 was not a large enough sum of money for me to really care about trying to win it, and in the calibration I assumed most people would feel similarly. Reading your reasoning here, however, it is possible I should have accounted more strongly for people who like to win just for the sake of winning, a group that may be larger here than in the general population :p.
Edit: actually that's not really what I mean. I mean people who want to make a rational choice to maximize the probability of winning for its own sake, even if they don't actually care about the prize. I prefer that someone gets $60 and is pleasantly surprised to have won, rather than that I get $1. I predict that overall happiness is increased more this way, at negligible cost to myself. Even if the person who wins defected.
Replies from: Ander↑ comment by Ander · 2013-11-26T01:09:11.028Z · LW(p) · GW(p)
Agreed, I think that the rational action in this scenario depends on one's goal, and there are different things you could choose as your goal here.
I also think I should've set a higher value for my 90% confidence bound on the number of people who would cooperate, because it's quite possible that a lot more people than I expected chose goals for this other than 'winning'.
↑ comment by Eneasz · 2013-11-26T16:41:19.580Z · LW(p) · GW(p)
So if a group using your decision-making-process all took this survey, "rationally" trying to win the contest, they would end up winning $0. :)
Replies from: Ander↑ comment by Ander · 2013-11-26T18:49:09.075Z · LW(p) · GW(p)
Correct, just like people trying to 'win' a single-iteration prisoner's dilemma would defect.
I'm not claiming it's the morally correct option or anything, just that it's the correct strategy if your goal is to win.
Replies from: Eneasz↑ comment by [deleted] · 2014-01-01T17:19:47.526Z · LW(p) · GW(p)
If you had to play Newcomb's problem against the Less Wrong community as Omega, would you one-box or two-box? The community would vote as to whether to put the money in the second box or not; whichever choice got more votes would determine whether the money was in the second box or not. Each player from the community would be rewarded individually if e guessed your choice correctly.
comment by JacekLach · 2013-11-22T21:02:10.669Z · LW(p) · GW(p)
I'm confused by the CFAR questions, in particular the last four. Are they using you as 'the person filling out this survey' or the general you as in a person? "You can always change basic things about the kind of person you are" sounds like the general you. "You are a certain kind of person, and there's not much that can be done either way to really change that" sounds like the specific you.
Help?
Replies from: Adele_L↑ comment by Adele_L · 2013-11-23T02:14:14.107Z · LW(p) · GW(p)
The ambiguity is intentional, apparently.
Replies from: JacekLach↑ comment by JacekLach · 2013-11-23T02:19:56.397Z · LW(p) · GW(p)
Huh!
Now I'm even more confused. How can my answer be useful if they don't know how I interpret the question? Esp. since my answers are pretty much opposite depending on the interpretation...
My bad for not finding that comment. I skimmed through the thread, but didn't see it.
comment by teageegeepea · 2013-11-23T05:43:52.146Z · LW(p) · GW(p)
I tend to dismiss Steven Landsburg's critique of the standard interpretation of experiments along the lines of the Ultimatum Game, since nobody really thinks it through like him. But I actually did think about it when taking this survey (which is not the same as saying it affected my response).
comment by discopirate · 2013-12-05T05:35:38.315Z · LW(p) · GW(p)
I took the survey, after having found out about the site a mere 15 minutes prior. As you might imagine this is my first comment.
comment by fiddlemath · 2013-12-04T18:50:22.914Z · LW(p) · GW(p)
Census'd! And upvoted! But an upvote isn't really quite strong enough to demonstrate my appreciation for this work. Thank you.
comment by KnaveOfAllTrades · 2013-12-01T02:11:05.933Z · LW(p) · GW(p)
Did the whole thing. Cheers to all involved. :)
comment by [deleted] · 2013-11-27T08:46:22.672Z · LW(p) · GW(p)
I made an account after taking this survey.
comment by A1987dM (army1987) · 2013-11-24T17:22:12.255Z · LW(p) · GW(p)
I wanted an ADBOC answer to the HBD question. Lacking that, I answered the question about the belief (regardless of whether I endorse policies that people with the same belief typically endorse -- like I did for the AGW question), but given that (unlike the AGW question) it was in the politics section and that it mentioned a movement, I felt a bit uncomfortable doing that. Also, I interpreted "we" in the Great Stagnation question as "American", given that that's what the cited Wikipedia article says.
In the income question I only counted my PhD scholarship after taxes, and not the "reimbursement" of travel expenses (which often exceed the amount I actually spend while travelling) nor the private tutoring I've very occasionally done (I kind-of consider the money a gift in exchange of a favour).
I rounded my top-level contributions to Main and Discussion down to zero.
Replies from: Vaniver, army1987, army1987↑ comment by Vaniver · 2013-11-26T22:33:36.494Z · LW(p) · GW(p)
nor the private tutoring I've very occasionally done (I kind-of consider the money a gift in exchange of a favour).
In the US, at least, this would be taxable income. (I find this amusing in the context of the sibling comment about tax evasion being a problem.)
↑ comment by A1987dM (army1987) · 2013-11-29T19:56:09.374Z · LW(p) · GW(p)
Other comments to my answers:
In the Living With question, what's the point of the “most of the time”? These days I probably spend more time at my girlfriend's than at my own place (though neither makes up the absolute majority of hours in an average week), but I wouldn't consider myself to be living in the former because I don't have the keys to that place, don't pay the rent there, don't do housework there (other than setting and clearing the table when I eat there), and don't spend any nontrivial amount of time there without my girlfriend. So I answered “With roommates” (where I do do all of those things), but given the “most of the time” I'm not sure that was what I was supposed to answer.
“Are you planning on having more children? Answer yes if you don't have children but want some” -- we want some children some day, but we're not planning on having children now. (I'm not even sure how I answered anymore.)
There's no such thing as a minimum wage law in my country. Rather than spending time trying to figure out what the answers should be supposed to mean in this situation, I just skipped the question.
“How would you describe your opinion of social justice, as you understand the term? See also http://en.wikipedia.org/wiki/Social_justice” -- as I understand the term before or after reading the lede of that WP article? On reading it, I realized there's a mostly kind-of sort-of sane mainstream social justice movement that social justice warriors on Tumblr and the like aren't any more representative of than the likes of Dworkin and Daly are of kind-of sort-of sane mainstream feminism, so I answered 4/5 -- but I would probably have answered somewhere around 2/5 had I not seen that WP article.
↑ comment by A1987dM (army1987) · 2013-11-25T09:40:39.384Z · LW(p) · GW(p)
Also, in the taxes question, I think that the tax revenue is too low in my country, but the tax rates are about right or even slightly too high -- it's tax evasion which is way too big (and I'm not sure how I'd go about reducing that). I averaged the answer I would have given if the question had been about tax revenues and the answer I would have given if it had been about tax rates, weighted by my probability assignments for each meaning, and picked the middle answer.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-11-26T04:55:44.781Z · LW(p) · GW(p)
From what I hear, your country is in a vicious cycle where high tax rates encourage tax evasion so the government raises taxes (and creates new taxes) to compensate which further encourages tax evasion.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-11-26T08:53:59.130Z · LW(p) · GW(p)
Yes, that's essentially correct (now that we have technocratic governments; before that, no politician dared raise taxes or reduce public spending (because either would be unpopular) sending the public debt up towards infinity).
comment by JoshuaZ · 2013-11-25T01:43:22.328Z · LW(p) · GW(p)
Regarding the preferred relationship status I'm not sure that combining uncertain with no preference was ideal. I have no strong preferences on that issue and I'm very certain of that.
Also, the religion question was difficult, in that I had to choose between "atheist but spiritual" and "atheist and not spiritual"- I'm an atheist but go to religious services regularly. But it isn't out of anything "spiritual" which is at best a hideously ill-defined term, but rather out of emotional and communal attachment.
The Singularity question is also broad, since there are so many different meanings. I interpreted it as about an intelligence explosion (partially since I consider the others to be much less likely).
Overall, this version was well-done. Thanks for putting in the effort, and thanks for everyone who helped contribute questions.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-11-25T12:51:44.063Z · LW(p) · GW(p)
"spiritual" which is at best a hideously ill-defined term
I would have interpreted it as “I realize that I am not a monkey brain, but am a timeless abstract optimization process to which this ape is but a horribly disfigured approximation”, but IIRC I was told that was not the actual meaning.
Replies from: Leonhart, kalium↑ comment by kalium · 2013-11-27T05:01:23.042Z · LW(p) · GW(p)
Who told you the actual meaning, and what was it?
I interpreted as interested in seeking out the sort of mental state associated with a "religious experience", and put down "atheist but spiritual" because of my curiosity about meditation.
comment by christopherj · 2013-12-03T01:00:56.995Z · LW(p) · GW(p)
Survey taken. I'm particularly interested in the ratio between each individual's estimate of alien life in the Milky Way and in the observable universe (not just the ratio of the separate averages).
comment by MondSemmel · 2013-12-02T17:16:24.532Z · LW(p) · GW(p)
Answered the survey, including the bonus questions. Took me 32 min altogether. Comments:
How many people are aware of their IQs? I'm from Germany and have never taken an IQ test. Is knowing about one's IQ common enough in the US that not making that question a bonus question made sense?
There were quite a few questions (e.g. estimate weekly internet consumption, estimate how often you read about ideas for self-improvement) which felt pointless - how could you possibly get accurate estimates from people, given how ambiguous these questions were, and how difficult these estimates are?
The money question: After I failed to come up with a unique passphrase, I chose cooperate and left the rest blank. This kind of stuff tempts my perfectionism, and that's a lose-lose situation for me.
comment by AlexMennen · 2013-11-23T19:36:49.319Z · LW(p) · GW(p)
I defected, and then afterwards I realized that the proportion of people cooperating could likely have a causal effect on future in-group cooperativeness among LWers. Dammit, I should have thought of that earlier.
Replies from: Benquocomment by ChrisHallquist · 2013-11-23T02:37:03.012Z · LW(p) · GW(p)
Some notes on my answers:
- I put 0 for supernatural, God, and religion, not because I think the answer is literally zero but because I didn't think Yvain wanted us answering using exponential notation.
- Some of the other probability-estimating questions deeply confused me, and I'm pretty sure I didn't base my answers on any kind of consistent assumptions. For example, I based my cryonics answer on the assumption that uploading counts without even really thinking about it, but then assigned a lower probability to anti-agathics by assuming it required keeping the original meat alive. Also, I'm really confused by the simulation hypothesis debate.
↑ comment by linkhyrule5 · 2013-11-23T03:00:23.438Z · LW(p) · GW(p)
It was noted that for our convenience, 0 is interpreted as epsilon and 100 as 100-epsilon.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-11-23T05:36:48.174Z · LW(p) · GW(p)
Ah, I saw that but wasn't familiar with the terminology.
Replies from: linkhyrule5↑ comment by linkhyrule5 · 2013-11-23T14:44:10.702Z · LW(p) · GW(p)
In case you are still unfamiliar: epsilon is a common symbol in mathematics used to designate any negligibly small number.
Replies from: JoshuaZcomment by rstarkov · 2013-12-05T02:52:21.205Z · LW(p) · GW(p)
This has been the most fun, satisfying survey I've ever been part of :) Thanks for posting this. Can't wait to see the results!
One question I'd find interesting is closely related to the probability of life in the universe. Namely, what are the chances that a randomly sampled spacefaring lifeform would have an intelligence similar enough to ours for us to be able to communicate meaningfully, both in its "ways" and in general level of smarts, if we were to meet.
Given that I enjoyed taking part in this, may I suggest that more frequent and in-depth surveys on specialized topics might be worth doing?
comment by jaime2000 · 2013-11-27T23:16:24.147Z · LW(p) · GW(p)
Surveyed; hope to receive karma per most ancient tradition.
I think your relationship preference question conflates very different clusters. You should differentiate between the kind of polyamory which is trendy in rationalist communities these days; the kind where a wealthy/high-status man is allowed to keep more than one wife (or a wife and a couple of mistresses); the kind of serial monogamy which is the default relationship model of the West and Western-influenced countries today (have lots of sexual long-term boyfriend/girlfriend relationships, marry one of these, divorce, repeat); arranged marriages in cultures in which divorce is impossible or virtually impossible; and perhaps some other empirical clusters in relationship-space which I am forgetting about.
After several years of answering the probability questions I finally grew tired of them and left them blank. You would have been more likely to get a response from me if you had used radio buttons like with the political questions (0-25%, 25-50%, 50-75%, 75-100%, or something like that).
Also, next year I would like to see more hypothetical questions. Both standard ones like the Prisoner's Dilemma, the Trolley Problem, etc... and additionally any novel ones you can think of that will reveal interesting attitudes in their responses (for example, that time you asked a cryonics question disguised as an angel reincarnation question).
Finally (and this has been driving me nuts for a couple of years), I keep answering that I was referred to LessWrong by a certain website (that is not a blog). But your referral question has no option for "website", so I write-in the name in "other". Except that when you do the analysis, you apparently lump this answer under the "blog" category, so presumably you wanted me to answer "blog" when I took the survey. But not all websites are blogs (even if all blogs are websites)! Is there any way you can reword that question?
comment by Baughn · 2013-11-27T13:34:45.966Z · LW(p) · GW(p)
I thought Europe was about a third the size it actually is, whee! On the bright side, at least I didn't claim to be confident about that.
On the god/simulation questions, I answered them using the theory that they're the same thing, but in retrospect perhaps that isn't quite what you had in mind?
comment by DanielVarga · 2013-12-01T20:18:05.005Z · LW(p) · GW(p)
Amusingly, google chrome autofill still remembered my answers from last year. This made filling the demographic part a bit faster, and allowed a little game: after giving a probability estimation I could check my answer from a year ago.
comment by Dr_Manhattan · 2013-11-29T19:47:29.190Z · LW(p) · GW(p)
I had to skip "Professional IQ test" questions, having never taken one. What's a cost-effective way to get this done?
Replies from: aletheianink↑ comment by aletheianink · 2013-12-01T00:53:40.669Z · LW(p) · GW(p)
I live in Australia and took the entrance to Mensa IQ test. I was accepted but not given a number, and was told to contact the evaluating psychologist (even though I wasn't sure how to find that out). That may be a way to do things, but since I never followed through I don't know how hard it is to get the results like that. I just put the lower bound for Mensa entrance because I know I at least got that, and mentioned it in the comments so they can discount it if it's not very useful.
comment by Crude_Dolorium · 2013-11-27T20:07:02.953Z · LW(p) · GW(p)
Apparently I don't participate in the community. I only comment once a year, to report that I took the survey.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-27T23:54:59.952Z · LW(p) · GW(p)
This causes your %positive score to be awesome. :-)
comment by blacktrance · 2013-11-22T17:50:01.077Z · LW(p) · GW(p)
I'm disappointed to see that most of my suggestions weren't used.
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2013-11-22T21:58:45.824Z · LW(p) · GW(p)
I'm sorry. I couldn't put in checkboxes where you can choose as many as you want, because my software can't process them effectively. And I am reluctant to take suggestions about clarifying or adding more options to different questions as past experience has told me that no matter how fine the gradations are people always ask to have them finer. I took your suggestion about better divisions of Christianity and I thank you for making it.
comment by aletheianink · 2013-12-01T00:51:51.282Z · LW(p) · GW(p)
I took the survey.
comment by loup-vaillant · 2013-11-30T23:19:49.844Z · LW(p) · GW(p)
I took the survey (answered nearly everything).
comment by fortyeridania · 2013-12-11T05:34:42.859Z · LW(p) · GW(p)
Taken.
I defected. If I win I'll donate it all to GiveWell's top-rated charity--so the rest of you defectors have stolen statistical cash from the world's poorest! (Unless you were planning to do the same thing.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-12-11T05:40:34.458Z · LW(p) · GW(p)
"stolen"?
comment by OneGotBetter · 2013-12-01T10:23:58.914Z · LW(p) · GW(p)
Submitted!
I really liked the questions last year about how happy you would be if you had $x. I know I missed the 1 week comment period for this survey, but Yvain, could you put those questions in again next year?
cheers
comment by dougclow · 2013-11-28T08:09:11.496Z · LW(p) · GW(p)
I took the survey.
I, like many others, was very amused at the structure of the MONETARY AWARD.
I'm not sure it was an advisable move, though. There's an ongoing argument about the effect of rewards on intrinsic motivation. But few would deny that incentives tend to incentivise the behaviour they actually reward, rather than the behaviour the rewarder would like to encourage. In this instance, the structure of the reward appears to incentivise multiple submissions, which I'm pretty sure is not something we want to happen more.
In some contexts you could rely on most of the participants not understanding how to 'game' a reward system. Here, not so much, particularly since we'd expect the participants to know more game theory than a random sample of the population, and the survey even cues such participants to think about game theory just before they submit their response. Similarly, the expectation value of gaming the system is so low that one might hope people wouldn't bother - but again, this audience is likely to have a very high proportion of people who like playing games to win in ways that exercise their intelligence, regardless of monetary reward.
So I predict there will be substantially more multiple submissions this time compared to years with no monetary reward.
I'm not sure how to robustly detect this, though: all the simple techniques I know of are thwarted by using a Google Form. If the prediction is true, we'd expect more submissions this year than last year - but that's overdetermined since the survey will be open for longer and we also expect the community to have grown. The number of responses being down would be evidence against the prediction. A lot of duplicate or near-duplicate responses aren't necessarily diagnostic, though a significant increase compared to previous years would be pretty good evidence. The presence of many near-blank entries with very little but the passphrase filled in would also be very good evidence in favour of the prediction.
(I used thinking about this as a way of distracting myself from thinking what the optimal questionnaire-stuffing C/D strategy would be, because I know that if I worked that out I would find it hard to resist implementing it. Now I think about it, this technique - think gamekeeper before you turn poacher - has saved me from all sorts of trouble over my lifespan.)
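Both signals described above could be checked mechanically once the responses are exported. A minimal sketch, assuming the responses come out of Google Forms as a CSV with one row per submission (the "Timestamp" and "Passphrase" column names are hypothetical; the real export may differ):

```python
# Sketch of two duplicate signals: exact-duplicate rows, and near-blank rows
# with little besides the passphrase filled in. Column names are assumptions.
import csv
from collections import Counter

def duplicate_report(path, passphrase_col="Passphrase"):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Exact duplicates: identical answers, ignoring the timestamp column.
    keys = Counter(
        tuple(v for k, v in sorted(r.items()) if k != "Timestamp") for r in rows
    )
    exact_dupes = sum(n - 1 for n in keys.values() if n > 1)
    # Near-blank: passphrase present but almost nothing else filled in.
    near_blank = sum(
        1 for r in rows
        if r.get(passphrase_col, "").strip()
        and sum(1 for v in r.values() if v.strip()) <= 3
    )
    return exact_dupes, near_blank
```

Near-duplicates (identical except for a question or two) would need a fuzzier comparison, e.g. counting pairs of rows that agree on all but a handful of fields.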
comment by kokotajlod · 2013-11-27T15:06:05.263Z · LW(p) · GW(p)
I enjoyed taking this survey. Thanks!
I can't wait to see the results and play with the data, if that becomes possible.
comment by ahbwramc · 2013-11-27T14:17:08.067Z · LW(p) · GW(p)
Took the survey. I was pretty confident about my answer for Europe because I thought I remembered the number, but it turns out I was wayyy off. So I looked it up and yep, sure enough, the number I was remembering was for the EU, not Europe as a whole. So, uh, whoops.
comment by Zian · 2013-11-27T06:23:16.046Z · LW(p) · GW(p)
I took the survey a few days ago and ran into trouble trying to answer the IQ test-related questions (IQ/SAT/ACT/etc.) because I would have to dig around for the answers to those questions and that required more effort than I wanted to spend on a survey.
The instructions for entering percents were also a bit confusing.
Other than that, the survey was well designed. I really appreciated how clear you were about where it was OK to stop and that it was fine to leave things blank.
Replies from: Nornagest, TheOtherDave↑ comment by Nornagest · 2013-11-27T07:57:40.323Z · LW(p) · GW(p)
It's probably fine to answer the standardized test-related questions to the best of your recollection instead of bothering to dig out paperwork. I'm fairly sure the SAT score I gave was exact, since that number ended up having moderately important consequences for me, but I may have been a point or two off on the ACT, or up to three or four on IQ.
The error bars on the wider survey are almost certainly wide enough that that level of imprecision in individual reporting is of absolutely no consequence, if your experience is anything like mine.
↑ comment by TheOtherDave · 2013-11-27T18:33:55.768Z · LW(p) · GW(p)
The instructions for entering percents were also a bit confusing.
Do you have any advice for what kind of instruction would be less confusing?
comment by Martin-2 · 2013-11-26T20:23:03.138Z · LW(p) · GW(p)
Done. I hate to get karma without posting something insightful, so here's a song about how we didn't land on the moon.
Replies from: redlizard, gjm, None↑ comment by gjm · 2013-11-26T23:44:09.145Z · LW(p) · GW(p)
Just to check whether I've understood: Do you in fact consider that song insightful? If so, what insight do you think it embodies?
(I'm trying to figure out whether you, or they, are being ironic, or whether you are seriously endorsing as insightful a song that seriously complains that the Apollo moon landings were fake. My prior for the latter is rather low, but evidence for the former just doesn't seem to be there.)
Replies from: JoshuaZ
comment by Yaakov T (jazmt) · 2013-11-26T01:15:06.758Z · LW(p) · GW(p)
I noticed a bunch of people saying that they will donate the money if they win. I find that a surprisingly irrational sentiment for LessWrong. Unless I am missing something, people are ignoring the principle of the fungibility of money. The more rational thing to do would be to commit to donating $60 whether or not you win. (If your current wealth level is a factor in your decision, such that you would only donate at the higher wealth level the prize brings, then this can be modified to donating whenever you receive a windfall of $60 from any source: your grandmother gives a generous birthday present, your coworker takes you out to lunch every day this week, you find money in the street, you get a surprisingly large bonus at work, your stocks increase more than expected, etc.)
Replies from: Jiro↑ comment by Jiro · 2013-11-26T06:42:30.072Z · LW(p) · GW(p)
People intend to donate the money when they win because they don't want the prospect of gaining money to influence their decision. Donating it is just an alternative to burning it. (It does also follow that those people who donate it for this reason must find the utility of such a donation to be very small.)
Replies from: jazmt↑ comment by Yaakov T (jazmt) · 2013-11-26T16:20:08.337Z · LW(p) · GW(p)
By 'their decision' do you mean the decision to cooperate or defect? If so you would predict people would not offer to donate if there was no choice involved (e.g. all participants in the survey automatically receive one entry)?
It does not seem like this is what people are describing e.g. http://lesswrong.com/lw/j4y/2013_less_wrong_censussurvey/a3xl http://lesswrong.com/lw/j4y/2013_less_wrong_censussurvey/a2zz and http://lesswrong.com/lw/j4y/2013_less_wrong_censussurvey/a36h
comment by Scott Alexander (Yvain) · 2013-11-23T18:55:52.403Z · LW(p) · GW(p)
I just realized I forgot a very important question I really want to know the answer to!
"What is your 90% confidence interval for the percent of people you expect to answer 'cooperate' on the prize question?"
I've added this into the survey so that people who take it after this moment can answer. If you've taken the survey already, feel free to record your guess below (if you haven't taken the survey, don't read responses to this comment)
Replies from: Manfred, threewestwinds, army1987, CoffeeStain, faul_sname, MathieuRoy, EGI, Ishaan, gjm, Nornagest, None, Eneasz↑ comment by threewestwinds · 2013-12-01T21:11:51.332Z · LW(p) · GW(p)
I failed at reading comprehension - took it as "the minimum percentage of cooperation you're 90% confident in seeing" and provided one number instead of a range. ^^;;
So... 15-85 is what I meant, and sorry for the garbage answer on the survey.
↑ comment by A1987dM (army1987) · 2013-11-24T17:39:36.707Z · LW(p) · GW(p)
V unir ab vqrn, fb V nafjrerq ng znkvzhz ragebcl (v.r. svir gb avargl-svir); vf gung evtug be fubhyq V whfg unir yrsg gurz oynax?
↑ comment by CoffeeStain · 2013-11-23T23:22:05.047Z · LW(p) · GW(p)
Right down the middle: 25-75
↑ comment by faul_sname · 2013-12-09T21:15:40.907Z · LW(p) · GW(p)
10 - 65 %
↑ comment by Mati_Roy (MathieuRoy) · 2013-12-02T06:09:12.670Z · LW(p) · GW(p)
7-80
comment by [deleted] · 2013-11-23T14:16:15.888Z · LW(p) · GW(p)
I was confused by what was meant by "supernatural". I mean, if you believe you live in a simulation, of course things that are not constrained by the physical laws of our universe might occasionally show up in it. I preferred the "ontologically basic mental entity" formulation of previous polls.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-11-23T14:51:43.097Z · LW(p) · GW(p)
Indeed, these vague questions that are meant to supposedly "capture the intuition" may have their uses, but not in this community. Instead, the vagueness just pollutes the interpretability of the results: for example, "anything which can be described at some level becomes a part of the natural laws just by including that definition, ergo nothing supernatural can exist" maps true believers and staunch Hitchensites to the same answer. Another ensuing problem: "There are parts about you which cannot be changed" can translate to "... under any conceivable circumstances" or to "given a typical life trajectory". Both are different questions with different intuitions.
comment by Mati_Roy (MathieuRoy) · 2013-11-23T07:16:25.744Z · LW(p) · GW(p)
What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe?
I've included our potential simulators in this.
What is the probability that any of humankind's revealed religions is more or less correct?
I've included religions such as venturist.
What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time?
I've put answers both including and excluding the use of cryonics.
I estimate that 90% of people will have defected.
I wouldn't mind if my survey wasn't anonymous.
comment by CoffeeStain · 2013-11-23T00:04:36.602Z · LW(p) · GW(p)
I defected, because I'm indifferent to whether the prize-giver or prize-winner has 60 * X dollars, unless the prize-winner is me.
Replies from: Nornagest↑ comment by Nornagest · 2013-11-23T00:21:36.282Z · LW(p) · GW(p)
I cooperated, because I'm more or less indifferent to monetary prizes of less than twenty dollars or so, and more substantial prizes imply widespread cooperation. I view it as unlikely that I can get away with putting myself into a separate reference class, so I might as well contribute to that.
Replies from: CoffeeStain↑ comment by CoffeeStain · 2013-11-23T00:30:02.961Z · LW(p) · GW(p)
Hmm, come to think of it, deciding the size of the cash prize (for it being interesting) is probably worth more to me as well. I'll just have to settle for boring old cash.
comment by ChrisHibbert · 2013-11-30T19:56:01.198Z · LW(p) · GW(p)
I don't answer survey questions that ask about race, but if you met me you'd think of me as white male.
I'm more strongly libertarian (but less party affiliated) than the survey allowed me to express.
I have reasonably strong views about morality, but had to look up the terms "Deontology", "Consequentialism", and "Virtue Ethics" in order to decide that of these "consequentialism" probably matches my views better than the others.
Probabilities: 50,30,20,5,0,0,0,10,2,1,20,95.
On "What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions?", I had to parse several words very carefully, and ended up deciding to read "significant" as "measurable" rather than "consequential". For consequential, I would have given a smaller value.
I answered all the way to the end of the super bonus questions, and cooperated on the prize question.
comment by aspera · 2013-11-25T18:02:23.279Z · LW(p) · GW(p)
Nice job on the survey. I loved the cooperate/defect problem, with calibration questions.
I defected, since a quick expected value calculation makes it the overwhelmingly obvious choice (assuming no communication between players, an assumption I am explicitly violating right now). Judging from comments, it looks like my calibration lower bound is going to be way off.
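For anyone curious, the calculation can be sketched in a few lines. The payoff rules here are assumed for illustration (one entry drawn uniformly at random; a pool of $60 scaled by the cooperator fraction; a defecting winner's prize doubled) and may not match the survey's exact wording:

```python
# Hypothetical payoff rules, assumed for illustration only:
# one of n respondents is drawn uniformly at random; the pool is
# $60 times the fraction of cooperators; a defecting winner gets double.

def expected_prize(my_choice, n, others_cooperating):
    """Expected winnings for one respondent among n total."""
    coop = others_cooperating + (1 if my_choice == "C" else 0)
    pool = 60 * coop / n
    payout = pool if my_choice == "C" else 2 * pool
    return payout / n  # probability 1/n of being the one drawn

n = 1000
for frac in (0.2, 0.5, 0.8):
    others = int(frac * (n - 1))
    print(f"{frac:.0%} cooperators: EV(C)={expected_prize('C', n, others):.4f},"
          f" EV(D)={expected_prize('D', n, others):.4f}")
```

Under these assumed rules, defecting roughly doubles your expected prize while your own cooperation raises the pool by only 1/n, so for large n defection dominates unless your choice is correlated with everyone else's.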
comment by MalcolmOcean (malcolmocean) · 2013-12-29T05:18:26.129Z · LW(p) · GW(p)
Completed survey.
Feedback: I feel like it would be valuable to distinguish "I'm planning to have more children in <2 years" from "I'd like to someday have kids".
"Am I a student?" and "how do I make money?" seem like separate questions to me. Being a student is sort of an occupation, but it's not a way to earn money. I am both a student and self-employed, and for about 6 months of the year I do internships, which are for-profit work.
It would be awesome if Time of LW included both a mean and median time or something, also perhaps a total time spent on it. For me it varies hugely, and I really had no idea what to put. Some weeks I spend many hours on it, other weeks 0.
comment by RussellThor · 2013-12-01T02:38:03.011Z · LW(p) · GW(p)
Yes I did the survey. PW: one two.
Firstly, I need to say that giving probabilities for things that are either very unlikely or very poorly understood is not very helpful. For example, aliens: I just don't know. And as others have pointed out, God or simulation master: are they the same thing? Also, a probability for our being Boltzmann brains (or something similarly weird) is ill-defined, as it involves summing over a multiverse which is uncountably infinite. For the simulation hypothesis I think we simply can't give a sensible number.
On a more general note, for friendly/unfriendly AI I think more attention should go to the social and human aspect. I don't see what maths proofs have to offer here. We already know you can potentially get bad AI: take, say, an evil person, give them a brain upload and self-modification powers, and they may well modify themselves to become even more evil and more powerful, turn off their conscience, and so on. What the boundaries of this are we don't know, and we need actual experiments to find out. Also, how one person behaves and how a society of self-modifiers behaves could be very different matters. What we really want to know is whether a large range of people with different values converge or diverge when given these powers.
comment by Brendon_Wong · 2013-11-29T08:25:44.744Z · LW(p) · GW(p)
Answered all questions, I hope I helped!
I'm very curious to see how the monetary reward works out.
comment by V_V · 2013-11-24T20:56:09.194Z · LW(p) · GW(p)
What about being the ultimate defector and submitting multiple times to increase your chances of winning (and screwing up the survey results as a side effect)?
Replies from: Benquo↑ comment by Benquo · 2013-11-25T22:02:07.992Z · LW(p) · GW(p)
Hmm, or you could just do four times as many "cooperate" dummy entries, similarly increase your chance of winning, and increase the size of the prize as well. Are "No I will screw things up" answers counted towards the PD?
Replies from: V_V↑ comment by V_V · 2013-11-25T22:09:24.460Z · LW(p) · GW(p)
But that would still be a defection on a meta-level towards the people who played a single time.
If you account for the possibility of playing multiple times, this game is an example of the tragedy of the commons.
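The arithmetic behind entry-stuffing is straightforward. Assuming the winner is drawn uniformly at random from all entries (an assumption about the draw, not a statement of the survey's rules), a sketch:

```python
def win_probability(my_entries, other_entries):
    """Chance the winning entry is mine, given a uniform random draw."""
    return my_entries / (my_entries + other_entries)

# Stuffing k entries into a large pool multiplies your odds roughly k-fold,
# diluting everyone else's chances in the process:
single = win_probability(1, 999)
stuffed = win_probability(5, 999)
print(single, stuffed, stuffed / single)
```

This is why extra entries are a defection against single-entry players whether they are marked "cooperate" or "defect": the gain comes from shrinking everyone else's share of the draw.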
comment by jdgalt · 2013-11-23T02:49:58.049Z · LW(p) · GW(p)
Did that.
Re. relationships: The only people I've heard use "polyamorous" are referring to committed, marriage-like relationships involving more than two adults. There ought to be a category for those of us who don't want exclusivity with any number.
I've left most of the probability questions blank, because I don't think it is meaningfully possible to assign numbers to events I have little or no quantitative information about. For instance, I'll try P(Aliens) when we've looked at several thousand planets closely enough to be reasonably sure of answers about them.
In addition, I don't think some of the questions can have meaningful answers. For example, the "Many Worlds" interpretation of quantum mechanics, if true, would have no testable (falsifiable) effect on the observable universe, and therefore I consider the question to be objectively meaningless. The same goes for P(Simulation), and probably P(God).
P(religion) also suffers from vagueness: what conditions would satisfy it? Not only are some religions vaguely defined, but there are many belief systems that are arguably religions or not religions. Buddhism? Communism? Atheism?
The singularity is vague, too. (And as I usually hear it described, I would see it as a catastrophe if it happened. The SF story "With Folded Hands" explains why.)
Extra credit items:
Great Stagnation -- I believe that the rich world's economy IS in a great stagnation that has lasted for most of a century, but NOT for the reasons Cowen and Thiel suggest. The stagnation is because of "progressive" politics, especially both the welfare state and overregulation/nanny-statism, which destroy most people's opportunities to innovate and profit by it. This is not a trivial matter, but a problem quite comparable to those listed in the "catastrophe" section, and one which may very well prevent a solution to a real catastrophe if we become headed for one. (Both parties' constant practice of campaigning-by-inventing-a-new-phony-emergency-every-month makes the problem worse, too: most rational people now dismiss any cry of alarm as the boy who cried wolf. Certainly the environmental movement, including its best known "scientists", have discredited themselves this way.) This is why the struggle for liberty is so critical.
Replies from: fubarobfusco, ciphergoth, MugaSofer, Daniel_Burfoot, None, Vaniver↑ comment by fubarobfusco · 2013-11-23T18:36:00.323Z · LW(p) · GW(p)
Re. relationships: The only people I've heard use "polyamorous" are referring to committed, marriage-like relationships involving more than two adults. There ought to be a category for those of us who don't want exclusivity with any number.
Huh. This is what I've usually heard referred to as "polyfidelity". The poly social circles that I'm familiar with encompass also (among others) people who have both "marriage-like" and "dating-like" relationships, people who have multiple dating-like relationships and no marriage-like ones, and people who have more complicated arrangements.
P(religion) also suffers from vagueness: what conditions would satisfy it? Not only are some religions vaguely defined, but there are many belief systems that are arguably religions or not religions. Buddhism? Communism? Atheism?
The question is "What is the probability that any of humankind's revealed religions is more or less correct?"
"Revealed religion", to my interpretation, means "a religion whose teachings are presented as revelation from divine or supernatural entities". (See Wikipedia, where "revealed religion" links to the article on religious revelation.)
This would not include Communism or atheism. Buddhism (as usual) is complicated, since there are sects of Buddhism that make what sure sound to me like claims of revelation, while others sound more evidence-based. For that matter, it might not include Scientology, which presents itself as scientific discovery by human genius, rather than divine revelation, at least at the lower levels.
↑ comment by Paul Crowley (ciphergoth) · 2013-11-23T09:12:05.014Z · LW(p) · GW(p)
My circle uses polyamorous to include wholly non-exclusive relationships; to indicate exclusivity we'd say "polyfidelity".
↑ comment by MugaSofer · 2013-11-23T15:30:26.765Z · LW(p) · GW(p)
I've left most of the probability questions blank, because I don't think it is meaningfully possible to assign numbers to events I have little or no quantitative information about. For instance, I'll try P(Aliens) when we've looked at several thousand planets closely enough to be reasonably sure of answers about them.
I left them blank myself because I haven't developed the skill to do it, but the obvious other interpretation ... are you saying it's in-principle impossible to operate rationally under uncertainty?
In addition, I don't think some of the questions can have meaningful answers. For example, the "Many Worlds" interpretation of quantum mechanics, if true, would have no testable (falsifiable) effect on the observable universe, and therefore I consider the question to be objectively meaningless. The same goes for P(Simulation), and probably P(God).
Do you usually consider statements you don't anticipate being able to verify meaningless?
The obvious next question would be to ask if you're OK with your family being tortured under the various circumstances this would suggest you would be.
The singularity is vague, too. (And as I usually hear it described, I would see it as a catastrophe if it happened. The SF story "With Folded Hands" explains why.)
I believe I've read that story. Asimov-style robots prevent humans from interacting with the environment because they might be harmed and that would violate the First Law, right?
Could you go into more detail regarding how as you "usually hear it described" it would be a "catastrophe if it happened"? I can imagine a few possibilities but I'd like to be clearer on the thoughts behind this before commenting.
The stagnation is because of "progressive" politics, especially both the welfare state and overregulation/nanny-statism, which destroy most people's opportunities to innovate and profit by it.
Hmm. On the one hand, political stupidity does seem like a very serious problem that needs fixing and imposes massive opportunity costs on humanity. On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
Certainly the environmental movement, including its best known "scientists", have discredited themselves this way.
I don't know, I find most people don't identify such a pattern and thus avoid a BWCW effect; while most people above a certain standard of rationality are able to take advantage of evidence, public-spirited debunkers and patterns to screen out most of the noise. Your mileage may vary, of course; I tend not to pay much attention to environmental issues except when they impinge on something I'm already interested in, so perhaps this is harder at a higher volume of traffic.
Replies from: TheOtherDave, jdgalt↑ comment by TheOtherDave · 2013-11-23T16:38:28.646Z · LW(p) · GW(p)
On the other hand, this sounds like a tribal battle-cry
Upvoted entirely for this phrase.
↑ comment by jdgalt · 2014-12-01T23:21:22.234Z · LW(p) · GW(p)
I've left most of the probability questions blank, because I don't think it is meaningfully possible to assign numbers to events I have little or no quantitative information about. For instance, I'll try P(Aliens) when we've looked at several thousand planets closely enough to be reasonably sure of answers about them.
I left them blank myself because I haven't developed the skill to do it, but the obvious other interpretation ... are you saying it's in-principle impossible to operate rationally under uncertainty?
No, I just don't think I can assign probability numbers to a guess. If forced to make a real-life decision based on such a question then I'll guess.
In addition, I don't think some of the questions can have meaningful answers. For example, the "Many Worlds" interpretation of quantum mechanics, if true, would have no testable (falsifiable) effect on the observable universe, and therefore I consider the question to be objectively meaningless. The same goes for P(Simulation), and probably P(God).
Do you usually consider statements you don't anticipate being able to verify meaningless?
No, and I discussed that in another reply.
The obvious next question would be to ask if you're OK with your family being tortured uner the various circumstances this would suggest you would be.
I've lost the context to understand this question.
The singularity is vague, too. (And as I usually hear it described, I would see it as a catastrophe if it happened. The SF story "With Folded Hands" explains why.)
I believe I've read that story. Asimov-style robots prevent humans from interacting with the environment because they might be harmed and that would violate the First Law, right?
Yes. Eventually most human activity is banned. Any research or exploration that might make it possible for a human to get out from under the bots' rule is especially banned.
Could you go into more detail regarding how as you "usually hear it described" it would be a "catastrophe if it happened"? I can imagine a few possibilities but I'd like to be clearer on the thoughts behind this before commenting.
The usual version of this I hear is from people who've read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated -- maybe not perfectly, but to an arbitrarily high difficulty of disproving it -- by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
The stagnation is because of "progressive" politics, especially both the welfare state and overregulation/nanny-statism, which destroy most people's opportunities to innovate and profit by it.
Hmm. On the one hand, political stupidity does seem like a very serious problem that needs fixing and imposes massive opportunity costs on humanity. On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn't open to reason? You need the sales skill of a demagogue, which I haven't got.
Certainly the environmental movement, including its best known "scientists", have discredited themselves this way.
I don't know, I find most people don't identify such a pattern and thus avoid a BWCW effect;
What's that?
while most people above a certain standard of rationality are able to take advantage of evidence, public-spirited debunkers and patterns to screen out most of the noise. Your mileage may vary, of course; I tend not to pay much attention to environmental issues except when they impinge on something I'm already interested in, so perhaps this is harder at a higher volume of traffic.
One of the ways in which the demagogues have taken control of politics is to multiply political entities and the various debates, hearings, and elections they hold until no non-demagogue can hope to influence more than a vanishingly small fraction of them. This is another very common, nasty tactic that ought to have a name, although "Think globally, act locally" seems to be the slogan driving it.
Replies from: MugaSofer, TheOtherDave↑ comment by MugaSofer · 2014-12-25T07:23:53.621Z · LW(p) · GW(p)
The obvious next question would be to ask if you're OK with your family being tortured under the various circumstances this would suggest you would be.
I've lost the context to understand this question.
How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?
I mean, it's unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)
The usual version of this I hear is from people who've read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated -- maybe not perfectly, but to an arbitrarily high difficulty of disproving it -- by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
Oh. That's an important distinction, yeah, but standard Singularity arguments suggest that by the time that would come up humans would no longer be making that decision anyway.
Um, if something is smart enough to solve every problem a human can, how relevant is the distinction? I mean, sure, it might (say) be lying about its preferences, but surely it'll have exactly the same impact on society regardless?
On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn't open to reason?
ahem ... I'm ... actually from the other tribe. Pretty heavily in favor of a Nanny Welfare State, and although I'm not sure I'd go quite so far as to say it's "obvious" and anyone who disagrees must be "senseless ... not open to reason".
Care to trade chains of logic? A welfare state, in particular, seems kind of really important from here.
I think the trouble with these sort of battle-cries is that they lead to, well, assuming the other side must be evil strawmen. It's a problem. (That's why political discussion is unofficially banned here, unless you make an effort to be super neutral and rational about it.)
What's that?
Ahh ... "Boy Who Cried Wolf". Sorry, that was way too opaque, I could barely parse it myself. Not sure why I thought that was a good idea to abbreviate.
Replies from: jdgalt↑ comment by jdgalt · 2015-02-28T19:26:53.918Z · LW(p) · GW(p)
The obvious next question would be to ask if you're OK with your family being tortured under the various circumstances this would suggest you would be.
I've lost the context to understand this question.
How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?
I mean, it's unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)
I don't like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.
The usual version of this I hear is from people who've read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated -- maybe not perfectly, but to an arbitrarily high difficulty of disproving it -- by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
Oh. That's an important distinction, yeah, but standard Singularity arguments suggest that by the time that would come up humans would no longer be making that decision anyway.
Um, if something is smart enough to solve every problem a human can, [how] relevant is the distinction? I mean, sure, it might (say) be lying about its preferences, but ... surely it'll have exactly the same impact on society, regardless?
That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know he/they are behaving fairly? They can't. This is another reason I'd prefer that the capability continue not to exist.
On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn't open to reason?
ahem ... I'm ... actually from the other tribe. Pretty heavily in favor of a Nanny Welfare State, and although I'm not sure I'd go quite so far as to say it's "obvious" and anyone who disagrees must be "senseless ... not open to reason".
Care to trade chains of logic? A welfare state, in particular, seems kind of really important from here.
I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don't accept.
When it comes down to it, ethics are entirely a matter of taste (though I would assert that they're a unique exception to the old saw "there's no accounting for taste" because a person's code of ethics determines whether he's trustworthy and in what ways).
I think the trouble with these sort of battle-cries is that they lead to, well, assuming the other side must be evil strawmen. It's a problem. (That's why political discussion is unofficially banned here, unless you make an effort to be super neutral and rational about it.)
One can't really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.
For the same reason, I never expect judges, journalists, or historians to be "unbiased" because I don't believe true "unbiasedness" is possible even in principle.
Replies from: MugaSofer↑ comment by MugaSofer · 2015-03-02T11:50:21.002Z · LW(p) · GW(p)
I don't like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.
Actually, with our expanding universe you can get starships far enough away that the light from them will never reach you.
But I see we agree on this.
That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know he/they are behaving fairly? They can't. This is another reason I'd prefer that the capability continue not to exist.
But is it possible to impersonate intelligence? Isn't anything that can "fake" problem-solving, goal-seeking behaviour sufficiently well intelligent (that is, sapient; but potentially not sentient, which could be a problem.)
I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don't accept.
When it comes down to it, ethics are entirely a matter of taste (though I would assert that they're a unique exception to the old saw "there's no accounting for taste" because a person's code of ethics determines whether he's trustworthy and in what ways).
I strongly disagree with this claim, actually. You can definitely persuade people out of their current ethical model. Not truly terminal goals, perhaps, but you can easily obfuscate even those.
What makes you think that "individual rights" are a thing you should care about? If you had to persuade a (human, reasonably rational) judge that they're the correct moral theory, what evidence would you point to? You might change my mind.
One can't really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.
Oh, everyone is misguided. (Hence the name of the site.) But they generally aren't actual evil strawmen.
↑ comment by TheOtherDave · 2014-12-01T23:59:32.510Z · LW(p) · GW(p)
a self-aware entity can be simulated -- maybe not perfectly, but to an arbitrarily high difficulty of disproving it -- by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
So, there's two pieces there, and I'm not sure how those pieces interact on your view.
Like, if we had a highly reliable test for true self-awareness, but it turned out that interest groups could manufacture large numbers of genuinely self-aware systems that would reliably vote and/or fight for their side of political questions, would that be better? Why?
Conversely, if we can't reliably test for true self-awareness, but we don't have a reliable way to manufacture apparently-self-aware systems that vote or fight a particular way, would that be better? Why?
Replies from: jdgalt↑ comment by jdgalt · 2015-02-28T19:45:30.740Z · LW(p) · GW(p)
I would consider the genuinely self-aware systems to be real people. I suppose it's a matter of ethics (and therefore taste) whether or not that's important to you.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2015-03-06T03:19:14.391Z · LW(p) · GW(p)
I don't understand how that answers my question, or whether it was intended to.
I mean, OK, let's say the genuinely self-aware systems are real people. Then we can rephrase my question as:
Like, if we had a highly reliable test for real personhood, but it turned out that interest groups could manufacture large numbers of real people that would reliably vote and/or fight for their side of political questions, would that be better? Why?
Conversely, if we can't reliably test for real personhood, but we don't have a reliable way to manufacture apparently real people that vote or fight a particular way, would that be better? Why?
But I still don't know your answer.
I also disagree that matters of ethics are therefore matters of taste.
Replies from: Jiro↑ comment by Jiro · 2015-03-06T10:49:48.055Z · LW(p) · GW(p)
We have votes because we want to maximize utility for the voters. Allowing easily manufactured people to vote creates incentives to manufacture people.
So the answer to this depends on your belief about utilitarianism. If you aggregate utility in such a way that adding more people increases utility in an unbounded way, then you should do whatever you can to encourage the creation of more people regardless of whether their votes cause harm to existing people, so it is good to create incentives for their creation and you should let them vote. (You also get the Repugnant Conclusion.) If you aggregate utility in some way that produces diminishing returns and avoids the Repugnant Conclusion, then it is possible that at some point creating more new people is a net negative. If so, you'd be better off precommitting to not let them vote because not letting them vote prevents them from being created, increasing utility.
Note: Most people, insofar as they can be described as utilitarian at all, will fall into the second category (with the precommitment being enforced by their inherent inability to care much for people who they cannot see as individuals).
This also works when you substitute "allowing unlimited immigration" for "creating unlimited amounts of people". Your choice of how to aggregate utility also affects whether it is good to trade off utility among already existing people just like it affects whether it is good to create new people.
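Jiro's contrast between the two aggregation rules can be made concrete with a toy sketch (the welfare numbers below are invented purely for illustration): total-utility aggregation favors an enormous population of barely-worthwhile lives over a small flourishing one, which is the Repugnant Conclusion, while average-utility aggregation does not.

```python
# Toy comparison of two ways to aggregate utility across a population.
# The welfare numbers are invented for illustration only.
def total_utility(pop):
    return sum(pop)

def average_utility(pop):
    return sum(pop) / len(pop)

happy_few = [10.0] * 1_000        # small population, high welfare each
drab_many = [0.1] * 2_000_000     # huge population, lives barely worth living

# Total aggregation prefers the huge drab population (the Repugnant Conclusion)...
assert total_utility(drab_many) > total_utility(happy_few)
# ...while average aggregation prefers the small happy one.
assert average_utility(happy_few) > average_utility(drab_many)
```

Which rule one adopts then determines, as Jiro says, whether creating (or enfranchising) ever more people looks like a net gain or a net loss.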
Replies from: TheOtherDave↑ comment by TheOtherDave · 2015-03-07T03:18:38.244Z · LW(p) · GW(p)
Yes, agreed with all this.
And yes, like most people, I don't have a coherent understanding of how to aggregate intersubjective utility but I certainly don't aggregate it in ways that cause me to embrace the Repugnant Conclusion. (By contrast, on consideration I do seem to embrace Utility Monsters, distasteful as the prospect feels on its face.)
Your choice of how to aggregate utility also affects whether it is good to trade off utility among already existing people just like it affects whether it is good to create new people.
Well, not "just like." That is, I might have a mechanism for aggregating utility that treats N existing people in other countries differently from N people who don't exist, and makes different tradeoffs for the two cases. But, yes, those are both examples of tradeoffs which a utility-aggregating mechanism affects.
↑ comment by Daniel_Burfoot · 2013-11-23T16:02:30.774Z · LW(p) · GW(p)
campaigning-by-inventing-a-new-phony-emergency-every-month
This phenomenon is very real and should have a catchy phrase to describe it.
Replies from: TheOtherDave, Lumifer, simplicio↑ comment by TheOtherDave · 2013-11-23T16:35:01.781Z · LW(p) · GW(p)
In my workplace we call it "crisis management," fully aware of the ambiguity of that phrase.
↑ comment by simplicio · 2013-11-26T14:49:15.158Z · LW(p) · GW(p)
Similar to the "shock doctrine", but that is an explicitly leftist idea so it probably doesn't work to name the generalized phenomenon.
↑ comment by Vaniver · 2013-11-23T04:24:23.519Z · LW(p) · GW(p)
Great Stagnation -- I believe that the rich world's economy IS in a great stagnation that has lasted for most of a century, but NOT for the reasons Cowen and Thiel suggest. The stagnation is because of "progressive" politics, especially both the welfare state and overregulation/nanny-statism, which destroy most people's opportunities to innovate and profit by it.
I get the impression that this is actually a core part of Thiel's argument. Consider this, for example.
comment by b_sen · 2013-12-26T04:09:32.975Z · LW(p) · GW(p)
Delurked and taken (finally); this is my first comment. I'd been wanting to take this survey for a while, but offline matters kept me away until now. At least I got in a good stab at most of the extra credit questions.
I second the following suggestions:
- Clarify the income question on tax status (pre-tax / post-tax / pre-some taxes and post-others) and individual vs. household. I mention the third tax possibility here because some taxes are deducted by the employer, so employees don't see that money in their paychecks. If the intended question is along the lines of "Other than tax refunds, how much money do you / your household receive (that you can use in a budget and could theoretically spend, although some of it may be set aside for further taxes) in a year?", then this matters.
- Add a "None" option to the mental illness question to distinguish between "none" and "didn't answer". Checkboxes would be nice, since mental illnesses can interact with each other, but Yvain has stated that he can't put them in the survey. I will mention this anyway in case checkboxes become viable for future versions of the survey.
I will also make a further suggestion, although I understand that it may be too onerous to implement: have an option to make only part of one's responses private. I mention this because I started by choosing the "public but anonymous" option, but switched to "private" once I got to the point that all my responses together could probably identify me out of the dataset if someone was moderately determined to do so and knew a few specific facts about me.
In my case, making a single extra credit section private (showing it as if I hadn't answered in the public dataset) would have been enough for me to be comfortable putting the remaining responses in the public dataset. That section has data that I don't mind giving Yvain and CFAR, but don't want to leave readily available to potential future agents trying to identify my responses in the dataset. I would prefer to make only the single section private, but I did not have that option available. I am also curious if other people are in the same boat.
Should I win, I precommit to spending the prize on myself, as per Yvain's stated wishes for the prize.
comment by Vika · 2013-12-20T16:01:34.293Z · LW(p) · GW(p)
Took the full survey (ouch, my calibration is terrible, especially if I misunderstand the question...). I find it a bit frustrating that it asks only about the SAT and ACT (which I haven't taken), and not, for example, the GRE. Otherwise it was really fun without taking very long, thanks Yvain!
comment by ColonelMustard · 2013-12-09T05:04:45.629Z · LW(p) · GW(p)
Took the survey. I assume from the phrasing that 'country' means where I'm "from" rather than where I currently reside (there is more room for uncertainty about the former than about the latter). Might be interesting to put both questions.
Replies from: radical_negative_one↑ comment by radical_negative_one · 2013-12-09T06:41:20.347Z · LW(p) · GW(p)
The survey's exact wording is:
If multiple possible answers, please choose the one you most identify with.
So, if you for example grew up in France and currently live in the USA, and you thought of yourself primarily as being "from France" then France would be the correct answer. If you thought of yourself mainly as American, then USA would be the correct answer.
In other words, neither answer would be "wrong".
Replies from: ColonelMustard↑ comment by ColonelMustard · 2013-12-09T12:24:16.228Z · LW(p) · GW(p)
"Where are you from" and "where do you live now" are different questions. The first of these has multiple answers for a lot of people I know; the second probably doesn't. I would suggest both questions be asked next year.
comment by Bill_McGrath · 2013-12-07T12:29:47.773Z · LW(p) · GW(p)
Survey taken!
I tried it a few days ago and it didn't submit as far as I can tell - in between I looked up the answer to the calibration question, but I answered as I did originally (NAILED IT anyway).
Survey gripe: I answered "left-handed" for the handedness question, but I only really write with my left hand, and do everything else with my right. My left hand might be a little more dextrous but my right is definitely stronger. As such I'd see myself as cross-dominant rather than ambidextrous; is this something that could be included on future surveys or is it not useful for the kind of data you're collecting?
comment by goatherd · 2013-11-30T00:16:14.792Z · LW(p) · GW(p)
For the questions about the many worlds hypothesis, and whether we are living in a simulation, it seems to me that there is no way to know the truth, because the world would look just the same, but it may sometimes be useful to think as if they were true? Or am I just missing something fundamental?
I enjoyed reading about the MONETARY REWARD.
Replies from: hyporational↑ comment by hyporational · 2013-11-30T13:12:44.514Z · LW(p) · GW(p)
For simulations, some people think it's possible to know. The argument is based on anthropics. The Quantum Physics Sequence might make you more certain of the MWI, but I haven't read it. You could also base your probability of MWI on Occam's Razor somewhat, since it seems to be the simpler interpretation.
comment by pgbh · 2013-11-29T16:38:43.691Z · LW(p) · GW(p)
Took the survey.
I chose to defect. Defecting maximizes the expected payoff for me personally, and the expected overall payoff isn't affected by my decision since Yvain just keeps whatever money isn't claimed.
An interesting variant would have been for Yvain to throw away whatever money was lost due to defections, or donate it to some organization most don't like. In that case I would probably have cooperated.
comment by anon (ExaminedThought) · 2013-11-28T02:06:04.451Z · LW(p) · GW(p)
removed
Replies from: army1987, aletheianink, TheOtherDave↑ comment by aletheianink · 2013-12-01T00:55:18.721Z · LW(p) · GW(p)
I don't know if this helps, but I felt the same way, and took the Mensa entrance test to find out my IQ. Turns out that they don't actually give you the results, just tell you if you've entered ... and at the moment, that's satisfied my desire to know without feeling unhappy it's not high enough.
↑ comment by TheOtherDave · 2013-11-28T02:23:44.609Z · LW(p) · GW(p)
If it's lower than I want...
What do you want your IQ to be?
Replies from: ExaminedThought↑ comment by anon (ExaminedThought) · 2013-11-28T02:26:59.634Z · LW(p) · GW(p)
removed
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-28T02:54:34.538Z · LW(p) · GW(p)
(nods) So, would you rather know what it is now, so you can either be content at having achieved your goal or know how much you have to increase it by to achieve your goal? Or would you rather remain ignorant?
comment by plex (ete) · 2013-12-14T07:59:26.794Z · LW(p) · GW(p)
Done, including most bonus questions. Missed the IQ ones since I've never taken that test, and defected before reading the comment saying the money was coming from someone's pocket rather than Less Wrong (order of preference for where the money goes: my pocket > Less Wrong > random Less Wrong survey completer). Though I'd probably still defect knowing it's coming from Yvain... Ideally, next time you could find a source of prize money whom everyone wants to take money from?
comment by roryokane · 2013-11-30T19:20:49.616Z · LW(p) · GW(p)
I took the survey.
I chose to Defect on the monetary reward prize question. Why?
- I realized that the prize money is probably contributed by Yvain. And if $60-or-less were to be distributed between a random Less Wrong member and Yvain, I would rather as much of it as possible go to Yvain. This is because I know Yvain is smart and writes interesting posts, so the money could help him to contribute something to the world that another could not. Answering Defect lowers the amount of prize money, making Yvain keep more of it.
- Also, I would rather I have the $60-or-less than any other Less Wrong member, and answering Defect gets me a bigger chance of that happening.
Edit: pgbh had the same reasoning.
comment by ChristianKl · 2013-11-29T08:04:22.327Z · LW(p) · GW(p)
I took the survey. I'm a bit sad that there are fewer questions than last year, but in total I like it.
comment by drnickbone · 2013-11-28T00:05:24.756Z · LW(p) · GW(p)
Taken the survey, for the second time. Doesn't feel like a year...
I'm a bit curious about the Prisoners' Dilemma question. I co-operated, as my rationale was "Well, I'm unlikely to win anyway, and I don't really want to spoil the prize for whoever does win, so C". Not sure if that counts as a true PD...
Replies from: Baughn
comment by ChrisHallquist · 2013-11-23T02:26:12.373Z · LW(p) · GW(p)
I chose defect, and plan on donating the money to MIRI if I win.
I would also like to hereby precommit to (assuming a repeat of the monetary award next year), donating the money to x-risk reduction next year if I win, and also choosing "cooperate" if a large number of people make a similar precommitment and "defect" otherwise.
Replies from: AlexMennen, lmm↑ comment by AlexMennen · 2013-11-23T02:30:36.451Z · LW(p) · GW(p)
I would also like to hereby precommit to (assuming a repeat of the monetary award next year), donating the money to x-risk reduction next year if I win, and also choosing "cooperate" if a large number of people make a similar precommitment and "defect" otherwise.
I make the same precommitment.
Edit: Partially retracted. I'm committing to donating the prize to x-risk reduction if I win, but not to defecting next year if not many other people also make that precommitment. See here.
comment by [deleted] · 2013-12-22T23:39:22.419Z · LW(p) · GW(p)
I took the survey! It was certainly the most interesting online information-gathering survey I've ever taken, mostly because of the end; in retrospect, not sure what I expected.
comment by mathnerd314 · 2013-12-21T02:59:55.113Z · LW(p) · GW(p)
I took the survey; apparently I get karma for that? :-)
comment by gnomicperfect · 2013-12-15T03:17:47.766Z · LW(p) · GW(p)
I took the survey. Been lurking for about two years sans account.
I guess this makes me part of the borganism now.
comment by jknapka · 2013-12-12T20:43:15.344Z · LW(p) · GW(p)
Survey taken. I hope I didn't break it - I am a committed atheist, but also an active member of a Unitarian Universalist congregation, and I indicated that in spite of the explicit request for atheists not to answer the denomination question. (Atheist UUs are very common, and people on the "agnostic or less religious" side of the spectrum probably make up around 40% of the UU congregations I'm familiar with.)
comment by PhilSchwartz · 2013-12-10T03:36:01.528Z · LW(p) · GW(p)
Took the survey. Feels good to be posting a comment again, think it's potentially a way to get people to overcome the tendency to just lurk.
comment by westopheles (ww2) · 2013-12-10T00:18:06.500Z · LW(p) · GW(p)
I've completed the survey
comment by ppp · 2013-12-31T06:01:10.970Z · LW(p) · GW(p)
Finally registered, hereby delurked, and completed the survey with over 24 hours to spare! Looking forward to the results. Thanks for doing this.
I understand how tricky putting together a good survey can be, never mind having to make it play well on Google Forms. Probably the most vexing item for me was the Akrasia:Illness one. Because of the high levels of comorbidity among the choices, I can't imagine I was the only one wondering how to "select the most important".
I still have a long way to go in absorbing even a surface level of everything here, and relish the upcoming year for its potential to illuminate and enlighten. Some of the questions have already sent me off on further explorations.
Oh, also, igtheism/ignosticism would have been a nice choice to have (I've only recently discovered the terms, and found they matched pleasantly with my views on the subject).
comment by Alejandro1 · 2013-12-29T17:39:16.126Z · LW(p) · GW(p)
Did it.
comment by [deleted] · 2013-12-15T09:15:32.067Z · LW(p) · GW(p)
comment by ChrisHallquist · 2013-11-23T20:27:02.056Z · LW(p) · GW(p)
One other comment on the survey: I totally surprised myself by describing my political views as "socialist," once I saw "socialist" defined as "for example Scandinavian countries: socially permissive, high taxes, major redistribution of wealth." I'm not actually very clear on the details of how Scandinavian countries differ from, say, Britain, France, or Germany, but insofar as I'm inclined to support things like guaranteed basic income the shoe seems to fit. I wonder if the wording will result in more socialists on this year's survey.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-11-24T17:34:28.042Z · LW(p) · GW(p)
AFAICT the wording is the same as in all the previous years' surveys.
comment by knb · 2013-11-23T20:05:13.590Z · LW(p) · GW(p)
Possible Survey Spoiler. You may want to take the survey before reading this.
I'm not sure if monetary prize question was intended to serve as a reimagining of the prisoner's dilemma, but that seems to be the way people are interpreting it in these comments. I would like to point out that the cooperate/defect question is fundamentally different from the original Prisoner's Dilemma because the total amount of prison time in the original scenario actually is dependent on your cooperation or defection. In this game, the total amount of money is unchanged by our actions.
Defecting reassigns more money to yourself and Yvain (or whoever is paying for the prize.) Cooperating assigns more money to other survey takers. I don't really see why anyone should prefer giving money to random other survey takers rather than themselves or Yvain.
In future surveys, this could be corrected for (assuming this is intended to serve as a prisoner's dilemma) by promising to burn the portion of the prize money that is defected away.
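knb's accounting can be sketched as a toy model (the split numbers below are assumed for illustration only; they are not the survey's actual rule): whatever the winner chooses, the pot is merely reassigned among the winner, the other survey takers, and the organizer, never shrunk, which is exactly what distinguishes this from a true Prisoner's Dilemma.

```python
# Toy model of the claim that the prize pot is conserved: defecting only
# reassigns shares, it never changes the total. All numbers are assumed.
POT = 60

def shares(winner_defected):
    """Return (winner, other survey takers, organizer) shares under one
    assumed rule: a defecting winner takes more, at the others' expense."""
    winner = 40 if winner_defected else 20
    others = 0 if winner_defected else 30
    organizer = POT - winner - others
    return winner, others, organizer

for defected in (True, False):
    assert sum(shares(defected)) == POT  # total unchanged either way
```

Burning the defected-away portion, as suggested above, would break this conservation and restore the deadweight loss that makes a real PD a dilemma.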
Replies from: army1987, TheOtherDave, gattsuru, Wes_W↑ comment by A1987dM (army1987) · 2013-11-24T17:36:50.639Z · LW(p) · GW(p)
In future surveys, this could be corrected for (assuming this is intended to serve as a prisoner's dilemma) by promising to burn the portion of the prize money that is defected away.
Nah, that would just slightly increase the value of the US dollar, and be equivalent to reassigning more money to anyone who's holding any US dollars. You'd have to destroy intrinsically valuable resources instead.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2013-11-25T04:26:34.555Z · LW(p) · GW(p)
Nah, that would just slightly increase the value of the US dollar, and be equivalent to reassigning more money to anyone who's holding any US dollars. You'd have to destroy intrinsically valuable resources instead.
Dump some mass into a very large black hole. (Note: Please don't do this, I consider this to be one of the worst possible crimes against sentient life.)
↑ comment by TheOtherDave · 2013-11-23T21:02:43.838Z · LW(p) · GW(p)
It might just as well be intended to establish how many people are so entrained on "cooperating is virtuous in PD-like problems" that we choose cooperation-like choices without actually thinking through the consequences.
I wonder, now, what a typical audience would select if offered a standard PD problem with the labels swapped.
Replies from: knb↑ comment by gattsuru · 2013-11-23T21:12:42.991Z · LW(p) · GW(p)
I don't really see why anyone should prefer giving money to random other survey takers rather than themselves or Yvain.
I'm unable to take the money, but think that there is value toward incentivizing both high survey returns and people taking the monetary prize question seriously, so cooperating was strongly more valuable.
Replies from: knb↑ comment by Wes_W · 2013-11-23T21:26:44.317Z · LW(p) · GW(p)
It seems to me that Yvain keeping money which he himself arranged to give away should not necessarily be considered to have positive utility. That leaves self-interest. But whether self-interest can/should motivate cooperation is the fundamental question of PD in the first place, isn't it?
Replies from: knb
comment by jbash · 2013-11-22T14:36:17.296Z · LW(p) · GW(p)
Not taken, and will not be taken as long as it demands that I log in with Google (or Facebook, or anything else other than maybe a local Less Wrong account).
Replies from: arundelo↑ comment by arundelo · 2013-11-22T15:10:13.486Z · LW(p) · GW(p)
I didn't have this problem, maybe because I was already logged into Google; probably docs.google.com is doing some automatic behavior because it sees you have an expired cookie. You should be able to avoid this with an incognito window or whatever your browser's equivalent is.
Replies from: jbash
comment by William_Quixote · 2013-12-29T15:06:34.031Z · LW(p) · GW(p)
Survey taken!
Answered all questions.
-Survey caused me to realize that my mental model of Europe as EU does not line up with the world's model of Europe as geographical area. Good thing to learn.
-I think my answers to the CFAR questions are contradictory
-Would have liked more granularity in the vegetarianism question
Thanks very much to Yvain for running this
comment by solipsist · 2013-12-28T19:22:26.166Z · LW(p) · GW(p)
I've taken the survey.
The Prisoner's Dilemma question didn't do it for me. What will happen to the remainder of the $60? Will it be burned? Also I make enough money that winning $30 probably isn't worth the hassle and security risk to transfer it into my bank account.
EDIT Wow, that came out negative! Thank you Yvain for organizing this survey! Depriving you of money does not seem like an altruistic option.
comment by Fermatastheorem · 2013-12-23T05:56:19.247Z · LW(p) · GW(p)
Survey taken! Can't wait to see the results.
comment by Darklight · 2013-12-17T00:50:09.655Z · LW(p) · GW(p)
Took the survey. This survey made me remember that I've never actually done a proper IQ test. I should consider rectifying that situation. Other than that, I was surprised that your extensive "Complex Affiliation" political section did not include "Liberal". Modern Liberalism is a distinct political tradition that I would argue ought to be on such an extensive list, especially given that it's on the much shorter earlier list of political affiliations. :V
Also, though I don't yet "self-identify" with Effective Altruism, I do sympathize with their goals and ideals, and am mulling over the idea of joining the movement.
Aside from that, good work coming up with some quite clever questions. No doubt the results should make interesting fodder for thought.
Though one question: what happens if the first word of the two-word passphrase is the same as someone else's? Is it fair to assume that anyone who was unable to come up with an original enough first word is effectively disqualified from winning the prize, or is the prize going to be shared among those who chose the same first word?
Edit: I just realized that technically what could happen with the two word passphrase is that even if two or more people had the same first word, chances are they would have different second words and so even if multiple people thought they had won after the first word was revealed, only the one with the correct second word would actually win. Which would suck for the others with that first word who didn't win, as they'd be given such hope and then have those hopes dashed. XD
comment by Jay_Schweikert · 2013-12-07T21:57:50.735Z · LW(p) · GW(p)
Answered every question to which I had an answer. I haven't spent much time on Less Wrong recently, but it's really pretty remarkable how just answering Less Wrong surveys causes me to think more seriously than just about anything else I come across in any given week.
comment by Akiyama · 2013-12-29T18:35:16.271Z · LW(p) · GW(p)
I took the survey, then tried to sign up for Less Wrong.
Only to discover that I already had an account. So my answers to the questions of whether I had an account and how many karma points I had were wrong!
I just wanted to say - firstly, I'm surprised that "Liberal" is given as an option on the short political affiliation question, but not on the longer one! I wrote it in.
Secondly, I STRONGLY object to the UK Labour Party being given an example of a liberal party! I imagine that Americans would have the same reaction to the Republican party being given as an example of a liberal party (they freed the slaves, didn't they?). To my mind, the Labour Party of the 21st Century is both illiberal and right-wing.
comment by A1987dM (army1987) · 2013-12-01T17:13:31.294Z · LW(p) · GW(p)
In the Wikipedia article about Europe, the figure for the population in the infobox is slightly different from that in the main text of the lead section.
comment by A1987dM (army1987) · 2013-11-24T17:09:22.006Z · LW(p) · GW(p)
I accidentally submitted the survey before finishing it, so I'm taking it again. So, when you see two very similar responses except the second contains a few more answers than the first, please ignore the first.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2013-11-25T22:07:54.266Z · LW(p) · GW(p)
I'm glad I'm not the only one.
comment by JackV · 2013-11-24T10:03:38.634Z · LW(p) · GW(p)
Do we make suggestions here or wait for another post?
A few friends are Anglo-Catholic (ie. members of the Church of England or equivalent, not Roman Catholic, but catholic, I believe similar to Episcopalian in USA?), and not sure if they counted as "Catholic", "Protestant" or "Other". It might be good to tweak the names slightly to cover that case. (I can ask for preferred options if it helps.)
https://fbcdn-sphotos-f-a.akamaihd.net/hphotos-ak-prn2/1453250_492554064192905_1417321927_n.jpg http://en.wikipedia.org/wiki/Anglo-Catholicism
comment by csvoss (Terdragon) · 2013-12-01T06:10:03.835Z · LW(p) · GW(p)
Is there anywhere I can read an explanation of (or anyone who can explain) the distinction between "Atheist but spiritual" and "Atheist and not spiritual"?
Replies from: blashimov↑ comment by blashimov · 2013-12-01T16:35:52.601Z · LW(p) · GW(p)
My understanding: you might believe in some continued life after death, something about human souls, or supernatural things generally, but not believe in a personified, interacting deity who gave humans orders like "worship me, do this/that", nor be a deist who thinks there is such a being but that it doesn't give orders for some reason.
Replies from: Terdragon↑ comment by csvoss (Terdragon) · 2013-12-01T21:27:45.824Z · LW(p) · GW(p)
Okay. Good thing I submitted "Atheist and not spiritual", then!
I guess that makes sense. When I hear "Atheist but spiritual" my first response tends to be "Sure, I would appreciate songs and rituals about the wonders of science and the awe-inspiring nature of the universe. That's spirituality, right?" -- and my first response tends not to be "Oh, right, I guess there technically could be people who believe in supernatural stuff that's not gods." Perhaps because I tend to forget such beliefs exist...
comment by b1shop · 2013-11-25T06:39:25.700Z · LW(p) · GW(p)
A comment on the prize for those who've already taken it:
Qrsrpg frrzf yvxr pyrneyl gur evtug zbir. Gurer'yy or uhaqerqf bs erfcbaqragf fb V'yy unir n znetvany vzcnpg ba gur fvmr bs gur cevmr. Ubjrire, V dhnqehcyr zl punapr bs jvaavat ol pubbfvat qrsrpg. Gung orvat fnvq, V ubcr lbh cvpxrq pbbcrengr.
Replies from: None, Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2013-11-25T08:24:58.153Z · LW(p) · GW(p)
Gung vf cerpvfryl gur ernfba jul guvf xvaq bs ernfbavat qbrfa'g jbex. Unira'g lbh ernq hc ba gur cevfbaref qvyrzzn? Be qb lbh vzcyl gung YJref jvyy zber yvxryl pbbcrengr guna abg. Gung znl or pbeerpg - ohg bayl vs lbh pubbfr cebonovyvfgvpnyyl. Gur engvbany nccebnpu urer vf gb ebyy n qvr naq qrsrpg jvgu $c = 25%-rcfvyba$ (rcfvyba orvat n ohssre sbe gubfr abg fzneg rabhtu). Gung jvyy znkvzvmr birenyy cre crefba.
Replies from: b1shop↑ comment by b1shop · 2013-12-08T17:41:36.021Z · LW(p) · GW(p)
Thanks for picking cooperate.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2013-12-08T20:50:09.187Z · LW(p) · GW(p)
Huh? Nothing to thank. I did roll the die and would have defected on a 1. That would have been the most sensible move for all involved.
Or did you thank me for applying the procedure? In that case: Appreciated.
comment by itaibn0 · 2013-11-23T00:27:25.609Z · LW(p) · GW(p)
Didn't take the survey. There were enough questions I was vaguely uncomfortable answering that I chose not to. I may change my mind later; however, I have already read the comments, including some which give information on the probability calibration question.
comment by JTHM · 2014-01-05T05:02:22.818Z · LW(p) · GW(p)
Was I the only person who shamelessly defected only because the defect/cooperate choice isn't really a prisoner's dilemma at all? Obviously, if enough of us defect that the payout is diminished, the winner receives less, but whoever would be paying for the prize would have that much less money missing from his pocket. I would not have defected if I expected my defection to result in a net loss of resources. For the 2014 survey, how about we try this again, with the modification that if enough people defect for the payout to be reduced, a good of equal market value to the reduction in payout shall be purchased and destroyed? (You can't just burn the money, because that's not actual destruction of value, just redistribution of value to everyone else who owns units of that same currency.)
Replies from: knb, ArisKatsaris↑ comment by ArisKatsaris · 2014-01-06T05:39:35.209Z · LW(p) · GW(p)
For the 2014 survey, how about we try this again, with the modification that if enough people defect for the payout to be reduced, a good of equal market value to the reduction in payout shall be purchased and destroyed?
Or perhaps for every person that defects Yvain forces himself to listen to X minutes of some music he despises, or watch some show he hates?
I really don't want Yvain to suffer penalties because of the defection of jerks, but that's what true 'defection' must mean: someone gets hurt. All in all, I can't in good conscience advise repeating the test next year.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2014-01-19T02:09:21.572Z · LW(p) · GW(p)
That's still not "true defection" though, because we all like Yvain and don't want to annoy him.
comment by ChrisHallquist · 2013-11-23T05:51:54.717Z · LW(p) · GW(p)
Er: Rhebcr: V fnvq sbhe uhaqerq svsgl zvyyvba, juvpu V inhtryl erzrzorerq urnevat, ohg jnf irel pbasvqrag. Gheaf bhg gur cbchyngvba bs Rhebcr vf arneyl frira uhaqerq sbegl zvyyvba - SNVY! Rkprcg nppbeqvat gb Jvxvcrqvn, gur cbchyngvba bs gur Rhebcrna Havba vf n yvggyr bire 500 zvyyvba, juvpu vf nyzbfg vafvqr zl 10% vagreiny, naq cebonol jung V jnf guvaxvat bs, nyybjvat sbe erzrzorevat n fbzrjung bhgqngrq ahzore.
Replies from: ciphergoth, ArisKatsaris, army1987↑ comment by Paul Crowley (ciphergoth) · 2013-11-23T09:16:43.814Z · LW(p) · GW(p)
Please spell numbers and rot13 this - thanks!
↑ comment by ArisKatsaris · 2013-11-23T12:15:56.031Z · LW(p) · GW(p)
Downvoting for revealing answer when others might not have taken survey yet, will turn to upvote if you rot13 it.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-11-23T17:01:00.496Z · LW(p) · GW(p)
Rot13'd, but I'm confused - I thought it would be taken for granted that this thread would contain lots of discussion of the survey, so you shouldn't read the thread before taking the survey.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2013-11-30T02:38:49.428Z · LW(p) · GW(p)
(Many people read the recent comments stream.)
↑ comment by A1987dM (army1987) · 2013-11-24T17:44:23.933Z · LW(p) · GW(p)
After this I stopped forgetting that the rest of Europe exists.
Replies from: None
comment by Salutator · 2013-11-23T10:20:22.628Z · LW(p) · GW(p)
I threw a D30, came up with 20 and cooperated.
Point being that cooperation in a prisoner's dilemma sense means choosing the strategy that would maximize my expected payout if everyone chose it, and in this game that is not equivalent to cooperating with probability 1. If it was supposed to measure strategies, the question would have been better if it had asked us for a cooperation probability and then Yvain would have drawn the random numbers for us.
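The die-roll procedure described in these comments is a mixed strategy: commit to defecting with some fixed probability and let a random draw decide. A minimal sketch of that idea, assuming a helper function of my own invention; the defection probabilities are illustrative free parameters, not values the survey prescribed:

```python
import random

def mixed_strategy_choice(p_defect, rng=None):
    """Return 'defect' with probability p_defect, else 'cooperate'."""
    rng = rng or random.Random()
    return "defect" if rng.random() < p_defect else "cooperate"

def d30_choice(rng=None):
    """Die-roll version: defect only when a fair D30 shows a 1 (p = 1/30)."""
    rng = rng or random.Random()
    roll = rng.randint(1, 30)  # uniform on 1..30
    return "defect" if roll == 1 else "cooperate"
```

As the grandparent comment asks, a survey that wanted to measure the strategy itself (rather than one sampled action) would collect `p_defect` directly and do the sampling on the organizer's side.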
Replies from: Salutator↑ comment by Salutator · 2013-11-23T10:37:12.205Z · LW(p) · GW(p)
This was based on a math error; it actually is a prisoner's dilemma.
Replies from: rocurley↑ comment by rocurley · 2013-11-23T20:49:49.787Z · LW(p) · GW(p)
I made a similar mistake, and randomly generated defect.
Welp.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2013-11-25T23:46:55.867Z · LW(p) · GW(p)
I think accidentally choosing defect is probably the best possible outcome in PD; you get all the advantages of defecting, whilst your decision process still acausally causes other people to cooperate.
Replies from: Calvin↑ comment by Calvin · 2013-11-30T18:56:28.328Z · LW(p) · GW(p)
It can get even better, assuming you put your moral reasoning aside.
What you could do, is to deliberately defect and then publicly announce to everyone that it was a result of random chance.
If you are concerned about lying to others, then I concur that accidentally choosing to defect is the best of both worlds.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-11-30T19:25:10.347Z · LW(p) · GW(p)
What you could do, is to deliberately defect and then publicly announce to everyone that it was a result of random chance.
In the literal PD scenario, I imagine the subsequent conversation would go:
"You accidentally informed on us? Okay, we'll accidentally shoot your legs off."
comment by Frazer · 2014-05-02T03:26:24.698Z · LW(p) · GW(p)
Is there a way to be notified when the 2014 survey comes out?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-05-02T12:09:24.561Z · LW(p) · GW(p)
The LW main blog isn't high volume, so simply subscribing to its RSS feed should do the trick.
comment by Frood · 2014-01-01T08:45:17.161Z · LW(p) · GW(p)
Just finished it. I missed the deadline, but it seems to have let me submit. Thanks for a good time!
I defected because I decided that I'm one of the last ones to complete the survey, so RIGHT NOW I have the choice between 4 tickets and 1 ticket in a lottery for approximately the same amount of money. My gut now tells me this was bad decision making, so...Contemplation Time!
Replies from: None
comment by sbierwagen · 2013-11-22T23:18:31.538Z · LW(p) · GW(p)
I took the last two surveys, but I'm not taking this one, since I anticipate that all my answers will be the same.
Replies from: VAuroch↑ comment by VAuroch · 2013-11-23T01:14:59.338Z · LW(p) · GW(p)
So you'll exclude yourself from the sample, artificially biasing the census against you?
Replies from: sbierwagen↑ comment by sbierwagen · 2013-11-27T18:14:30.214Z · LW(p) · GW(p)
Sure. I see no downside to that which outweighs the time cost of taking the survey yet again.
comment by Taurus_Londono · 2013-11-22T18:49:20.043Z · LW(p) · GW(p)
Feedback, FWIW:
- Can you not infer "relationship status" from "number of current partners"?
- Profession: No "Chemistry"?? ... three choices for "computers," you nicely distinguish finance/economics as separate from "business," and the same for the "statistics" and "mathematics" people, but the central science has to fall under "other hard science"?
↑ comment by Scott Alexander (Yvain) · 2013-11-22T21:57:00.299Z · LW(p) · GW(p)
Thank you for your unpleasantly phrased and confrontational feedback.
The software I use to process this information has a lot of trouble handling "check multiple boxes". Adding "biracial" would be strictly inferior to just asking people which of their two races they identify more with, since biracial gives no race information.
You cannot infer relationship status from number of partners, because status differentiates "married" from "in a relationship", which the partner question cannot do.
So far each one of the three computer options has been selected by significantly more people than the "other hard sciences" group. There are 174 people selecting "practical computing", compared to 10 people in all of "hard science". I base these categories not based on what people think is the "central science" but on what will best distinguish between large categories of people.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2013-11-23T09:21:40.787Z · LW(p) · GW(p)
No good deed goes unpunished. Yvain, thank you again for all the hard work you put into assembling and analysing this survey every year, it's a boon for all of us.
↑ comment by polymathwannabe · 2013-11-22T18:52:55.836Z · LW(p) · GW(p)
Seconded. I work in book publishing. There was no option for that.