Excuse me, would you like to take a survey?
post by Scott Alexander (Yvain) · 2009-04-26T21:23:53.642Z · LW · GW · Legacy · 132 comments
Related to: Practical Rationality Questionnaire
Here among this community of prior-using, Aumann-believing rationalists, it is a bit strange that we don't have any good measure of what the community thinks about certain things.
I no longer place much credence in raw majoritarianism: the majority is too uneducated, too susceptible to the Dark Arts, and too vulnerable to cognitive biases. If I had to choose the people whose mean opinion I trusted most, it would be - all of you.
So, at the risk of people getting surveyed-out, I'd like to run a survey on the stuff Anna Salamon didn't. Part on demographics, part on opinions, and part on the interactions between the two.
I've already put up an incomplete rough draft of the survey I'd like to use, but I'll post it here again. Remember, this is an incomplete rough draft survey. DO NOT FILL IT OUT YET. YOUR SURVEY WILL NOT BE COUNTED.
Incomplete rough draft of survey
Right now what I want from people is more interesting questions that you want asked. Any question that you want to know the Less Wrong consensus on. Please post each question as a separate comment, and upvote any question that you're also interested in. I'll include as many of the top-scoring questions as I think people can be bothered to answer.
No need to include questions already on the survey, although if you really hate them you can suggest their un-inclusion or re-phrasing.
Also important: how concerned are you about privacy? I was thinking about releasing the raw data later in case other people wanted to perform their own analyses, but it might be possible to identify specific people if you knew enough about them. Are there any people who would be comfortable giving such data if only one person were to see it, but uncomfortable if the data were publicly accessible?
Comments sorted by top scores.
comment by MBlume · 2009-04-27T03:00:02.307Z · LW(p) · GW(p)
What is your opinion on sentience? What features must a given computational process have to be considered morally significant?
What is the moral significance of:
- healthy adult humans
- infants
- fetuses
- higher primates
- mammals
- humans declared brain-dead
- etc.?
comment by Nick_Tarleton · 2009-04-26T23:36:54.819Z · LW(p) · GW(p)
Some little things:
- "Professional field" should be multiple-choice.
- What do you mean by "spiritual" under "religious views" – believe in the supernatural? take mysticism seriously in a way compatible with naturalism?
- On p(Aliens), does "the Universe" mean past light cone, present surface of observable universe, or entire (potentially infinite) continuum? How about other Everett branches?
- A definition of "supernatural" before the p(God) question would be nice.
- "Three Worlds Ending" might benefit from a "clear preference for specific other outcome" option.
- Similarly, at least some of the PD and other game theory/superrationality-related questions could have something like "different clear preferences depending on unspecified details of the situation".
↑ comment by robzahra · 2009-04-27T03:34:14.088Z · LW(p) · GW(p)
Agreed with Tarleton: the prisoner's dilemma questions do look under-specified. E.g., Eliezer has said something like "cooperate if I think my opponent one-boxes on Newcomb-like problems". Maybe you could have a write-in box here and figure out how to map the votes to simple categories later, depending on the variety of survey responses you get.
↑ comment by cousin_it · 2009-04-27T10:30:53.618Z · LW(p) · GW(p)
Going slightly offtopic: Eliezer's answer has irked me for a long time, and only now I got a handle on why. To reliably win by determining whether the opponent one-boxes, we need to be Omega-superior relative to them, almost by the definition of Newcomb's. But such powers would allow us to just use the trivial solution: "cooperate if I think my opponent will cooperate".
↑ comment by Vladimir_Nesov · 2009-04-27T12:36:38.877Z · LW(p) · GW(p)
If you know that no matter what you do, the other one will cooperate, then you should defect.
↑ comment by cousin_it · 2009-04-27T12:56:28.769Z · LW(p) · GW(p)
Eliezer wants to cooperate against a cooperating opponent, as depicted in the beginning of "Three Worlds Collide". What I "should" do is quite another matter.
↑ comment by Vladimir_Nesov · 2009-04-27T13:33:30.851Z · LW(p) · GW(p)
You don't cooperate against a paperclip maximizer if you know it'll cooperate even if you defect. If you cooperate in this situation, that's the murder of 1 billion people. I'm quite confident that if you disagree with this statement, you misunderstand the problem.
↑ comment by cousin_it · 2009-04-27T14:29:19.884Z · LW(p) · GW(p)
Oh. Time to change my opinion - now I finally see what you and Eliezer mean by "general theory". It reduces to something like this:
My source code contains a rule M that overrides everything else and is detectable by other agents. It says: I will precommit to cooperating (playing the Pareto-optimal outcome) if I can verify that the opponent's source code contains M. Like a self-printing program (quine), no infinite recursion in sight. And, funnily enough, this statement can persuade other agents to modify their source code to include M - there's no downside. Funky!
But I still have no idea what Newcomb's problem has to do with that. Maybe I should give myself time to think some more...
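A minimal sketch of the M idea, assuming a toy setting where agents exchange literal source text (the marker name and agent function below are illustrative inventions, not anyone's actual proposal):

```python
import inspect

M_MARKER = "RULE-M"  # illustrative signature standing in for "contains rule M"

def m_agent(opponent_source: str) -> str:
    """Cooperate iff the opponent's source detectably contains rule M."""
    # RULE-M: this comment carries the marker, so the agent's own source
    # matches the very pattern it searches for, quine-style. Verification
    # is a string check, not a simulation, so there is no recursion.
    return "C" if M_MARKER in opponent_source else "D"

my_source = inspect.getsource(m_agent)
print(m_agent(my_source))               # two copies of M meet: "C"
print(m_agent("maximize paperclips"))   # no M in sight: "D"
```

In practice the hard part, as discussed below, is signaling that the marker honestly describes your behavior rather than merely appearing in your source.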
↑ comment by Psy-Kosh · 2009-04-27T15:27:24.093Z · LW(p) · GW(p)
Or, more generally: "If, for whatever reason, there's sufficiently strong correlation between my cooperation and my opponent's cooperation, then cooperation is the correct answer"
↑ comment by Vladimir_Nesov · 2009-04-27T16:00:57.601Z · LW(p) · GW(p)
You need causation, not correlation. Correlation considers the whole state space, whereas you need to look at correlation within each conditional area of state space, given one action (your cooperation), or another (your defection), which in this case corresponds to causation. If you only look for unconditional correlation, you are inadvertently asking the same circular question: "what will I do?". When you act, you determine which parts of the state space are to be annihilated, become not just counterfactual, but impossible, and this is what you can (ever) do. Correlation depends on that, since it's computed over what remains. So you can't search for that information, and use it as a basis for your decisions.
↑ comment by Psy-Kosh · 2009-04-27T17:40:45.428Z · LW(p) · GW(p)
If you know the following fact: "The other guy will cooperate iff I cooperate", even if you know nothing about the nature of the cause of the correlation, that's still a good enough reason to cooperate.
You ask yourself "If I defect, what will the outcome be? If I cooperate, what will the outcome be?" Taking into account the correlation, one then determines which they prefer. And there you go.
For example, imagine that, say, two AIs that were created with the same underlying architecture (though possibly with different preferences) meet up. They also know the fact of their similarity. Then they may reason something like: "Hrmm... The same underlying algorithms running in me are running in my opponent. So presumably they are reasoning the exact same way as I am, even at this moment. So whichever way I happen to decide, cooperate or defect, they'll probably decide the same way. So the only reasonably possible outcomes would seem to be 'both of us cooperate' or 'both of us defect', therefore I choose the former, since it has a better outcome for me. Therefore I cooperate."
In other words, what I chose is also lawful. That is, physics underlies my brain. My decision is not just a thing that causes future things, but a thing that was caused by past things. If I know that the same past things influenced my opponent's decision in the same way, then I may be able to infer "whatever sort of reasoning I'm doing, they're also doing, so..."
Or did I completely fail to understand your objection?
↑ comment by Vladimir_Nesov · 2009-04-27T14:59:52.740Z · LW(p) · GW(p)
My source code contains a rule M that overrides everything else and is detectable by other agents. It says: I will precommit to cooperating (playing the Pareto-optimal outcome) if I can verify that the opponent's source code contains M. Like a self-printing program (quine), no infinite recursion in sight. And, funnily enough, this statement can persuade other agents to modify their source code to include M - there's no downside. Funky!
Something like this. Referring to an earlier discussion, "Cooperator" is an agent that implements M. Practical difficulties are all in signaling that you implement M, while actually implementing it may be easy (but pointless if you can't signal it and can't detect M in other agents).
The relation to Newcomb's problem is that there is no need to implant a special-purpose algorithm like the M you described above; you can guide all of your actions by a single decision theory that implements M as a special case (generalizes M, if you like), and also solves Newcomb's problem.
One inaccuracy here is that there are many Pareto optimal global strategies (in PD there are many if you allow mixed strategies), with different payoffs to different agents, and so they must first agree on which they'll jointly implement. This creates a problem analogous to the Ultimatum game, or the problem of fairness.
↑ comment by cousin_it · 2009-04-27T15:26:14.897Z · LW(p) · GW(p)
you can guide all of your actions by a single decision theory that implements M as a special case (generalizes M if you like), and also solves Newcomb's problem
Didn't think about that. Now I'm curious: how does this decision theory work? And does it give incentive to other agents to adopt it wholesale, like M does?
↑ comment by Vladimir_Nesov · 2009-04-27T16:35:43.104Z · LW(p) · GW(p)
That's the idea. I more or less know how my version of this decision theory works, and I'm likely to write it up in the next few weeks. I wrote a little bit about it here (I changed my mind about causation; it's easy enough to incorporate here, but I'll have to read up on Pearl first). There is also Eliezer's version, which started the discussion, and which was never explicitly described, even on a surface level.
Overall, there seem to be no magic tricks, only the requirement for a philosophically sane problem statement, with inevitable and long-known math following thereafter.
↑ comment by cousin_it · 2009-04-27T17:05:03.342Z · LW(p) · GW(p)
OK, I seem to vaguely understand how your decision theory works, but I don't see how it implements M as a special case. You don't mention source code inspection anywhere.
↑ comment by Vladimir_Nesov · 2009-04-27T17:26:25.091Z · LW(p) · GW(p)
What matters is the decision (and its dependence on other facts). Source code inspection is only one possible procedure for obtaining information about the decision. The decision theory doesn't need to refer to a specific means of getting that information. I talked about a related issue here.
↑ comment by cousin_it · 2009-04-27T18:46:36.663Z · LW(p) · GW(p)
Forgive me if I'm being dumb, but I still don't understand. If two similar agents (not identical to avoid the clones argument) play the PD using your decision theory, how do they arrive at C,C? Even if agents' algorithms are common knowledge, a naive attempt to simulate the other guy just falls into bottomless recursion as usual. Is the answer somehow encoded in "the most general precommitment"? What do the agents precommit to? How does Pareto optimality enter the scene?
↑ comment by robzahra · 2009-04-27T12:26:55.224Z · LW(p) · GW(p)
Agreed that in general one will have some uncertainty over whether one's opponent is the type of algorithm who one boxes / cooperates / whom one wants to cooperate with, etc. It does look like you need to plug these uncertainties into your expected utility calculation, such that you decide to cooperate or defect based on your degree of uncertainty about your opponent.
However, in some cases at least, you don't need to be Omega-superior to predict whether another agent one-boxes. For example, if you're facing a clone of yourself, you can just ask yourself what you would do, and you know the answer. There may be some class of algorithms non-identical to you but still close enough that this self-reflection is increased evidence that your opponent will cooperate if you do.
↑ comment by Vladimir_Nesov · 2009-04-27T12:40:51.680Z · LW(p) · GW(p)
No, you can't ask yourself what you'll do. It's like a calculator that answers the question "what is 2+2?" by asking "what will I answer to the question 'what is 2+2?'?", in which case the answer 57 would be perfectly reasonable.
If you are cooperating with your copy, you only know that the copy will do the same action, which is a restriction on your joint state space. Given this restriction, the expected utility calculation for your actions will return a result different from what other restrictions may force. In this case, you are left only with 2 options: (C,C) and (D,D), of which (C,C) is better.
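A toy calculation of that restriction, with standard illustrative PD payoffs (the numbers here are mine, chosen only to make the comparison concrete):

```python
# (my move, other's move) -> my payoff; standard PD ordering.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# In the unrestricted game, D strictly dominates C, cell by cell.
assert all(PAYOFF[("D", o)] > PAYOFF[("C", o)] for o in "CD")

# Against an exact copy, only the diagonal of the joint state space
# is reachable, and the comparison runs over those two cells alone.
reachable = [("C", "C"), ("D", "D")]
print(max(reachable, key=lambda cell: PAYOFF[cell]))  # ('C', 'C')
```

The dominance argument and the diagonal argument are both valid; they just quantify over different state spaces, which is exactly the restriction described above.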
↑ comment by robzahra · 2009-04-27T13:04:00.286Z · LW(p) · GW(p)
You're right. Speaking more precisely, by "ask yourself what you would do", I mean "engage in the act of reflecting, wherein you realize the symmetry between you and your opponent, which reduces the decision problem to (C,C) and (D,D), so that you choose (C,C)", as you've outlined above. Note, though, that even when the reduction is not complete (for example, because you're facing a similar but inexact clone), there can still be added incentive to cooperate...
comment by MichaelHoward · 2009-04-26T23:05:53.309Z · LW(p) · GW(p)
interesting questions that you want asked... post each as a separate comment
Which existential risk do you judge most likely to occur this century?
- Nuclear holocaust
- Badly programmed superintelligence
- Genetically engineered biological agent
- Accidental misuse of nanotechnology (“gray goo”)
- Environmental catastrophe (e.g. runaway global warming)
- etc
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-27T04:36:18.905Z · LW(p) · GW(p)
Why not ask for probabilities for each, and confidence intervals as well?
↑ comment by hirvinen · 2009-04-27T03:57:01.302Z · LW(p) · GW(p)
Related, but different: Which of these world-saving causes should receive most attention? (Maybe place these in order.)
- Avoiding nuclear war
- Create a Friendly AI, including prevention of creating AIs you don't think are Friendly
- Create AI, no need to be Friendly.
- Prevent creation of AIs until humans are a lot smarter
- Improve human cognition (should this include uploading capabilities?)
- Defense against biological agents
- Delay nanotechnology development until we have sufficiently powerful AIs to set up defenses against gray goo
- Creation and deployment of anti-gray-goo nanotechnology
- Avoiding environmental hazards
- Space colonization
- Fighting diseases
- Fighting aging
- something else?
↑ comment by Jack · 2009-04-27T05:17:48.349Z · LW(p) · GW(p)
"Most attention" is ambiguous, particularly when some of the options are phrased as proactive and others reactive/preventative. Do you man funding? Public awareness? Plus there some issues might be incredibly important but require relatively little "attention" to solve while others might be less important but take a lot more resources to solve. I wouldn't know how to answer this question accept to say I don't think any effort should be spent on creating and deploying anti- grey goo nanotech.
↑ comment by CannibalSmith · 2009-04-27T14:26:06.837Z · LW(p) · GW(p)
You must also ask country of residence for this to be valid.
↑ comment by hirvinen · 2009-04-27T15:10:37.271Z · LW(p) · GW(p)
I think we mean here by existential risks something along the lines of, in Bostrom's words, "… either annihilate Earth-originating intelligent life or drastically and permanently curtail its potential", making countries irrelevant.
↑ comment by CannibalSmith · 2009-04-27T19:21:15.154Z · LW(p) · GW(p)
Oops, I misread "century" as "country".
comment by JulianMorrison · 2009-04-27T00:33:55.043Z · LW(p) · GW(p)
I wonder if you'd get better probability values if you used AJAX slider controls for a continuous value between 0 and 1. Less chance of anchoring percentages on multiples of 10 and 5.
↑ comment by Zvi · 2009-04-28T01:18:36.933Z · LW(p) · GW(p)
In a survey, is an increase in rounding error in the estimates a problem? As long as there's no bias in how they get rounded, we should be fine. If there is such a bias, I'm curious what it is and what causes it.
↑ comment by JulianMorrison · 2009-04-28T17:43:51.850Z · LW(p) · GW(p)
I suspect it would have a strong bias towards obvious fractions and obvious multiples. That isn't a directional bias, but it's an anti-precise bias.
↑ comment by gjm · 2009-04-27T11:45:07.281Z · LW(p) · GW(p)
That would make it effectively impossible to distinguish between 1 and 0.001, or 99 and 99.999. To get around that we'd need to work with something like log(odds ratio), but then there isn't any natural choice of endpoints, people's intuition for what goes where will generally be poor, etc.
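To make the endpoint problem concrete, here is a rough sketch assuming a hypothetical 100-pixel-wide linear slider (the resolution is invented for illustration):

```python
import math

def logit(p: float) -> float:
    """log-odds: maps (0, 1) onto the whole real line."""
    return math.log(p / (1 - p))

# Positions on a hypothetical 100-pixel linear slider for p in [0, 1]:
for p in (0.00001, 0.01, 0.99, 0.99999):
    pixel = round(p * 100)
    print(f"p = {p:<8} pixel {pixel:>3}   logit {logit(p):+7.2f}")
# 0.001% and 1% land on adjacent pixels (0 and 1), as do 99% and 99.999%
# (99 and 100), yet each pair differs by about 7 units of log-odds --
# precisely the distinctions a log-odds input would keep visible.
```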
comment by gjm · 2009-04-26T23:38:50.617Z · LW(p) · GW(p)
I'd like to see a question on the best level of aid to the Third World (say, an estimated optimum as a fraction of GDP in affluent Western countries). The current level is nonzero but rather low (especially if you exclude things like military aid to allies); some people say it's scandalously low, others that such aid is actively harmful and the level should therefore be zero or very close. (I assume plenty of people also say that the level should be zero because someone in the US has no obligations to someone in sub-Saharan Africa, but that opinion isn't expressed so often in public.)
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-27T06:31:06.437Z · LW(p) · GW(p)
It seems a bit transparent to me that there's no such thing as a "best level of aid to the Third World". That's asking "How much money do you have to throw at the problem to stop feeling guilty?" There are only marginal efficiencies which determine how much resources you would want to flow in that direction. In the case of Africa, African economists are pleading with us to stop the aid because it's destroying their continent. I don't know about the rest of the Third World. In any case it has to go project by project.
↑ comment by gjm · 2009-04-27T09:58:46.719Z · LW(p) · GW(p)
1. A question posed simply in terms of "the best level" would be measuring some sort of tangled-up combination of respondents' values and their opinions about facts. That might be a bad thing (though I note that the question about political affiliation, at least, has the same feature). Instead, one could ask something like "what level of aid do you think would maximize Africa's GDP after 20 years?" or "what level of aid do you think would maximize average expected QALYs at birth over the whole human population".
2. When considering an individual's charitable activity, of course we should think in terms of marginal efficiencies. That's not so clear when considering the question of the total amount of aid that might go from the affluent West to the Third World.
3. You mean (unless you have relevant information I don't, which is eminently possible) that some African economists are saying that the aid is harmful. It would be much more interesting to know typical African economists' opinions. If nothing else, there is obvious sampling bias here: if two African economists approach an American publisher, one proposing to write a book saying "Aid is actively harmful; stop it now" and one proposing to write one saying "Aid is useful; please do a bit more of it", which one is going to get the contract? It seems to me that multiple factors, scarcely correlated with the actual truth of the matter, make it far more likely to be the first one.
4. Yes, of course, actual decisions need to be made project by project. That doesn't mean that one can't hold an opinion about the approximate gross amount of aid there should be. (Such as, for instance, "none", which is an opinion you don't seem to object to even though it's the ultimate in not-project-by-project answers since it necessarily returns the same answer for every project.)
↑ comment by Scott Alexander (Yvain) · 2009-04-27T12:12:59.756Z · LW(p) · GW(p)
How would everyone feel about a question phrased something like:
"True or false: the marginal effect of extra money being given to aid in Africa through a charity like UNICEF is generally positive."
↑ comment by Jack · 2009-04-27T07:24:26.273Z · LW(p) · GW(p)
AIUI, it matters immensely what type of aid you're talking about, the processes by which it is distributed, anti-corruption mechanisms, etc. Giving away food grown in Western countries is disastrous; microcredit, vaccinations, educating women, etc., not so much. In any case, I took the question to be trying to ascertain community positions on distributive-justice issues and Western obligations to the developing world rather than distributive efficiency. So if there is really a widespread sense that a question about aid wouldn't reflect those sorts of positions, maybe a more theoretical question would be better.
↑ comment by AlanCrowe · 2009-04-27T16:49:26.354Z · LW(p) · GW(p)
Implicit in the question is the idea that aiding the Third World costs money. The World Bank claims that America's three-billion-dollar-a-year subsidy to its own cotton farmers has knock-on effects that make African cotton farmers three hundred million dollars a year worse off. But the American subsidy is a very wasteful internal transfer. If America wants to give African cotton farmers three hundred million dollars in aid, it need only scrap its subsidy, at a net benefit to America of perhaps two billion dollars.
Notice that I'm saying something different from "aid is actively harmful". I'm saying that we haven't plucked the low hanging fruit of passive win/win where we stop doing dumb shit and every nation is better off. After that comes active win/win such as building harbours and roads that increase the value of African products by making it cheaper to transport them to First World markets (win for Africans) while making African products more available to First World markets (win for First World). Mobile phones have reduced Third World poverty by letting farmers and fishermen direct their produce to the best markets, even while the mobile phone operators have profited by providing services. Fostering a zero-sum mentality with questions that assume that aiding the Third World costs the same amount of money as the benefit provided is misleading.
↑ comment by mattnewport · 2009-04-27T17:54:03.750Z · LW(p) · GW(p)
Indeed; in the 'most important world saving causes' list earlier, ending agricultural subsidies wasn't even mentioned, but that would probably be top of my list (battling with greatly relaxing immigration restrictions for the top spot).
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-27T04:44:38.354Z · LW(p) · GW(p)
Have people answer two ways: 1) assume essentially no change in the type and quality of projects funded; 2) assume some wise politicians make some realistic improvements in transparency and accountability. The equivalent of No Child Left Behind for foreign aid.
↑ comment by Zvi · 2009-04-28T01:05:14.799Z · LW(p) · GW(p)
Rather than argue over whether such a thing is possible, I think that assuming the aid would be spent on whatever would do the most good gives us the least convenient possible world, and the one that gives us the opinion we're after here. Together with the opinion on the realistic case, this tells us both what we think of the concept of aid if it works and what we think of it in practice.
↑ comment by mattnewport · 2009-04-27T04:47:07.370Z · LW(p) · GW(p)
2) assume some wise politicians make some realistic improvements in transparency, and accountability.
Why not just assume magical space fairies come down to earth and solve poverty? It's a more realistic expectation.
↑ comment by MrShaggy · 2009-04-27T11:49:20.808Z · LW(p) · GW(p)
"Why not just assume magical space fairies come down to earth and solve poverty? It's a more realistic expectation."
Right, like with the No Child Left Behind system, "still waiting for the magical space fairies to wisely make schools accountable since 2001."
comment by gjm · 2009-04-26T23:33:40.824Z · LW(p) · GW(p)
For the IQ question, you should clarify what level of precision you're after. Exact results or rounded ones? Only from professionally-conducted tests, or not? Include ones taken in childhood, or not? And, though you hardly need this pointed out to you, whatever form the question ends up taking you should expect substantial sampling bias in the (non-blank) answers.
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-27T19:45:43.914Z · LW(p) · GW(p)
In the U.S., more of us will probably know our SAT or GRE scores. We could also ask about G.P.A.
comment by MichaelHoward · 2009-04-26T23:17:00.307Z · LW(p) · GW(p)
interesting questions...
What's your take on the simulation argument? If you've no strong opinion, pick the most likely:
- The human species is very likely to go extinct before reaching a “posthuman” stage.
- Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).
- We are almost certainly living in a computer simulation.
- I deny that at least one of the above propositions is true.
- I'm unfamiliar with the simulation argument.
↑ comment by gjm · 2009-04-26T23:30:06.435Z · LW(p) · GW(p)
I would prefer a more general question about arguments of this form (doomsday argument, "thirdism" in the Sleeping Beauty problem, etc.). I know of very intelligent people who think such arguments are obviously sound and very intelligent people who think they are obviously unsound.
↑ comment by mattnewport · 2009-04-26T23:55:28.686Z · LW(p) · GW(p)
Count me in the camp that thinks they are obviously pointless.
↑ comment by MichaelHoward · 2009-04-27T12:43:52.046Z · LW(p) · GW(p)
One reason I'm interested in this is that people's choices vary but most people, myself included, believe their choice is clearly correct.
In fact, I'd ask for another question beneath it: If you picked one of the first 4 options, how confident are you that you're correct?
↑ comment by Zvi · 2009-04-28T00:58:20.762Z · LW(p) · GW(p)
The question as phrased assumes that the simulation argument is valid if you accept the priors; you can say you're not familiar with the simulation argument but you can't say that you think it is wrong. This seems like another sign that opinions on this are strong - as stated this question reminds me of a push poll.
↑ comment by MichaelHoward · 2009-04-28T12:32:10.448Z · LW(p) · GW(p)
you can say you're not familiar with the simulation argument but you can't say that you think it is wrong.
Yes you can, option 4, but if that isn't clear then it should be written as something like: 'I disagree with the simulation argument - none of the first 3 propositions are true.'
comment by Vladimir_Nesov · 2009-04-26T22:10:45.420Z · LW(p) · GW(p)
- "Time per day on OB/LW" is hard to measure, since I'm just being online, studying and working in parallel.
- "Political views" -- I'd like "not commited" as an option.
- "Santa" -- understanding your position is a process, so e.g. clear-cut "yes/no" doesn't map on my "I contemplated the notion, and was unsure before growing old/perceptive enough to realize it's a running joke".
- Before the questions on probabilities, it'd be nice to ask about the position on interpretation of probability.
- There should be questions about position on morality. I suggest: consequentialist (in Yudkowsky's interpretation/other), hedonist, others?
- Another question: do you hold an explicit utilitarian position (on a form of preference order) such as average or total utilitarianism.
- The survey itself should include a question on whether the respondent's data should be publicly available.
- I'd like to separate Sci-fi from fantasy.
- Add questions about habits of learning: do you learn technical stuff unrelated to work.
- What do people do in leisure time (watch TV/surf the Internet/solve crosswords/study math).
- Technically, "Cooperate" in a standard PD is an incorrect answer, since the fact that you know that the other one is a Cooperator is not built into the problem.
- Don't call that hideous scheme you set up "probability". The log score will punish you infinitely for this heresy.
- A question about procrastination
- A question about diet
- Knowledge of related math: logic, probability theory, Bayesian networks, inference algorithms, expected utility, microeconomics, causal/evidential decision theories.
- Knowledge of biases/ev-psych literature: read stuff on OB/LW, read a serious book, read (how many) papers.
↑ comment by Scott Alexander (Yvain) · 2009-04-26T22:27:24.133Z · LW(p) · GW(p)
I specifically excluded "not committed" as an option on the political views section, because a lot of rationalists have a tendency to go towards "not committed" to signal how they're not blind followers of a party, when really they have very well defined political views.
I, for example, absolutely refuse to register with a political party, answer "independent" to any questions about my political affiliation, talk a good talk about how both parties are equally crooks, and then proceed to vote for the Democrat nine times out of ten. I would kind of like to force people like me to put "Democrat" on there so that we get more data.
I will change this if enough people agree with Vladimir.
↑ comment by Vladimir_Nesov · 2009-04-26T22:32:36.239Z · LW(p) · GW(p)
The problem is that in Russia there is only one Party, and studying what the classical options are, or what the little parties are, doesn't seem to be worth my time given the current situation.
↑ comment by Jack · 2009-04-27T01:00:39.482Z · LW(p) · GW(p)
I agree that there should be no "not committed" option, but asking non-Americans to identify with an American political party seems kind of unhelpful. Do we think more traditional ideological terms are too vague to be helpful?
Maybe: Conservative, Classical Liberal, Welfare State Liberal, Marxist/Post Marxist, etc?
↑ comment by infotropism · 2009-04-27T13:45:55.625Z · LW(p) · GW(p)
I agree with Vladimir too; you can't always pinpoint people like that.
I'd say I'm uncommitted too. By that I mean to encompass the general idea that I agree with a lot of the ideas that come from, for instance, libertarianism, and at the same time with a lot of the ideas behind communism. As I never heard of a good synthesis between the two, I stand uncommitted.
↑ comment by hrishimittal · 2009-04-27T16:14:06.678Z · LW(p) · GW(p)
You can measure time per day on OB/LW or any other app/site using RescueTime.
↑ comment by outlawpoet · 2009-04-27T16:41:58.017Z · LW(p) · GW(p)
I use ManicTime, myself
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-27T19:49:25.575Z · LW(p) · GW(p)
Anybody tried both of these? I think everyone should use similar software. It's an incredibly low-cost route to more self-knowledge and discipline.
comment by Mike Bishop (MichaelBishop) · 2009-04-29T02:26:08.747Z · LW(p) · GW(p)
When looking for a long-term romantic partner: How important is intelligence? How important is a consequentialist moral outlook? How important is rationality?
↑ comment by Alicorn · 2009-04-29T02:28:13.442Z · LW(p) · GW(p)
What about people who would find a consequentialist moral outlook in a potential partner negatively motivating? We'd give the same "how important" answer as someone who found it a positive trait.
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-29T04:34:54.724Z · LW(p) · GW(p)
good point
comment by CannibalSmith · 2009-04-27T14:29:59.878Z · LW(p) · GW(p)
"Do you think that the so called Dark Arts are inherently evil and should not be taught, learned and used by us? Why?"
comment by Emile · 2009-04-27T10:10:31.133Z
Are you an active member of other communities built around X, where X is one of the following? (check all that apply)
- Atheism / Skepticism
- Pick-up artists
- Fandom (SF, anime, webcomics, whatever)
- Political debate
- Political activism
- Technology (programming, electronics ...)
- Transhumanism / the singularity
- Entrepreneurship
- Wikipedia
- Free software
- Environmentalism
- Self-help / self-improvement
- Religion
- Science
- Other
(Does someone have a better way of formulating this question?)
comment by Mike Bishop (MichaelBishop) · 2009-04-27T05:02:44.534Z · LW(p) · GW(p)
nutritional supplements, diet, exercise habits, height and weight
↑ comment by Scott Alexander (Yvain) · 2009-04-27T15:03:35.143Z · LW(p) · GW(p)
Please tell me the exact questions you want. Also, why height and weight? I appreciate wanting to know what effect diet has on weight, but that's way beyond the scope of this survey.
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-27T18:55:54.895Z · LW(p) · GW(p)
We don't want the survey to take too long, so maybe height and weight shouldn't be priorities, but there are a lot of reasons, none of them of overwhelming importance, that people might be interested: 1. As you said, the effect of diet on BMI. 2. Simply to describe who we are. 3. BMI as a very, very imperfect measure of akrasia.
comment by Mike Bishop (MichaelBishop) · 2009-04-27T04:58:53.190Z · LW(p) · GW(p)
Many questions pose a low risk of identity disclosure. For the few questions that pose a high risk, let people check a box which says "turn this answer into missing data before placing it in the public domain."
comment by gjm · 2009-04-26T23:13:33.035Z · LW(p) · GW(p)
The "inapplicable" option for the Robin-versus-Eliezer singularity question should be phrased in stronger terms than "don't believe in"; someone could easily (say) think that there's a 5% chance of a technological singularity some time, of which 4% is accounted for by one option and 0.5% by the other (and another 0.5% by all others together). But wouldn't it be better just to ask for probability estimates for (1) Robin-style singularity, (2) Elizer-style singularity, and (3) any singularity?
For this, and also for the Three Worlds Collide question, there should be at least one URL.
comment by CannibalSmith · 2009-04-27T14:32:48.824Z · LW(p) · GW(p)
The survey should ask country of residence.
comment by MrShaggy · 2009-04-27T11:53:25.366Z · LW(p) · GW(p)
Perhaps there should be a short survey and a full survey? Or every question (or most, other than demographics) could have "no answer" as an already-marked default? It's a pretty intensive survey unless you spend a lot of time here, I think.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-04-27T19:05:37.665Z · LW(p) · GW(p)
Agreed, as survey length rises, survey response rate falls.
I recommend making two or more surveys. The first one should take less than 5 minutes and we should push everyone, including non-commenters to fill it out.
We should use our handles, or another ID, to link the data from multiple surveys.
comment by hirvinen · 2009-04-27T01:13:09.797Z · LW(p) · GW(p)
Looking into U.S. political parties, especially beyond the big two, doesn't look like a good use of my time. Consider replacing that with the scores from the World's Smallest Political Quiz.
↑ comment by Jack · 2009-04-27T01:41:12.701Z · LW(p) · GW(p)
Those sorts of questions aren't bad ideas but I've become fairly confident that that quiz is designed to recruit more libertarians, not accurately place anyone's political views. This is a better, though longer, political view quiz.
↑ comment by hirvinen · 2009-04-27T02:03:28.141Z · LW(p) · GW(p)
Strongly disagree on Political Compass being better. The questions are heavily loaded, the very first question being
If economic globalisation is inevitable, it should primarily serve humanity rather than the interests of trans-national corporations.
and many questions such as
Astrology accurately explains many things.
aren't at all about what should be done or what should be the state of things. What are you going to infer about my political beliefs based on my answer to that?
(Edited to fix formatting.)
↑ comment by Jack · 2009-04-27T02:18:24.364Z · LW(p) · GW(p)
Questions are loaded in different directions (in comparison to the World's Smallest, where all the questions are loaded in the libertarian direction), so the results balance each other out. Admittedly there are some questions that I wouldn't immediately think would indicate anything about my political beliefs, but it seems to accurately place people - at least those I've talked to. Have you taken it and felt that your placement was wrong?
I have no doubt we could come up with a quiz better than either of these if we wanted to put in the time.
↑ comment by hirvinen · 2009-04-27T02:37:38.506Z · LW(p) · GW(p)
The Political Compass seems to me, based on my own and friends' experiences, to have a strong pressure towards the lower left corner. As one of them said, "you would have to want to sacrifice babies to corporations to end up in the upper right corner."
The World's Smallest Political Quiz isn't entirely neutral, but to me it would seem to spread people much more evenly, and importantly all questions are clearly on the two axes along which it measures political stance.
↑ comment by Jack · 2009-04-27T03:14:01.755Z · LW(p) · GW(p)
Pressure in that direction is definitely possible, given that that's where most of my friends think they belong anyway. Though it's strange, then, that they place the entire American political spectrum in the upper right. I'll reconsider my position on it.
But the World's Smallest isn't a suitable alternative. It's just packed with weasel words, and it's going to obscure a lot of differences, because a lot of people are going to answer "Maybe sometimes" even if they lean heavily in one direction or the other. Also, the fact that this community is probably skewed libertarian anyway is just going to make it harder to interpret the results. The last thing we need is a poll that will automatically confirm our assumptions about this group's political views.
↑ comment by Gordon Seidoh Worley (gworley) · 2009-04-27T11:21:49.806Z · LW(p) · GW(p)
Although it is designed as part of libertarian recruitment and is used to start discussions with people about freedoms they already support and then "draw them in" by gradually exposing them to other ideas, the reality of the data is that not many people score libertarian (the Web data isn't very accurate because you get a lot more libertarians visiting the site).
In my younger years I did some tabling for the Libertarian Party, giving the quiz and letting people place a dot on a blow-up of the quiz grid to mark how they scored and compare with others. And I have to say, in all that time, I did not encounter one person out of several hundred who scored libertarian and was not already a card-carrying member. In fact, if anything, most people score down in the authoritarian range.
This is just the data set I've collected, though. Maybe there is a better one out there than the one you can find from the online version of the quiz.
↑ comment by Jack · 2009-04-27T14:26:21.986Z · LW(p) · GW(p)
Maybe. But they did a version that was less obviously biased through an actual polling firm and got these results which represent libertarians as eight times more common than the number of people who identify as libertarian. Now maybe there really are those kinds of sympathies for the libertarian position, I'm not sure. But it doesn't give me a lot of confidence since the one online is worse.
But even if it doesn't skew libertarian, it still lumps way too many people together as centrist for it to be particularly useful. And any test that labels people as "authoritarian" (a term usually reserved for totalitarian regimes) is pretty ridiculous.
comment by JulianMorrison · 2009-04-27T00:40:41.866Z · LW(p) · GW(p)
You should survey superstition (astrology, bad luck avoidance, complementary medicine, etc).
↑ comment by dclayh · 2009-04-27T00:45:30.462Z · LW(p) · GW(p)
Is "complementary" medicine the new euphemism for alternative/natural/Eastern/not-tested-with-science medicine? I haven't heard of it before.
↑ comment by JulianMorrison · 2009-04-27T00:48:13.060Z · LW(p) · GW(p)
Not so new, but yup.
comment by gjm · 2009-04-26T23:42:59.463Z · LW(p) · GW(p)
Robin Hanson notoriously thinks that most medicine does little or no good. I'd guess that he opposes large-scale socialized medicine on these grounds, though that's not a foregone conclusion and I don't think I've seen an explicit statement from him about this. It's probably more usual to think that medicine is great and we should all have easier access to more of it. How about a question somewhere in this vicinity?
↑ comment by Scott Alexander (Yvain) · 2009-04-27T06:25:46.511Z · LW(p) · GW(p)
Yes, but how can we phrase this rigorously? "Medicine does little good" seems too open to interpretation.
↑ comment by Zvi · 2009-04-28T01:14:52.404Z · LW(p) · GW(p)
There are a few options that come to mind, none of them perfect. One basic one is to ask how much we should be spending on health care; the risk here is if you think there is counterfactual effective medical spending. Another is what we feel is the marginal cost to current medicine of an additional year of life or healthy life, which could also be compared to what people think that year, or a life saved, is worth. "What percentage of the current investment in medicine has a substantial benefit to the patient?" is a way to try to measure this directly rather than indirectly.
↑ comment by SoullessAutomaton · 2009-04-27T00:16:11.178Z · LW(p) · GW(p)
I'd guess that he opposes large-scale socialized medicine on these grounds, though that's not a foregone conclusion and I don't think I've seen an explicit statement from him about this.
I vaguely recall Robin noting that socialized medicine (as implemented in other countries) tends to reduce both supply of medical treatment and money spent on such, so I'd actually expect that he would weakly support it in the sense of "more optimal than the current system". I could be wrong, though.
However, I'm pretty sure he feels that other options would be superior.
comment by gjm · 2009-04-26T23:22:35.882Z · LW(p) · GW(p)
You have a question about the Singularity, but none about the more general question of artificial general intelligence. It is at least possible to expect (e.g.) that humanish-level AI will become possible in the next century but that it will not lead to a technological singularity; for instance, someone who expects that the days of exponential performance improvements in computing are almost behind us and that the road to AI will be via full-brain simulation with rather little understanding might well have that expectation. So there would be some value -- I don't know whether enough to justify the extra length of the survey -- in having a question about AI as well as one about the Singularity.
comment by michael · 2009-04-26T22:27:51.234Z · LW(p) · GW(p)
Why ask for political parties? Political views are complicated; if all you can do is pick a party, this complexity is lost. All those not from the US (like myself) might additionally have a hard time picking a party.
Those are not easy problems to solve, and it is certainly questionable whether thinking of some more specific questions about political views and throwing them all together will get you meaningful, reliable, and valid results. As long as you cannot do better than that, asking just for preferred political parties is certainly good enough.
↑ comment by outlawpoet · 2009-04-26T22:40:55.659Z · LW(p) · GW(p)
Yes, it might be more useful to list some wedge issues that usually divide the parties in the US.
↑ comment by JulianMorrison · 2009-04-27T00:37:33.663Z · LW(p) · GW(p)
Those won't divide the parties outside the US. Every political party in Britain aside from the extreme fringe is for the availability of abortion and government provision of free healthcare, for example.
And things that do divide the parties here, like compulsory ID cards, don't divide the parties in the US.
↑ comment by outlawpoet · 2009-04-27T00:46:28.466Z · LW(p) · GW(p)
I'm not really interested in actual party divisions so much as I am interested in a survey of beliefs.
Affiliation seems like much less useful information, if we're going to use Aumann-like agreement processes on this survey stuff.
comment by Mike Bishop (MichaelBishop) · 2009-04-29T02:38:14.360Z · LW(p) · GW(p)
I'd be interested in music taste and sports participation as well...maybe on a secondary survey which asks about hobbies.
comment by Aurini · 2009-04-27T20:13:40.565Z · LW(p) · GW(p)
I'd suggest framing the "How religious was your family" question in a specific cultural context. For example, my family was 'Average Religious' for Canada, but from what I've gathered about the United States, that would make them less religious than normal.
Also, I'd be interested to learn what percentage of the members here own weapons for self defense (as opposed to decorative, or other purposes). I'd also suggest the term 'weapons' over 'guns,' once again due to many members living outside of the United States.
comment by Cameron_Taylor · 2009-04-27T15:45:30.659Z · LW(p) · GW(p)
When considering the impact on your success and quality of life, how useful to you is a dedicated emphasis on improving rationality?
- Dramatically improved my life
- Somewhat useful
- Irrelevant
- Somewhat of a hindrance
- A significant disadvantage
↑ comment by Scott Alexander (Yvain) · 2009-04-27T15:56:38.793Z · LW(p) · GW(p)
I thought Anna already covered that very well. Is there some reason you want to know this as part of an interaction with the other questions on the survey?
comment by Emile · 2009-04-27T10:31:48.225Z · LW(p) · GW(p)
How educated do you consider yourself on the following topics:
- Economics
- World affairs / political geography
- Energy / climate change / pollution / ecology
- Law
- Psychology / neurology
- Biology / medical science
- Artificial intelligence
- Maths and Physics
(any others of interest? These are biased towards important or frequent topics here)
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-29T02:30:01.520Z · LW(p) · GW(p)
Split psychology from neuroscience
↑ comment by Paul Crowley (ciphergoth) · 2009-04-27T10:43:07.204Z · LW(p) · GW(p)
Maths and physics deserve finer classification: specific areas of interest might include
- Probability and statistics
- QM
- Complexity theory
- not sure what the word is for the field that would cover Gödel, Turing, Kolmogorov, Chaitin, etc.
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-29T02:47:59.954Z · LW(p) · GW(p)
The possible responses should be fairly concrete: a) I know less on this topic than an average undergraduate major at an average U.S. university, e.g. Michigan State
to f) I am making research contributions to the cutting edge of this field
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-29T02:35:09.262Z · LW(p) · GW(p)
Make a general category for Humanities, and make Applied Statistics distinct from Math.
Added: Make a separate category for Philosophy as well.
comment by hirvinen · 2009-04-27T02:58:23.897Z
Does the first AGI have to be Friendly, or are we screwed?
↑ comment by Paul Crowley (ciphergoth) · 2009-04-27T08:10:31.339Z · LW(p) · GW(p)
We'll probably be discussing that from Friday on - there's a bar on such discussions before then...
comment by gjm · 2009-04-26T23:17:00.087Z · LW(p) · GW(p)
As others have said, (1) party affiliation is an oversimplification of political beliefs, but (2) many, many people do broadly hew to one or another party line. But precisely because #2 is true, you can get much the same information as you do from "Democrat or Republican?" by asking one or two questions addressed more directly to the issues.
At least once in the past (probably much more often) researchers have done a political survey, done a principal-component analysis (or something similar) on the results, and published their conclusions about the two-ish main collections of issues on which people in their sample divide. You might want to look up at least one such case (unfortunately I remember no details, and I have no reason to believe that my google-fu is stronger than yours) and use that to decide what actual question(s) to ask.
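For what it's worth, the analysis described above is straightforward to run once answers are coded numerically. A sketch on made-up random data (the coding scheme and the use of scikit-learn are my assumptions, not details from any actual study):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 respondents x 10 issue questions, coded -2 (strongly disagree)
# through +2 (strongly agree); random stand-in for real survey answers.
responses = rng.integers(-2, 3, size=(200, 10)).astype(float)

pca = PCA(n_components=2)
scores = pca.fit_transform(responses)   # each respondent placed on 2 axes
print(pca.explained_variance_ratio_)    # variance captured by each axis
print(pca.components_[0])               # issue loadings on the first axis
```

On real responses, the loadings on the first component or two would suggest which one or two composite questions carry most of the party-affiliation signal.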
comment by gjm · 2009-04-26T23:10:06.127Z · LW(p) · GW(p)
Have a note explicitly inviting people to add noise to their karma scores. Noisy karma scores are less useful for identification, though of course if anyone says "about 3000" and you believe them then it doesn't leave much room for doubt about who it is.
comment by Mike Bishop (MichaelBishop) · 2009-05-06T16:28:13.767Z · LW(p) · GW(p)
Occupation, income, self-perceived success in relationships and career, life satisfaction, experience of depression. Parents' education, income, and rationality. High school popularity; comfort and success interacting with people from different social/cultural groups.
comment by dfranke · 2009-04-27T18:54:56.120Z · LW(p) · GW(p)
On the Newcomb question I think you should have an option for Wittgenstein-like positions, i.e. "the premise of the question contains hidden contradictions". I'd offer the same option for the other similarly-formatted questions, although I'm not aware of anyone making any such assertion about PD.
comment by Cameron_Taylor · 2009-04-27T15:50:28.356Z · LW(p) · GW(p)
In what aspect of life is improving your rational thinking most useful?
- Professional success
- Scientific and or technical achievement
- Social life
- Personal development (self improvement, Cognitive Behavioral Therapy, etc)
- Important life decisions.
comment by gjm · 2009-04-26T23:26:09.350Z · LW(p) · GW(p)
It would be interesting to have at least one question in the general domain of economic/political forecasting, for two reasons: (1) such questions have some practical interest; (2) they can be tested later. (Especially valuable if we ask for confidence limits or something, so that calibration as well as accuracy can be tested.)
I don't have strong feelings about which such questions would be best; anyone who agrees with me that there should be such questions, and does have strong feelings about what they should be, can put them in replies to this comment. Random examples: length and depth of the current recession; likely relative importance of (say) US, China, Europe, India in 20 years' time.
comment by Lawliet · 2009-04-26T22:33:02.184Z · LW(p) · GW(p)
I would like to see the results made public, as well as seeing more surveys in general.
Don't have a good indicator of how many people would worry about public data, but as the survey-taking group size increases (as I presume will happen over time on LW) it should become easier to remain unidentifiable.
Plenty of people voluntarily fill out surveys about themselves on social networking sites, and those of us concerned with anonymity probably wouldn't be filling them out either way.
↑ comment by byrnema · 2009-04-27T15:59:27.658Z · LW(p) · GW(p)
Don't have a good indicator of how many people would worry about public data,
Some people are easier to identify than others (for example, if you're female or from a particular country) and any person may feel uncomfortable about a particular question, so that even marginal concern about being identified with an odd view may skew results.
Consider making the data public in a way that gives the complete set of answers to each question, but doesn't allow comparison of how one person answered multiple questions. (I'm sure there's an easy way to say this; I don't know it.) So, in other words, you can't tell that the person who answered "karma = -16" also answered "yes" to "superstitious".
Any cross-correlations, of course, would need to be computed using the original, publicly unavailable data.
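A sketch of one way to implement that release, via independent column-wise shuffling (the field names and values are invented for illustration): each question's answer distribution is preserved, but the row-wise links are destroyed.

```python
import random

rows = [  # invented example records
    {"karma": -16, "superstitious": "yes", "country": "US"},
    {"karma": 250, "superstitious": "no",  "country": "FI"},
    {"karma": 12,  "superstitious": "no",  "country": "UK"},
]

def decouple(rows):
    """Shuffle each column independently, breaking cross-question links."""
    columns = {key: [r[key] for r in rows] for key in rows[0]}
    for values in columns.values():
        random.shuffle(values)
    return [dict(zip(columns, vals)) for vals in zip(*columns.values())]

print(decouple(rows))  # same marginal answers per question, linkage gone
```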
comment by TheThinKing · 2009-04-26T22:16:49.765Z · LW(p) · GW(p)
There is plenty of literature out there about how groups can go wrong. We need to make sure we do not fall victim to those traps. What are ways we can identify and avoid known pitfalls?
Perhaps we should include some questions about the perception of the community: diversity of viewpoints, strength of conformity, how much you personally identify with the group, things of that nature. These answers could be useful for self-diagnostic purposes, both for the group itself as well as its individual members comparing their answers against others in the group.
↑ comment by Scott Alexander (Yvain) · 2009-04-26T22:24:29.771Z · LW(p) · GW(p)
I'm not sure what you mean. Can you suggest some?
comment by outlawpoet · 2009-04-26T22:00:03.919Z · LW(p) · GW(p)
I found the last survey interesting because of the use of ranges and confidence measures. Are there any other examples of this that a community response would be helpful for?
comment by Emile · 2009-04-27T10:00:33.639Z · LW(p) · GW(p)
MBTI type (may not be the most "scientifically valid", but it's probably the one most people would know).
↑ comment by dfranke · 2009-04-27T19:35:56.633Z · LW(p) · GW(p)
Useless, I think. For any site that caters to the hacker/libertarian/technophile cluster, the results are invariably dominated by INTJs and INTPs with a few ENTJs, and everything else put together being in the single digits. The meaning of the specific numbers we get for this site will be completely drowned out by sampling bias and the general imprecision of the test.
↑ comment by MBlume · 2009-04-27T19:38:32.777Z · LW(p) · GW(p)
Actually, the test I took identified me as a feeling type, ENFP.
ETA: Though it could be relevant that I took the test after reading Feeling Rational, which has made a lot of standard quiz questions about thinking vs. feeling read like nonsense to me.