Hard philosophy problems to test people's intelligence?
post by Solvent · 2012-02-15T04:57:39.960Z · LW · GW · Legacy · 36 comments
I'm looking for hard philosophical questions to give to people to gauge their skill at philosophy.
So far, I've been presenting people with Newcomb's problem and the Sleeping Beauty problem. I've also been presenting them with contrarian opinions and asking them to evaluate them, and I have a higher opinion of them if they avoid just icking away from the subject.
What other problems should I use?
36 comments
Comments sorted by top scores.
comment by Paul Crowley (ciphergoth) · 2012-02-15T08:15:59.763Z · LW(p) · GW(p)
What query are you trying to hug?
Replies from: Solvent
↑ comment by Solvent · 2012-02-15T20:56:44.333Z · LW(p) · GW(p)
I'm trying to test their philosophical ability. Some people immediately and intuitively notice bad arguments and spot good ones.
Replies from: ciphergoth, Zetetic
↑ comment by Paul Crowley (ciphergoth) · 2012-02-16T17:46:42.721Z · LW(p) · GW(p)
What decision rests on the outcome of your test?
↑ comment by Zetetic · 2012-02-15T21:34:04.353Z · LW(p) · GW(p)
I think there's a problem with your thinking on this - people can spot patterns of good and bad reasoning. Depending on the argument, they may or may not notice a flaw in the reasoning for a wide variety of reasons. Someone who is pretty smart probably notices the most common fallacies naturally - they could probably spot at least a few while watching the news or listening to talk shows.
People who study philosophy are going to have been exposed to many more diverse examples of poor reasoning, and will have had practice identifying weak points and exploiting them to attack an argument. This increases your overall ability to dissolve or decompose arguments by increasing your exposure and by equipping you with a bag of heuristic tricks. People who argue on well-moderated forums or take part in discussions on a regular basis will likely also pick up some tricks of this sort.
However, there are going to be people who can dissolve one problem but not another, because they have been exposed to something sufficiently similar to the one (and thus probably have some cached details relevant to solving it) but not to the other:
E.g. a student of logic will probably make the correct choice in the Wason Selection Task and may be able to avoid the conjunction fallacy, but may two-box because they fall into the CDT reasoning trap. However, a student of the sciences or statistics may slip up in the selection task but one-box, by following the EDT logic.
So if you're using this approach as an intelligence test, I'd worry about committing the fundamental attribution error pretty often. However, I doubt you're carrying out this test in isolation. In practice, it probably is reasonable to engage people you know or meet in challenging discussions if you're looking for people that are sharp and enjoy that sort of thing. I do it every time I meet someone who seems like they might have some inclination toward that sort of thing.
It might help if you provide some context though - who are you asking and how do you know them? Are you accosting strangers with tricky problems or are you probing acquaintances and friends?
comment by Richard_Kennaway · 2012-02-15T13:42:36.993Z · LW(p) · GW(p)
How did you respond to Newcomb and Sleeping Beauty the first time you encountered them, before reading any discussion of them?
Replies from: Solvent
comment by J_Taylor · 2012-02-15T07:08:44.946Z · LW(p) · GW(p)
What in the world is "skill at philosophy"?
I've also been presenting them with contrarian opinions and asking them to evaluate them, and I have a higher opinion of them if they avoid just icking away from the subject.
You have a higher opinion of people who make socially foolish decisions?
Replies from: None, DuncanS
↑ comment by [deleted] · 2012-02-15T07:32:06.734Z · LW(p) · GW(p)
.
Replies from: J_Taylor
↑ comment by J_Taylor · 2012-02-18T02:34:11.609Z · LW(p) · GW(p)
I bestow a higher likelihood of long-term closeness on persons who "avoid just icking away from the subject."
Oh, I apologize. I entirely misread what you were doing, I think.
I sorta think you can't possibly disagree with this, or you wouldn't be here.
Um... kind of? I guess it depends on what sort of contrarian opinions you were sharing and what sort of setting you were doing it in.
The latter part assumed you were mainly replying to the second question I asked. I apologize for the bluntness of those questions, also. However, I would like to clarify my first question slightly.
When I see the phrase "skill at philosophy" it makes me think of professional philosophers. You probably are not trying to test for the kinds of skills which are found in professional philosophers, because most of these skills cannot be tested through informal questioning. I now realize that you were trying to test for, I think, the ability to think logically about philosophical topics and openness to unpopular ideas. Sorry for the misinterpretation.
↑ comment by DuncanS · 2012-02-15T19:33:37.554Z · LW(p) · GW(p)
What in the world is "skill at philosophy"?
On the other hand, I suspect that it is possible to rank people according to their skill at philosophy, and come up with an ordering that's reasonably widely agreed, as long as the points are not too close. Just for fun, here's a few to rank...
So I guess there is such a thing.
Replies from: Risto_Saarelma
↑ comment by Risto_Saarelma · 2012-02-16T08:42:40.809Z · LW(p) · GW(p)
Beyond the obvious signaling opportunity of saying that creationists are the worst people ever, I'm not having an easy time figuring out which way the ranking should go between a celebrity who appears to be totally apathetic towards philosophy and a creationist apologist who is enthusiastically doing very bad philosophy.
I also wonder how much agreement there would be if we tried to establish the ranking between Richard Dawkins and Jerry Fodor.
Replies from: J_Taylor
comment by MileyCyrus · 2012-02-15T05:08:51.498Z · LW(p) · GW(p)
Are you looking for problems with a counter-intuitive, yet widely accepted answer among academics?
Replies from: Solvent
↑ comment by Solvent · 2012-02-15T05:14:16.285Z · LW(p) · GW(p)
Well, I'm mostly using these on people who haven't read much or any philosophy, so those would work. That said, I think that a lot of smart people can get to the right answer even when there isn't any consensus in the philosophical community.
Replies from: shminux, Luke_A_Somers
↑ comment by Shmi (shminux) · 2012-02-15T06:43:54.882Z · LW(p) · GW(p)
If there is no consensus, how do you know what answer is "right"? Surely if it was a simple matter of computation or logic, there would be a consensus.
Replies from: Jayson_Virissimo, Manfred, MileyCyrus
↑ comment by Jayson_Virissimo · 2012-02-15T08:28:20.742Z · LW(p) · GW(p)
As far as I can tell, he is judging "rightness" by how closely it approximates Less Wrong doctrine.
Replies from: Karmakaiser
↑ comment by Karmakaiser · 2012-02-17T16:16:48.537Z · LW(p) · GW(p)
There are so many places where someone's thinking could be biased or incomplete that, if one is going to take these questions seriously, I think a heuristic approach would be more helpful than seeing whether someone independently comes to your conclusion.
Off the top of my head, I would give points for trying to falsify their own position, taking into account human bias (if they already have knowledge of the literature on bias), asking clarifying questions instead of going with an incomplete interpretation of the problem, a willingness to accept criticism when it is correct, and a willingness to brush aside badly constructed criticism.
↑ comment by MileyCyrus · 2012-02-15T07:35:26.165Z · LW(p) · GW(p)
A standard Bayesian problem would work great. I paid my 13-year-old nephew $1 to solve one.
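For concreteness, here is a minimal sketch of the kind of Bayesian problem this might mean; the disease-test numbers are illustrative assumptions of mine, not necessarily the problem the commenter actually used:

# A disease affects 1% of people. A test catches 90% of cases (sensitivity)
# and gives a false positive 9.6% of the time. Given a positive result,
# how likely is it that the person actually has the disease?
prior = 0.01              # P(disease)
sensitivity = 0.90        # P(positive | disease)
false_positive = 0.096    # P(positive | no disease)

# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")   # about 0.087

Most people intuitively guess a much higher number, which is what makes this kind of problem a reasonable quick probe of probabilistic reasoning.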
Also: If you call a tail a leg, how many legs does a horse have?
Replies from: JenniferRM, David_Gerard
↑ comment by JenniferRM · 2012-02-16T00:13:34.078Z · LW(p) · GW(p)
Be careful how you reward people for mental tasks if you care about the long term cultivation of their mind.
↑ comment by David_Gerard · 2012-02-15T08:55:14.674Z · LW(p) · GW(p)
If you call a tail a leg, how many legs does a kangaroo have? If you call an arm a leg, how many legs does a human have? There's a whole sequence on the trouble with putting too much store in the meanings assigned to words.
↑ comment by Luke_A_Somers · 2012-02-15T16:53:35.494Z · LW(p) · GW(p)
I'd settle for a well-thought-out answer, even if it's not the one I agree with.
comment by JonathanLivengood · 2012-02-16T06:50:09.352Z · LW(p) · GW(p)
Searle's Chinese Room is a great (awful) case to test out how well people think. The argument can be attacked (successfully) in so many different ways, it is a good marker of both ability to analyze an argument and ability to think creatively. Even better if after your interlocutor kills the argument one way, you ask him or her to kill it another, different way. (Then repeat as desired.)
Replies from: skepsci, Dmytry
↑ comment by skepsci · 2012-02-16T09:43:14.701Z · LW(p) · GW(p)
What do you mean by "great (awful)"? Do you mean that the thought experiment itself is an awful argument against AI, but describing the argument is a good way to test how people think?
Replies from: JonathanLivengood, skepsci
↑ comment by JonathanLivengood · 2012-02-16T17:52:05.328Z · LW(p) · GW(p)
Yes, that's exactly what I mean. The argument itself is terrible. But it invites so many reasonable challenges that it is still very useful as a test of clear thinking. So, awful argument; great test case.
↑ comment by Dmytry · 2012-02-16T09:59:58.554Z · LW(p) · GW(p)
Ya.
Picture a room larger than the Library of Congress that takes a million years to answer the simplest question, and the argument entirely dissolves. Imagine the nonsense the way Searle wants you to (a small room, talking fast enough), take the possibility of such a room as a postulate, and you will have built yourself a logically inconsistent system* in which you can prove anything, including the impossibility of AI.
*Postulating that, say, a good old ZX Spectrum can run a human-mind-equivalent intelligence in real time on 128 kilobytes of RAM is ultimately postulating a mathematical impossibility, and you should in principle be able to get all the way to 1=2 from there.
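To make the footnote's scale mismatch concrete, here is a rough back-of-envelope sketch; the synapse count and the one-byte-per-synapse figure are crude illustrative assumptions, chosen only to show the orders of magnitude involved:

# Crude lower bound on the state needed to represent a human brain,
# compared with the memory the footnote's hypothetical machine has.
synapses = 1e14               # rough order-of-magnitude estimate (assumption)
bytes_per_synapse = 1         # very generous simplification (assumption)
brain_state_bytes = synapses * bytes_per_synapse

zx_spectrum_ram_bytes = 128 * 1024   # the 128 KB from the footnote

print(f"crude brain-state estimate: {brain_state_bytes:.1e} bytes")
print(f"ZX Spectrum RAM:            {zx_spectrum_ram_bytes:.1e} bytes")
print(f"shortfall factor:           {brain_state_bytes / zx_spectrum_ram_bytes:.1e}")
# Under these assumptions the postulated machine is short by a factor of
# roughly 10^9, which is the sense in which the premise smuggles in an impossibility.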
Replies from: JonathanLivengood
↑ comment by JonathanLivengood · 2012-02-16T18:07:45.421Z · LW(p) · GW(p)
I'm not sure I understand the Library of Congress bit, but the footnote is exactly right. Even so, that is only one way of resisting Searle's argument. The point for me is that we can measure cleverness to some tolerance by how many ways one finds to fault the argument. For example:
a. The architecture is completely wrong. People don't work by simple look-up tables.
b. Failure of imagination. We are asked to imagine something that passes the Turing test. Anyone convinced by the argument is probably not imagining that premiss vividly enough.
c. The argument depends on a fallacy of division/composition. Searle argues that the system does not understand Chinese since none of its parts understand Chinese. But some humans understand Chinese, and it is implausible that any individual human cell understands Chinese. So, the argument is logically flawed.
d. In order to have an interactive conversation, the room needs to have something like a memory or history. Understanding isn't just about translation but about connecting language to other parts of life.
e. Similarly to (d), the room is not embodied in any interesting way. The room has no perceptual apparatus and no motor functions. Understanding is partly about connecting language to the world. Intelligence is partly about successful navigation in the world. Connect the room to a robot body and then present the case again.
...
Further challenges could be given, I think. But you get the idea.
Replies from: Dmytry
↑ comment by Dmytry · 2012-02-16T18:51:00.676Z · LW(p) · GW(p)
I meant that the room has to store many terabytes of information, very well organized too (a state dump of a Chinese-speaking person). It's a very big room, library-sized, and an enormous amount of paper gets processed, over an enormous timespan, before it says anything.
The argument relies on imagining a room that couldn't possibly have understood anything; imagine the room 'to scale', with the timing to scale, and the assertion that the room couldn't possibly have understood anything loses ground.
There's another argument like the Chinese Room, about a giant archive of answers to all possible questions. It works by severely under-imagining the size of the archive, too.
Replies from: JonathanLivengood
↑ comment by JonathanLivengood · 2012-02-16T20:33:04.724Z · LW(p) · GW(p)
There's another argument like the Chinese Room, about a giant archive of answers to all possible questions. It works by severely under-imagining the size of the archive, too.
Agreed.
comment by DanielLC · 2012-02-15T05:35:59.429Z · LW(p) · GW(p)
By "hard problem" do you mean harder than "If a tree falls in a forest does it make a sound?" or as hard as the hard problem of consciousness?
Would a Star Trek style teleporter teleport you or result in a new person (in a universe where you can be made of different atoms)? What if it creates the duplicate without destroying the original? Is there any action you can take that preserves identity?
Trolley problem. For that matter, utilitarianism vs. deontological ethics.
Copenhagen vs. Many Worlds. Many Worlds vs. Timeless. Those require an understanding of quantum physics, though.
Replies from: Solvent, Incorrect