Imitation is the Sincerest Form of Argument
post by palladias · 2013-02-18T17:05:10.817Z · LW · GW · Legacy · 96 comments
I recently gave a talk at Chicago Ideas Week on adapting Turing Tests to have better, less mindkill-y arguments, and this is the precis for folks who would prefer not to sit through the video (which is available here).
Conventional Turing Tests check whether a programmer can build a convincing facsimile of a human conversationalist. The test has turned out to reveal less about machine intelligence than human intelligence. (Anger is really easy to fake, since fights can end up a little more Markov chain-y, where you only need to reply to the most recent rejoinder and can ignore what came before). Since normal Turing Tests made us think more about our model of human conversation, economist Bryan Caplan came up with a way to use them to make us think more usefully about our models of our enemies.
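To make the Markov-chain point concrete, here is a toy sketch (purely illustrative; the keyword table and canned retorts are invented, and no real chatbot is claimed to work this way):

```python
import random

# Toy illustration: an angry exchange is nearly Markov -- the next retort
# depends only on the most recent message, so a trivial keyword -> retort
# table can keep a fight going with no memory of the conversation.
RETORTS = {
    "wrong": ["You're the one who's wrong!", "Oh, I'M wrong?"],
    "stupid": ["That's rich, coming from you.", "Real mature."],
    "always": ["Here we go with 'always' again.", "You exaggerate everything."],
}
DEFAULT = ["Whatever.", "You just don't get it, do you?"]

def angry_reply(last_message: str) -> str:
    """Reply using only the most recent message (the Markov property)."""
    for keyword, options in RETORTS.items():
        if keyword in last_message.lower():
            return random.choice(options)
    return random.choice(DEFAULT)

print(angry_reply("You're always wrong about this."))
```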
After Paul Krugman disparaged Caplan's brand of libertarian economics, Caplan challenged him to an ideological Turing Test, where both players would be human, but would be trying to accurately imitate each other. Caplan and Krugman would each answer questions about their true beliefs honestly, and then would fill out the questionnaire again in persona inimici - trying to guess the answers given by the other side. Caplan was willing to bet that he understood Krugman's position well enough to mimic it, but Krugman would be easily spotted as a fake!Caplan.
Krugman didn't take him up on the offer, but I've run a couple iterations of the test for my religion/philosophy blog. The first year, some of the most interesting results were the proxy variables people were using, which weren't as strong indicators as the judges thought. (One Catholic coasted through to victory as a faux atheist, since many of the atheist judges thought there was no way a Christian would appreciate the webcomic SMBC).
The trouble was, the Christians did a lot better, since it turned out I had written boring, easy-to-guess questions for the true and faux atheists. The second year, I wrote weirder questions, and the answers were a lot more diverse and surprising (and a number of the atheist participants called out each other as fakes or just plain wrong, since we'd gotten past the shallow questions from year one, and there's a lot of philosophical diversity within atheism).
The exercise made people get curious about what it was their opponents actually thought and why. It helped people spot incorrect stereotypes of an opposing side and fault lines they'd been ignoring within their own. Personally (and according to other participants), it helped me argue less antagonistically. Instead of just trying to find enough of a weak point to discomfit my opponent, I was trying to build up a model of how they thought, and I needed their help to do it.
Taking a calm, inquisitive look at an opponent's position might teach me that my position is wrong, or has a gap I need to investigate. But even if my opponent is just as wrong as ze seemed, there's still a benefit to me. Having a really detailed, accurate model of zer position may help me show them why it's wrong, since now I can see exactly where it rasps against reality. And even if my conversation isn't helpful to them, it's interesting for me to see what they were missing. I may be correct in this particular argument, but the odds are good that I share the rationalist weak-point that is keeping them from noticing the error. I'd like to be able to see it more clearly so I can try and spot it in my own thought. (Think of this as the shift from "How the hell can you be so dumb?!" to "How the hell can you be so dumb?").
When I get angry, I'm satisfied when I beat my interlocutor. When I get curious, I'm only satisfied when I learn something new.
96 comments
Comments sorted by top scores.
comment by Shmi (shminux) · 2013-02-18T17:59:41.316Z · LW(p) · GW(p)
Documenting my mental processes after reading this post (disclaimer: human introspection sucks, and mine is probably no exception):
Huh, this is one of the better versions of the Devil's advocate game I've ever encountered... Immediate upvote.
Huh, the poster analyzed their mistakes, learned from them and improved the challenge. Too bad I only have one upvote.
Clicking on the links... WTF, this is the girl who converted to Christianity (Catholicism? Really? Out of all the options available?) from Atheism a year or so ago... Anything she posts deserves a downvote...
Stop! What the hell am I doing? This is, like, falling prey to several biases at once. At least I should notice that I am confused. Unable to reconcile the "obviously dumb" conversion move with this quite clever post.
Wait, this is the substance of her post, to begin with!
Deciding to definitely keep the upvote and reserve judgment until after looking through the linked posts.
↑ comment by [deleted] · 2013-02-18T19:29:13.136Z · LW(p) · GW(p)
Even God can quote Bayes when it suits him.
Still upvoted for raw cleverness, though.
Replies from: prase, shminux, Eugine_Nier↑ comment by prase · 2013-02-18T20:42:29.260Z · LW(p) · GW(p)
Bayes was a priest, after all. Now divine quote of gay Turing would be a different feat altogether.
Replies from: fubarobfusco, Qiaochu_Yuan↑ comment by fubarobfusco · 2013-02-18T22:58:13.127Z · LW(p) · GW(p)
... or polyamorous agnostic Russell, maybe?
(Also, Bayes was a Presbyterian minister — not a priest, which (in England) would imply Catholic or Anglican. It was the family trade; his father was also a minister.)
↑ comment by Qiaochu_Yuan · 2013-02-19T03:07:00.825Z · LW(p) · GW(p)
divine quote of gay Turing
I'm not sure I know how to parse this.
Replies from: wedrifid, prase, ESRogs↑ comment by wedrifid · 2013-02-19T05:57:36.105Z · LW(p) · GW(p)
divine quote of gay Turing
I'm not sure I know how to parse this.
Showing results for: Divine quotation of gay Turing
- God quoting Turing would be more remarkable than God quoting Bayes because the latter was a priest (and so already affiliated with God) while the former is notoriously homosexual (while God is allegedly violently homophobic).
↑ comment by Eugine_Nier · 2013-02-20T05:02:48.061Z · LW(p) · GW(p)
God quoting Turing would be more remarkable than God quoting Bayes because the latter was a priest (and so already affiliated with God) while the former is notoriously homosexual (while God is allegedly violently homophobic).
So? God is still willing to work with (and through) sinners.
Replies from: wedrifid↑ comment by wedrifid · 2013-02-20T05:19:31.393Z · LW(p) · GW(p)
So? God is still willing to work with (and through) sinners.
It isn't my position. Merely one I translated into well-formed English. Any questions should be directed to the original source.
↑ comment by Qiaochu_Yuan · 2013-02-19T06:01:48.202Z · LW(p) · GW(p)
The word I had trouble parsing was "of." I think ESRogs' hypothesis is probably correct, though.
Replies from: wedrifid↑ comment by wedrifid · 2013-02-19T06:08:04.362Z · LW(p) · GW(p)
The word I had trouble parsing was "of." I think ESRogs' hypothesis is probably correct, though.
That seems highly unlikely: it would make prase's comment not fit the context. I think you have been misled.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-19T06:16:08.621Z · LW(p) · GW(p)
Oh, hmm. I got confused about what ESRogs' hypothesis actually implied. Never mind. Anyway, I agree with your interpretation but still think the original phrasing was quite confusing.
Replies from: wedrifid↑ comment by Shmi (shminux) · 2013-02-18T23:10:01.572Z · LW(p) · GW(p)
Even God ...
You mean the Devil, surely.
Replies from: Manfred↑ comment by Manfred · 2013-02-18T23:41:50.049Z · LW(p) · GW(p)
Potato potato.
Replies from: pedanterrific↑ comment by pedanterrific · 2013-02-19T16:15:57.730Z · LW(p) · GW(p)
Huh, it works even better in text with undifferentiated spelling. I'll have to remember that one.
↑ comment by Eugine_Nier · 2013-02-19T03:00:40.623Z · LW(p) · GW(p)
Even God can quote Bayes when it suits him.
That would require Him to exist. ;)
Replies from: ewang↑ comment by JRMayne · 2013-02-18T20:49:31.001Z · LW(p) · GW(p)
Ha!
I think the post is excellent, and I appreciated shminux's sharing his mental walkthrough.
On that same front, I find the Never-Trust-A-[Fill-in-the-blank] idea just bad. The fact that someone's wrong on something significant does not mean they are wrong on everything. This goes the other way; field experts often believe they have similar expertise on everything, and they don't.
One quibble with the OP: I don't think a computer can pass a Turing Test, and I don't think it's close. The main issues with some past tests are that some of the humans don't try hard to be human; there should be a reward for a human who gets called a human in those tests.
Finally, I no longer understand the divide between Discuss and Main. If this isn't Main-worthy, I don't get it. If we're making Main something different... what is it?
Replies from: palladias, ESRogs, None↑ comment by palladias · 2013-02-18T22:17:18.214Z · LW(p) · GW(p)
There is a reward for Most Human Human (and a book by the same title, which I cite in the longer talk linked at the top). The computers can pass sometimes, and the author makes basically the same argument as you do -- the humans aren't trying hard enough to steer the conversation to hard topics.
↑ comment by ESRogs · 2013-02-19T05:18:49.849Z · LW(p) · GW(p)
The difference between Discussion and Main is that Main is hard to find.
If it's in Main and not Recently Promoted, I don't know how you're supposed to ever see it -- is everybody else using RSS feeds or something?
Replies from: John_Maxwell_IV, palladias↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-02-19T10:03:04.902Z · LW(p) · GW(p)
I look at the sidebar on the right or visit http://lesswrong.com/r/all/recentposts/
↑ comment by [deleted] · 2013-02-18T20:52:40.880Z · LW(p) · GW(p)
The fact that someone's wrong on something significant does not mean they are wrong on everything. This goes the other way; field experts often believe they have similar expertise on everything, and they don't.
It remains evidence, however; to ignore such is the fallacy of gray.
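A toy Bayes update makes this concrete (all numbers below are invented for illustration, not drawn from any data):

```python
# A toy Bayes update showing that "wrong about one big thing" is evidence
# about reliability elsewhere, but usually weak evidence -- not grounds
# for dismissing someone outright.

prior = 0.5  # assumed prior probability that the person's next claim is sound

# Assumed likelihoods: how often sound/unsound reasoners are badly wrong
# on some other significant question.
p_wrong_given_sound = 0.2
p_wrong_given_unsound = 0.4

# Odds-form update on observing "they were wrong about something significant".
prior_odds = prior / (1 - prior)
likelihood_ratio = p_wrong_given_sound / p_wrong_given_unsound  # 0.5
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(posterior)  # ~0.33: lowered, but far from "never trust them again"
```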
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-18T22:51:44.123Z · LW(p) · GW(p)
Yes, but it's almost certainly evidence that people on LW overweight relative to other evidence because atheism is an excessively salient feature of the local memeplex.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-02-19T03:01:35.770Z · LW(p) · GW(p)
Interesting, I was under the impression that most people around here were fairly good about not doing this. However, it's possible I haven't been paying attention recently.
comment by Scott Alexander (Yvain) · 2013-02-19T02:17:10.436Z · LW(p) · GW(p)
I'm having trouble determining the best strategy in these kinds of games, but I'm worried it's not quite to actually sound like a member of the group you're pretending to be.
For example, a liberal Christian complained that her (honest!) Christian answer did very poorly, because people associated liberalism with atheism. This suggests that the best strategy isn't necessarily to honestly list what you believe, but to list what you think a typical member of the group involved believes.
And if (for example) atheists know that the average Christian is writing about what they think the average Christian believes, then atheists in their fake entries will also write about what they think the average Christian believes.
Yes, if overdone, this is a sign of dishonesty; for example, anyone who was too stereotypical ("Yeah, I get up each day, read a selection from the Bible, check the Pope's Twitter account, then go to church, then go bomb an abortion clinic...") would be obviously fake. So the best strategy seems to be to write something kind of stereotypical, but to depart from stereotype in a few places so as to signal you're talking about a real person rather than a straw man.
But this strategy is identical for real Christians and sham Christians, which sort of defeats the purpose of the Ideological Turing Test. We're not testing whether atheists can talk like a real Christian so much as whether they can talk like a real Christian pretending to be a stereotypical Christian, which seems like a lower bar.
I'd be interested in seeing differences between this test and one in which, say, Christians were just asked to discuss their opinions on some topics without it being part of a Turing Test, and then atheists were asked to fake Christian opinions on those same topics (also interested in how those same just-discuss entries would do against Christians-writing-for-a-Turing-test entries).
Interestingly, the entry that I was most convinced was Christian - and I was right - was one that included the phrase "and when I was in seminary...". I didn't expect any atheist to have the chutzpah to fake a priest, whereas I did expect some actual priests to read Leah's blog. This suggests that a winning strategy is to be stereotypical in unexpected ways fakers wouldn't think of, and possibly to be unstereotypical in unexpected ways fakers wouldn't think of (although obviously I can't think of any examples of this).
Replies from: palladias, ChristianKl, None, Eugine_Nier, DanArmak, Luke_A_Somers, metatroll↑ comment by palladias · 2013-02-19T02:35:47.851Z · LW(p) · GW(p)
The atheists and Christians were told to be honest when writing their own responses. So they shouldn't have been trying to game it in this way.
For year three, I've been thinking of doing just this:
I'd be interested in seeing differences between this test and one in which, say, Christians were just asked to discuss their opinions on some topics without it being part of a Turing Test, and then atheists were asked to fake Christian opinions on those same topics
On the topic of marriage, since people conceive of the institution as having really different purposes but usually get bogged down in the question of what laws should exist. I thought the question of "How should a couple decide whether to get married?" would provoke interesting responses.
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2013-02-20T00:03:24.192Z · LW(p) · GW(p)
The atheists and Christians were told to be honest when writing their own responses. So they shouldn't have been trying to game it in this way.
"Honest" leaves a lot of wiggle room. If I were trying to write my honest atheist entry, what do I emphasize? That I hate scholastic philosophy and think religion set ethics back five hundred years? Or how I love C.S. Lewis and G.K. Chesterton and find many religious works to be among the most sublime creations of humankind? Both would be "honest".
Even if someone genuinely sets out not to present themselves at all, I still would expect presentation to be their main concern. There's a certain class of things which are impossible to do naturally. For example, if you try to count your natural respiratory rate, you will fail miserably; the fact that you're thinking about your breath immediately shifts it to consciously deciding what it is going to be. In my case, it makes it slower than normal. I can try to then consciously adjust by speeding it up, but since I don't know how much to speed it up, attempting to breathe naturally is basically just me trying to fake my natural breathing rate, probably badly.
I think self-presentation attempts of this sort raise some of the same problems.
↑ comment by ChristianKl · 2013-02-19T18:53:50.609Z · LW(p) · GW(p)
For example, a liberal Christian complained that her (honest!) Christian answer did very poorly, because people associated liberalism with atheism. This suggests that the best strategy isn't necessarily to honestly list what you believe, but to list what you think a typical member of the group involved believes.
It depends on how you define "poorly". Her answer demonstrated something useful about inaccurate stereotypes of Christianity. If the goal of the whole exercise is to convince others that Christianity is right, then her answer might be good because it teaches people about their misconceptions about Christianity.
Replies from: ciphergoth, BlazeOrangeDeer↑ comment by Paul Crowley (ciphergoth) · 2013-02-19T20:31:31.863Z · LW(p) · GW(p)
Yes. If you're faking it, the measure is how many people you fool. If you're guessing, the measure is how many you get right. But if you're writing honestly, there's no winning or losing; just write honestly, and if people guess you wrong more fool them.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-02-21T15:02:10.696Z · LW(p) · GW(p)
But if you're writing honestly, there's no winning or losing
I don't think you understand the point of the game. The goal of the game isn't to guess the teacher's password. palladias converted to Catholicism after running that game. That's a win for the Catholics in the game who honestly explained Catholicism to her.
One of the Catholics wrote that he likes SMBC. That's one of the examples that stuck out to palladias. Even though it reduced the answer's judging scores, I think that answer likely increased the chances of "turning" palladias.
Replies from: ciphergoth, Ishaan, Kindly↑ comment by Paul Crowley (ciphergoth) · 2013-02-22T12:36:09.931Z · LW(p) · GW(p)
Ah, so you're saying that the goal of the honest participant is for the guessers to distinguish correctly, showing that their counterparts have a poor understanding of their beliefs?
↑ comment by Kindly · 2013-02-22T14:23:40.805Z · LW(p) · GW(p)
Your argument is too general: it applies to any game. If I play chess against a Catholic, who deliberately throws the game in order to make a clever argument that succeeds in converting me to Catholicism, that counts as a win of some sort... but not a win in chess.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-02-22T16:23:10.890Z · LW(p) · GW(p)
If I play chess against a Catholic, who deliberately throws the game in order to make a clever argument that succeeds in converting me to Catholicism, that counts as a win of some sort... but not a win in chess.
I think that this game is inherently about showing that your ideology is better than the one of the people on the other side. Chess is generally not played with that intent.
↑ comment by BlazeOrangeDeer · 2013-02-19T23:00:47.194Z · LW(p) · GW(p)
I think "poorly" in this case meant that it wasn't rated very believable by the judges.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-02-21T14:48:41.167Z · LW(p) · GW(p)
Yes, I think that's a bad definition of poorly. The goal of the game isn't only to get high ratings from the judges but to ultimately show people that your beliefs are better than the beliefs of the other side.
↑ comment by [deleted] · 2013-03-05T21:00:30.432Z · LW(p) · GW(p)
I'm having trouble determining the best strategy in these kinds of games
I had read this, when it was originally posted.
And then, I was referred to this, which was also written by you: http://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/
Which was sufficiently good at espousing Reactionary philosophy that I was STARTLED when I got to the end, because I had forgotten that you were only pretending to be Reactionary for the sake of an Ideological Turing Test. You were well on your way to convincing me to take a hard look at my own progressive ideals and find out why I hadn't seen all of these obvious flaws, and then you said:
Nevertheless, I hope that this has been a not-entirely futile exercise in trying to Ideological Turing Test an opposing belief.
Despite the fact, that you had literally said, at the beginning:
Much of this will be highly politically incorrect and offensive, because that’s what Reactionaries do. I have tried to be charitable towards these ideas, which means this post will be pushing politically incorrect and offensive positions. If you do not want to read it, especially the middle parts which are about race, I would totally understand that. But if you do read it and accuse me of holding these ideas myself and get really angry, then you fail at reading comprehension forever.
I seem to have forgotten that while reading the middle... So erm, yes, I understand that you don't hold those ideas, and I'm not angry at you. But I do apparently fail at reading comprehension. And at having justifications for my ideals.
But reading this IN LIGHT of you saying a short time ago
I'm having trouble determining the best strategy in these kinds of games.
That's just weird. I'm having a hard time visualizing room for there to even be a better strategy than what you just did.
It's rather embarrassing to admit that I failed at reading comprehension, but the contrast seems too great not to mention.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-03-06T00:29:34.511Z · LW(p) · GW(p)
Which was sufficiently good at espousing Reactionary philosophy that I was STARTLED when I got to the end, because I had forgotten that you were only pretending to be Reactionary for the sake of an Ideological Turing Test.
Yvain might be a brilliant doctor, now or some day, but what he writes is already genius. If only he realized that he could help more people and make more money if he seriously considered this as a career. The case of an altruistic lawyer volunteering in a soup kitchen comes to mind.
Replies from: gjm↑ comment by gjm · 2013-05-14T10:35:12.302Z · LW(p) · GW(p)
It isn't at all obvious to me that he could help more people and make more money by making his career in writing. (I mean, obviously it's possible that he would, but you can't mean that because it's pretty much always true for any pair of careers.)
Just what sort of writing career do you envisage for him that's more lucrative and more world-enhancing than medicine?
(For the avoidance of doubt: I agree that his writing is excellent.)
Replies from: shminux↑ comment by Shmi (shminux) · 2013-05-14T15:06:26.740Z · LW(p) · GW(p)
Actually, I take it back. It's not a dichotomy. He can be both and he will probably be a better writer if he is also a practicing psychiatrist. He might decide to write professionally at some point, though.
↑ comment by Eugine_Nier · 2013-02-19T03:29:01.755Z · LW(p) · GW(p)
One thing your analysis neglected is how the judges will adjust their strategies in response to these developments.
↑ comment by DanArmak · 2013-02-20T19:52:17.828Z · LW(p) · GW(p)
I'd be interested in seeing differences between this test and one in which, say, Christians were just asked to discuss their opinions on some topics without it being part of a Turing Test, and then atheists were asked to fake Christian opinions on those same topics
In other words, the test should have blinded the participants.
↑ comment by Luke_A_Somers · 2013-02-19T18:00:31.344Z · LW(p) · GW(p)
One of my only two errors on the Christian side of year 2 was to suspect that a stereotypical Christian was a faker who was aiming for dead center. The other was an atheist who nailed the periphery.
So, the strategy of lying or selectively choosing topics to seem more typical within your group would not have worked on me.
I do think the whole 'I went to seminary' thing might best be ruled out in the future. It's one thing to create a fictional persona. It's another to give them a position of authority.
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2013-02-20T00:06:25.391Z · LW(p) · GW(p)
I don't even think it's about authority. Another person talked about how Christianity helped them through their drug addiction. Because there really are Christians who have been helped through drug addictions, and most contestants would have respected the spirit of the test too much to try the somewhat different exercise of making up a completely fake personality with a fake life history, this provided strong evidence of real Christianity.
Replies from: DanArmak↑ comment by DanArmak · 2013-02-20T19:50:47.437Z · LW(p) · GW(p)
most contestants would have respected the spirit of the test too much to try the somewhat different exercise of making up a completely fake personality with a fake life history
Isn't the spirit of the test to be as convincing as possible? Imagining and imitating a fake persona in detail is exactly what the test asks for.
comment by Oligopsony · 2013-02-18T23:50:48.011Z · LW(p) · GW(p)
I seem to be simultaneously freakishly good and bad at this game - I have, on multiple occasions and for multiple mappings of "green" and "blue," been accused of being a green pretending to be a blue (I am in fact blue,) and somehow I regularly find myself discussing the finer shades of green with greens who assume I am green. (It is hard for me to think of things that are funner than this.)
On Will Newsome's IRC channel someone mentioned the idea that you could totally automate the ITT into a mass-league game with Elo ratings and everything (assuming there was some way to verify true beliefs at the beginning). Make it happen, somebody.
Replies from: marchdown, Viliam_Bur, orthonormal, Will_Newsome↑ comment by marchdown · 2013-02-19T00:06:05.306Z · LW(p) · GW(p)
On Will Newsome's IRC channel someone mentioned the idea that you could totally automate the ITT into a mass-league game with Elo ratings and everything (assuming there was some way to verify true beliefs at the beginning). Make it happen, somebody.
Ooh, this would be so great!
↑ comment by Viliam_Bur · 2013-02-19T09:39:25.167Z · LW(p) · GW(p)
Measuring the outcome is good, but I see a problem with the original data. How do you know who is really Green and who is really Blue?
By their self-reports, right?
Well, I see a problem here. What if someone insists on self-describing as a Blue, but most Blues disagree with him and say he is completely confused about what Blue-ness is? -- I know the definition of Blue is not exact, but it at least roughly corresponds to something in the idea-space, and a person can get it wrong and self-identify as a Blue despite being somewhere else. (Perhaps somewhere beyond both typical Blue and Green areas, so the person self-identifies as a Blue simply because they use Blue as a synonym for non-Green.) -- If other people fail to recognize such a person as a Blue, is it really their fault?
The question is not exactly "whom to blame?", but rather "if we use noisy inputs and then get noisy outputs, does it tell us something beyond the fact that there was a noise in input?"
(To be specific, I remember someone in the ideological test saying that they self-identify as both Christian and Atheist. And it was 1 person in 13, so that has a non-trivial impact on the results. I don't think that majority of either Christians or Atheists would agree that an opinion like this is a valid representation of their opinions. So how exactly should guessing or not guessing this person's self-description influence the ratings? And should it influence the ratings if the same person would be forced to choose only one of the descriptions?)
Replies from: FeepingCreature, DanArmak, Oligopsony↑ comment by FeepingCreature · 2013-02-19T11:12:29.257Z · LW(p) · GW(p)
they self-identify as both Christian and Atheist
"Christ was not the Son of God, because there is no God, but we should follow his teachings anyways"?
Replies from: Eugine_Nier, Viliam_Bur↑ comment by Eugine_Nier · 2013-02-20T05:09:33.725Z · LW(p) · GW(p)
Maybe Christianity is hermeneutically true.
↑ comment by Viliam_Bur · 2013-02-19T12:06:05.396Z · LW(p) · GW(p)
"Christ was not the Son of God, because there is no God, but we should follow his teachings anyways"?
I don't remember, but most likely something like this. (Maybe with some "cosmic law" or "cosmic energy" added for better effect.)
Now this completely fails to represent the Christian viewpoint (we should follow Christ precisely because he told us what God wants) or the atheist viewpoint (even if Christ was a good and smart person, it is unlikely he got everything right; and even if he got something right, we can discover and prove it independently).
↑ comment by DanArmak · 2013-02-20T19:56:28.432Z · LW(p) · GW(p)
What if someone insists on self-describing as a Blue, but most Blues disagree with him and say he is completely confused about what Blue-ness is?
Sometimes there are many tinges of Blues. And for almost every tinge you pick, most other Blues will claim people of that tinge are not really Blue. (Religious and ideological movements get like this a lot.) But Greens have no problem classifying people as Blue and non-Blue, so it's not a wholly useless concept.
↑ comment by Oligopsony · 2013-02-19T14:41:46.453Z · LW(p) · GW(p)
(To be specific, I remember someone in the ideological test saying that they self-identify as both Christian and Atheist. And it was 1 person in 13, so that has a non-trivial impact on the results. I don't think that majority of either Christians or Atheists would agree that an opinion like this is a valid representation of their opinions. So how exactly should guessing or not guessing this person's self-description influence the ratings? And should it influence the ratings if the same person would be forced to choose only one of the descriptions?)
Well, that depends on what the test is testing for. If it's about metaphysics, Atheist, if it's about practice, Christian.
Replies from: Will_Newsome, ChristianKl↑ comment by Will_Newsome · 2013-02-20T02:38:30.019Z · LW(p) · GW(p)
puts on Hanson hat Atheism/theism isn't about metaphysics.
↑ comment by ChristianKl · 2013-02-19T18:56:08.466Z · LW(p) · GW(p)
There's a 50% chance that God exists?
↑ comment by orthonormal · 2013-03-01T03:14:41.463Z · LW(p) · GW(p)
On Will Newsome's IRC channel someone mentioned the idea that you could totally automate the ITT into a mass-league game with Elo ratings and everything (assuming there was some way to verify true beliefs at the beginning).
Forget trying to use people's actual beliefs anywhere in the process; it's simpler just to let people play the ITT for a lot of disjoint positions, so that they only get the bonus for their actual beliefs at most once. This mildly penalizes people with extremely idiosyncratic beliefs, but such people wouldn't even be able to play the current ITT.
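A minimal sketch of how the scoring in such a league might work, assuming one impersonator is paired against one judge per round (the scheme, constants, and function names here are illustrative, not anything specified in the thread):

```python
# Elo updates for an automated Ideological Turing Test league: each round
# pits an impersonator against a judge; the impersonator "wins" if the
# judge misclassifies the entry.

K = 32  # standard Elo K-factor

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_impersonator: float, r_judge: float, judge_fooled: bool):
    """Return updated (impersonator, judge) ratings after one round."""
    e_imp = expected_score(r_impersonator, r_judge)
    s_imp = 1.0 if judge_fooled else 0.0
    r_imp_new = r_impersonator + K * (s_imp - e_imp)
    r_judge_new = r_judge + K * ((1.0 - s_imp) - (1.0 - e_imp))
    return r_imp_new, r_judge_new

# Example: a 1500-rated faker fools a 1600-rated judge.
print(update(1500, 1600, judge_fooled=True))  # faker gains ~20 points
```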
↑ comment by Will_Newsome · 2013-02-19T00:58:46.934Z · LW(p) · GW(p)
The somebody could only be a few programmers hired/recruited by CFAR working with direction from Leah. Basically Leah would have to get some people Anna respects to agree the idea is good and then talk to Anna about it. But presumably Anna and CFAR generally are really busy, so it probably won't go anywhere in any case.
Replies from: Pavitracomment by TGM · 2013-02-18T20:38:37.575Z · LW(p) · GW(p)
A concern regarding this kind of test when applied to groups (Christians vs Atheists, for instance) rather than individuals is that one umbrella term may cover more views than another, making the guessing game more/less tricky.
Nevertheless, this is a neat idea, particularly for individual people rather than groups as a whole.
Replies from: Manfredcomment by Luke_A_Somers · 2013-02-19T18:17:07.673Z · LW(p) · GW(p)
Considering the different sizes of the targets, I'm not sure what all this means. Like, while there are hundreds of denominations of Christianity (though the test seriously overrepresented Catholicism), atheism is barely more specific than 'Other' in terms of moral foundations and systems.
As long as you're comparing groups with different degrees of dispersion, it is going to be trickier for one side than the other. The more degrees of definition, the more opportunities to miss one as an outsider and slip up.
comment by garethrees · 2013-02-22T15:22:22.034Z · LW(p) · GW(p)
I find it very plausible that Christians are better able to pretend to be atheists than vice versa. But what follows from that?
Caplan claimed in his original piece:
the ability to pass ideological Turing tests—to state opposing views as clearly and persuasively as their proponents—is a genuine symptom of objectivity and wisdom.
Caplan gives little in the way of argument in support of this claim, and I'm not at all sure that it's true. "Genuine symptom of objectivity and wisdom", really? My objections follow.
First, there's only one way to be right but there are many ways to be wrong. So if you are right it is likely that you have only a broad survey-level view of the different varieties of wrongness. Take, for example, climate change. The scientific consensus view is narrow and everyone in the debate knows what it is. But as far as I know there are many different skeptical positions (there's no such thing as the greenhouse effect; there may be a greenhouse effect but CO₂ is not a greenhouse gas; CO₂ may be a greenhouse gas but concentrations are not increasing; CO₂ concentrations may be increasing, but they are not anthropogenic; global temperatures are not rising; temperatures may be rising but not because of CO₂; temperatures may be rising but there is no need to do anything because the net result will be beneficial; climate change may be harmful but it's too late to do anything about it; it may not be too late but there are still better things to spend money on). I think I know enough about each of these positions to be confident that it's wrong but in order to impersonate one of these positions well enough to fool people I would have to know it inside out. Exactly which wrong assumptions and wrong authorities does each of these positions depend on?
Second, the criterion of being able to state views "as clearly and persuasively as their proponents" is not as neutral as it seems. If you're right you may have been happy to rely on the facts to do your persuading for you. But if you're wrong then you have probably needed to employ a lot of rhetoric, salesmanship, fallacies and argumentation. These techniques take skill and practice and aren't easy to imitate. For example, there's no way that I would be able to imitate the dense texture of sneering and insinuation in the rhetoric of someone like Moldbug.
Third, in the specific case under discussion here, Christianity has a number of cultural properties that make it hard to imitate. If you are Christian, then you probably know the Bible in detail, you are probably familiar with a range of theological and apologetic texts, and you are probably embedded in a subculture with its own rules, rituals, and mores. These kinds of details take a lot of work to imitate. But the typical atheist has probably never read The God Delusion or attended any kind of atheist event, so there's nothing there that needs to be invented.
Replies from: Salivanth, orthonormal, Eugine_Nier, wedrifid↑ comment by Salivanth · 2013-03-01T06:54:01.168Z · LW(p) · GW(p)
"If you are Christian, then you probably know the Bible in detail, you are probably familiar with a range of theological and apologetic texts"
I'll admit I don't have any statistics here, but from what I've seen and heard, both first-hand and second-hand, Christians tend to be quite poor on average at knowing the Bible. I've never heard any evidence suggesting the average Christian has a detailed knowledge of the contents of the Bible, even if the kind of Christians who like to argue Christianity are more informed than most. (Similarly, argumentative atheists tend to have a better knowledge of the atheistic arguments than the average atheist.)
Replies from: ygert↑ comment by ygert · 2013-06-12T07:08:04.794Z · LW(p) · GW(p)
But it's exactly the type that likes to argue religion that participates in such a test. The test is comparing argumentative atheists to argumentative theists. Non-argumentative atheists and non-argumentative theists are simply not involved. It is hard to test what non-argumentative folk believe, simply by the fact that they are not argumentative, and thus very unlikely to look at such tests.
↑ comment by orthonormal · 2013-03-01T03:20:08.658Z · LW(p) · GW(p)
I find it very plausible that Christians are better able to pretend to be atheists than vice versa.
I instantly did a double-take at this statement. It depends a lot on context.
I'd find it likely that the Christian readers of Patheos blogs are better at the Ideological Turing Test than the atheist readers of Patheos blogs. However, I'd find it incredibly unlikely if the samples were drawn from, say, all American Christians and all American atheists. (The typical Christian in America has listened to fewer atheists about atheism than vice versa.)
Replies from: fubarobfusco↑ comment by fubarobfusco · 2013-03-01T06:07:06.291Z · LW(p) · GW(p)
(The typical Christian in America has listened to fewer atheists about atheism than vice versa.)
Sure — if only for the same reason that the typical left-handed tennis player has played with more right-handed tennis players than vice versa. There are a lot more Christians!
Replies from: orthonormal↑ comment by orthonormal · 2013-03-03T23:17:53.135Z · LW(p) · GW(p)
Yes, that's all I was saying.
↑ comment by Eugine_Nier · 2013-02-23T06:07:04.169Z · LW(p) · GW(p)
I find it very plausible that [Blues] are better able to pretend to be [Greens] than vice versa. But what follows from that?
That Blues understand Green arguments but aren't persuaded by them (presumably because they have counterarguments), whereas Greens don't understand Blue arguments and this makes it unlikely they have counterarguments.
Now let's look at your three objections. Near as I can tell, your first objection amounts to "sometimes the people defending the incorrect position are heterogeneous, this gives them a large advantage in the test", and your third objection amounts to "sometimes the people defending the incorrect position are homogeneous, this gives them a large advantage in the test".
Now let's look at your second objection: much as it may seem that way, your opponents are not evil mutants whose position has no logic to it whatsoever; most positions actually held by humans, especially intelligent humans, have a certain logic to them. (And if your opponents' position really has no logic to it beyond saying anything plausible-sounding that backs up their conclusion, that's very easy to imitate.) Thus, the two positions have different logics to them, and it will be hard for a person only familiar with one of those logics to imitate the other. On the other hand, if someone is familiar with the logic of both positions A and B, the fact that he nevertheless holds position A is evidence that A is in fact correct.
Replies from: garethrees↑ comment by garethrees · 2013-02-23T10:38:50.864Z · LW(p) · GW(p)
Blues understand Green arguments but aren't persuaded by them (presumably because they have counterarguments), whereas Greens don't understand Blue arguments and this makes it unlikely they have counterarguments.
This is a restatement of the hypothesis under discussion. (That inability to imitate convincingly is caused by lack of understanding.)
your third objection amounts to "sometimes the people defending the incorrect position are homogeneous, this gives them a large advantage in the test".
You've failed to imitate my position. My third objection is about irrelevant detail, not homogeneity. (Perhaps you can suggest a better way I could have put it?)
your opponents' position really has no logic to it beyond saying anything plausible-sounding that backs up their conclusion
Again, you've failed to imitate my position. For concreteness, let's take Christopher Monckton as an example. It's not that I think he's saying "anything plausible-sounding". His arguments have a logical structure which is imitable but they are embedded in a rhetorical structure that I would find very hard to imitate convincingly due to lack of practice. (I guess you could characterize this as a form of irrelevant detail and merge it with my objection 3 but I think these two sources of irrelevant detail are sufficiently different in origin and aim to be worth separating.)
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-02-24T21:36:51.777Z · LW(p) · GW(p)
His arguments have a logical structure which is imitable but they are embedded in a rhetorical structure that I would find very hard to imitate convincingly due to lack of practice.
I'm not sure where you're drawing the line between logical and rhetorical structure. The most obvious rhetorical structure is that he acts like he alieves his position in addition to believing it.
↑ comment by wedrifid · 2013-03-01T19:21:26.769Z · LW(p) · GW(p)
I find it very plausible that Christians are better able to pretend to be atheists than vice versa.
On the other hand, any Christian who pretends to be an atheist better than an atheist isn't a very good Christian. By doing so they are violating the teachings of their God.
comment by Elithrion · 2013-02-18T21:15:52.867Z · LW(p) · GW(p)
Some of the response posts talk about "attractiveness scores", but I didn't find those in the data summaries. Did those ever happen? I think it'd be more interesting if people wrote their genuinely best arguments for each side, and we measured how much the reader is persuaded, instead of many participants (as far as I can tell) trying to pretend that they're average and are persuaded by average arguments.
Of course, it's already pretty interesting as-is, and it's nice that someone actually tried out the exercise!
ETA: the other thing is that I suspect the participants in your blog are wildly unrepresentative (except perhaps of participants in your blog). It might be interesting to have some people who can't argue both sides that well, both to provide more diversity for the reader/scorer and to provide some representation of other communities (for example, I'm not optimistic about my ability to effectively pass myself off as a Christian, since I can't think of any moment in my life when I actually believed or paid much attention to church teachings).
comment by Michael Wiebe (Macaulay) · 2013-02-22T20:02:43.919Z · LW(p) · GW(p)
What's a good term for "being able to pass an ideological Turing test"? (Being able to pass an ITT is related to being able to argue both sides of a debate, being able to accurately explain your opponent's position, being able to summarize the strongest counterargument to your position, etc.)
Following the original analogy, is there a term for "a machine that's able to pass a Turing test"? My googling didn't turn up anything. But if there was ("a machine is called Turing-(blank) if it can pass a Turing test"), then it seems we could adapt it fairly easily to the ITT: someone is ideologically Turing-(blank) if they can pass an ITT.
Any suggestions to fill in the blank?
Replies from: Dan_Moore, Macaulay↑ comment by Michael Wiebe (Macaulay) · 2013-02-22T20:52:40.548Z · LW(p) · GW(p)
Turing-capable?
comment by Pentashagon · 2013-02-19T18:11:43.715Z · LW(p) · GW(p)
I think the ITT may test more for personal experience than ability to model. For instance, how well would this group of Christians and Atheists do trying to imitate Muslims, Buddhists, Hindus, Shintoists, Zoroastrians, and adherents of other even less well-known religions in the Western world? Most English speakers have some familiarity with a branch of Christianity. How many have explored Shamanism (I haven't)? Many atheists have gone through the motions of Christianity or some other religion in the past and have a comparatively easier time writing about how that religious behavior and belief feels from the inside.
How do we separate the ability to understand and model other beliefs from the measurement of prior experiences? How do we eliminate the effect of writing/storytelling ability? MOAR SCIENCE NEEDED (imho). I love the idea of this project and hope you continue it. I may volunteer next time.
Replies from: Randy_Mcomment by wadavis · 2013-05-17T20:51:56.687Z · LW(p) · GW(p)
I want to highlight the use of the above approach to argument for resolving mundane conflicts as a Bayesian.
- Step 1: Run your Turing Test on the conflict. (This does need to run on both sides; it is your own missing info we will focus on.)
- Step 2: Compare your results to their actual model. Highlight the individual differences.
- Step 3: Item by item, quiz them for info on the differences. Why choose this? What alternatives did you consider? Why not take those alternatives? The goal is to identify the goals, values, and beliefs that created their model.
- Step 4: Repeat until your Turing Test matches their model.
- Step 5: Update your model with the newly acquired info.
- Step 6: Admire the mutually beneficial solution to the conflict. (Convincing them this is the mutually beneficial solution is outside the scope of this seven step program.)
- Step 7: Profit.
A step by step checklist on how to decide which widget to use in the new product line, or where to go for supper.
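A minimal sketch of steps 1-5 as a loop (my own framing of the checklist above; `quiz` is a hypothetical stand-in for the step-3 questions):

```python
# Refine your model of the other side until your imitation of them matches
# their actual answers (step 4), updating item by item as you learn (step 3).

def resolve_conflict(my_model_of_them: dict, their_model: dict, quiz) -> dict:
    """Loop steps 1-5: diff the models, quiz on differences, update, repeat."""
    while True:
        differences = [k for k, v in their_model.items()
                       if my_model_of_them.get(k) != v]
        if not differences:
            return my_model_of_them  # step 4 satisfied: your model matches
        for item in differences:
            my_model_of_them[item] = quiz(item)  # step 3: item-by-item quiz

# Example: start with an empty model; quiz() here just reveals their answer.
theirs = {"widget": "A", "venue": "Thai place"}
print(resolve_conflict({}, theirs, quiz=lambda item: theirs[item]))
```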
comment by A1987dM (army1987) · 2013-02-19T17:56:17.922Z · LW(p) · GW(p)
BTW, I accidentally passed the gender Turing Test (which ISTR pre-dated and inspired the actual Turing Test). :-)
comment by buybuydandavis · 2013-02-19T14:25:03.219Z · LW(p) · GW(p)
I think we've had discussions before on argumentative techniques, and usually the first step is being able to state the other person's argument as convincingly as they can - pass a Turing Test on imitating them.
comment by Epiphany · 2013-02-19T02:28:23.571Z · LW(p) · GW(p)
I've read this three times now and am still not sure how to interpret:
I may be correct in this particular argument, but the odds are good that I share the rationalist weak-point that is keeping them from noticing the error.
I have concluded that it's vague, for the following reasons:
A. I don't know if "this particular argument" refers to your argument for atheism, for Christianity, or your arguments to convince the opponents that you are an atheist or a Christian.
B. Your phrase "the rationalist weak-point" is unspecified. At first, I thought you were referring to the fact that the atheists did less well at detecting the fakes, but you yourself admitted, in the paragraph about that, that the problem was the questions you wrote.
C. Your phrase "the error" is unspecified.
Also, it's unclear whether the following statement:
(Think of this as the shift from "How the hell can you be so dumb?!" to "How the hell can you be so dumb?").
...should be interpreted to mean "how can my opponents be so dumb?" or "how can myself and my side be so dumb?" I originally interpreted the latter but now am wondering if the correct interpretation should be the former.
Replies from: palladias↑ comment by palladias · 2013-02-19T04:00:15.889Z · LW(p) · GW(p)
'This particular argument' was meant to be unspecific since I was talking about an aspect of Ideological Turing Tests (or fights generally) that doesn't hinge on what you're fighting about.
If you think your interlocutor is obviously wrong, and there's nothing for you to learn by trying to model him more accurately, you may be wrong about that! The flaw in his thinking that's causing him to ignore data is probably native to you as well. Putting in the work to spot it and to observe what defensive strategies he's using to avoid spotting it may cause a queasy feeling of recognition that you used the same kinds of language/flinches/etc in a different recent argument, and now you should go back and check your data.
Replies from: FeepingCreature↑ comment by FeepingCreature · 2013-02-19T11:17:20.608Z · LW(p) · GW(p)
I think the flaw is that humans copy from the culture they immerse themselves in. Do you think you would have come to convert to Catholicism without engaging with Catholicism over a long time span? Assuming this, shouldn't I avoid confronting/studying in depth any religion that I'm not already immunized to? How is a rationalist to act when every intense engagement with a belief, true or not, makes it more likely they'll adopt that belief?
PS: unrelatedly, would you let a FAI convince you that Catholicism is false? I think I'd let a FAI convince me that Catholicism is true, assuming it was built by an uninterested third party (neither the Catholic church nor, say, Dawkins). Should MIRI hire religious people to assure people that their FAI was not built unduly biased towards atheism?
Replies from: Larks↑ comment by Larks · 2013-02-19T11:28:28.970Z · LW(p) · GW(p)
to assure people that their FAI was not built unduly biased
If it's 'unduly' anything it's not FAI.
Replies from: FeepingCreature↑ comment by FeepingCreature · 2013-02-19T11:32:53.174Z · LW(p) · GW(p)
Of course, but people may not believe out of hand that a superintelligence built by atheists saying atheism is correct is not just parroting its creators. Might be important in the take-off phase to have that extra bit of public trust.
Replies from: Larks↑ comment by Larks · 2013-02-19T15:41:52.361Z · LW(p) · GW(p)
Once you have FAI, you're set. There is nothing left you need to do. If something needs to be done, the FAI will know better than you what has to be done and how to do it. If it turns out it should have been written by Christians, it will tell some Christians how to write an FAI and make sure they do it correctly. Worrying about what to do after* running the program is like taking a cup of water with you as you flee your burning house, so that when the fire department arrives you can help out.
*except those things the FAI judges it would be good for you to need to do, which are not relevant here.
comment by hankx7787 · 2013-02-19T13:31:17.542Z · LW(p) · GW(p)
I like this a lot. It's often said by conservative commentators that conservatives completely understand liberals, but liberals do not understand conservatives at all. I think there's truth to that; I would love to see some experiments like this, hehehe...
Replies from: None↑ comment by [deleted] · 2015-09-14T12:37:50.791Z · LW(p) · GW(p)
This reminds me of Yvain2's post
I like to listen to various commentators and try to guess whether they are conservative or liberal. Recently I heard the charismatic and articulate Chris Uhlmann on television, who sounded like a conservative. Apparently he's a political editor for the ABC (and presumably nonpartisan), so I wonder what that says about my ability to understand either side's ideologues if I mislabel that cluster of beliefs.