Argument Maps Improve Critical Thinking
post by Johnicholas · 2009-08-30T17:34:09.150Z · LW · GW · Legacy · 18 comments
Charles R. Twardy provides evidence that a course in argument mapping, using a particular software tool, improves critical thinking. The improvement in critical thinking is measured by performance on a specific multiple-choice test (the California Critical Thinking Skills Test). This may not be the best way to measure rationality, but my point is that, unlike almost everybody else, there was measurement and a statistically significant improvement!
Also, his paper is methodologically the best I've seen in the field of "individual rationality augmentation research".
To summarize my (clumsy) understanding of the activity of argument mapping:
One takes a real argument in natural language (op-eds are a good source of short arguments; philosophy is a source of long ones) and elaborates it into a tree structure, with the main conclusion at the root of the tree. The tree has two kinds of nodes (it is a bipartite graph). The root conclusion is a "claim" node. Every claim node has roughly one sentence of English text associated with it. The children of a claim are "reasons", which do NOT have English text associated with them. The children of a reason are claims. Unless I am mistaken, the intended meaning of the connection from a claim's child (a reason) to its parent is implication, and the meaning of a reason is the conjunction of its children.
In elaborating the argument, it's often necessary to insert implicit claims. This should be done while abiding by the "Principle of Charity": interpret the argument in such a way as to make it the strongest argument possible.
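To make the structure concrete, here is a minimal sketch in Python of the two kinds of nodes described above (this is just my own illustration, not the internal format of Rationale or any other argument-mapping tool):

    # A Claim carries roughly one sentence of English; a Reason carries no text
    # and stands for the conjunction of its child claims. Illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Reason:
        premises: List["Claim"] = field(default_factory=list)   # conjunction of these claims

    @dataclass
    class Claim:
        text: str                                                # ~one sentence of English
        reasons: List[Reason] = field(default_factory=list)      # independent lines of support

    # The main conclusion sits at the root of the tree:
    argument = Claim(
        "Socrates is mortal.",
        reasons=[Reason([Claim("Socrates is a man."),
                         Claim("All men are mortal.")])],
    )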
There are two syntactic rules which can easily find flaws in argument maps:
The Rabbit Rule: Informally, "You can't conclude something about rabbits if you haven't been talking about rabbits". Formally, "Every meaningful term in the conclusion must appear at least once in every reason."
The Holding Hands Rule: Informally, "We can't be connected unless we're holding hands". Formally, "Every meaningful term in one premise of a reason must appear at least once in another premise of that reason, or in the conclusion".
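Here is a rough sketch of what mechanically checking these two rules could look like, assuming each premise and the conclusion has already been boiled down to its set of "meaningful terms" (extracting those terms is the part that still takes human judgment), and reading "appears in the reason" as "appears in at least one of the reason's premises":

    # Toy checkers for the two rules; illustrative only.

    def rabbit_rule_ok(conclusion_terms: set, premise_term_sets: list) -> bool:
        """Every meaningful term in the conclusion must appear somewhere
        among the premises of the reason (no rabbits out of a hat)."""
        terms_in_reason = set().union(*premise_term_sets)
        return conclusion_terms <= terms_in_reason

    def holding_hands_ok(conclusion_terms: set, premise_term_sets: list) -> bool:
        """Every meaningful term in a premise must appear in another premise
        of the same reason, or in the conclusion."""
        for i, premise in enumerate(premise_term_sets):
            others = set().union(conclusion_terms,
                                 *(p for j, p in enumerate(premise_term_sets) if j != i))
            if not premise <= others:
                return False
        return True

    # "Socrates is mortal" because "Socrates is a man" and "all men are mortal":
    conclusion = {"socrates", "mortal"}
    premises = [{"socrates", "man"}, {"man", "mortal"}]
    print(rabbit_rule_ok(conclusion, premises))    # True
    print(holding_hands_ok(conclusion, premises))  # True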
I have tried the Rationale tool, and it seems afflicted with creeping featurism. My guess is the open-source tool Freemind could support argument mapping as described in Twardy's article, if the user is disciplined about it.
I'd love comments offering alternative rationality-improvement tools. I'd prefer tools intended for solo use (that is, prediction markets are awesome but not what I'm looking for) and downloadable rather than web services, but anything would be great.
18 comments
comment by ctwardy · 2009-11-16T23:39:17.453Z · LW(p) · GW(p)
Thanks for the vote of confidence. I should say that while I think my paper presents things well, I cannot take credit for the statistics or experimental design. Tim van Gelder already had the machinery in place to measure pre-post gains, and had done so for several semesters. The results were published in Donohue et al. 2002. The difference here was that I took over teaching, and we continued the pre-post tests.
Although argument maps are usually used to map existing natural language arguments, one could start with the map. I like to think that the more people use these maps, the more their thinking naturally follows such a structure. I'm sure I could use more practice myself.
Just a note on terminology: the tree does have two kinds of nodes, but by virtue of being a tree, it is not a bipartite graph.
I think arguments in argument maps can be made probabilistic and converted to Bayesian networks. But as it is, it takes long enough just to make an argument map. I've recently discovered Gheorghe Tecuci's work. He's just down the hall from me, but I didn't know his work until I heard him give a talk recently. He has an elaborate system that helps analysts create structures very much like argument maps by filling in schemas, and then reasons quantitatively with them. The tree structure and the simplicity of the combination rules (min, max, average, etc.) are more limited than a full Bayesian network, but it seems to be a very nice extension of argument maps.
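A toy illustration of what such simple combination rules might look like (this is not Tecuci's actual system; the rules and numbers below are made up purely to show the flavor):

    # A reason is only as strong as its weakest premise (min, for conjunction);
    # a claim is as strong as its best reason (max over independent reasons).
    def reason_strength(premise_credences):
        return min(premise_credences)

    def claim_support(reason_strengths, prior=0.5):
        return max(reason_strengths) if reason_strengths else prior

    r1 = reason_strength([0.9, 0.7])    # -> 0.7
    r2 = reason_strength([0.6, 0.95])   # -> 0.6
    print(claim_support([r1, r2]))      # -> 0.7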
comment by John_Maxwell (John_Maxwell_IV) · 2009-08-30T19:30:20.460Z · LW(p) · GW(p)
Here is a tool that will help figure out whether you are overconfident, underconfident, or neither.
Here is a good puzzle involving rationality. See the discussion here on Less Wrong once you've completed it.
Replies from: nazgulnarsil
↑ comment by nazgulnarsil · 2009-08-31T17:29:06.876Z · LW(p) · GW(p)
Those set kind of a low bar. Don't we have a test that more rigorously examines whether you infer correctly from data or not?
If anyone wants some fun (not making any claims about whether solving these indicates rationality, just for fun), try the blue eyes problem http://www.xkcd.com/blue_eyes.html and the "hardest logic puzzle in the world" http://en.wikipedia.org/wiki/The_Hardest_Logic_Puzzle_Ever
The blue eyes puzzle in particular I enjoyed immensely.
comment by James_K · 2009-08-31T05:57:31.092Z · LW(p) · GW(p)
This looks like a formalised general version of an Intervention Logic, a tool used in government to explain how a proposed policy will achieve a desired policy goal.
Replies from: CannibalSmith
↑ comment by CannibalSmith · 2009-08-31T10:34:43.015Z · LW(p) · GW(p)
Tell us more about this Intervention Logic.
Replies from: James_K
↑ comment by James_K · 2009-09-01T06:17:43.139Z · LW(p) · GW(p)
What I've described below is the ideal; naturally, as soon as politics gets involved in anything, you can move away from the ideal rapidly, and there's no way of getting politics out of policy formation.
Say you have a policy problem to solve or a policy goal to meet (reducing road fatalities, improving high school graduation rates, etc.), and you have a policy you think will work to solve the problem, but you want to check your reasoning or develop a formalised explanation so you can convince another analyst or agency. One way to do this is to develop an intervention logic.
The basic format of an intervention logic is a flowchart that outlines the causal relationship between your policy and the desired outcome: "This policy will cause A, which causes B, which causes C, which results in outcome Z."
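As a bare-bones illustration (the policy and the intermediate steps here are invented, and a real intervention logic is drawn as a flowchart, often with branching):

    # A hypothetical chain, written as an ordered list of causal steps.
    chain = [
        "Lower the open-road speed limit",   # the policy
        "Average vehicle speeds fall",       # A
        "Crashes are less severe",           # B
        "Fewer deaths per crash",            # C
        "Road fatalities are reduced",       # outcome Z
    ]
    print(" -> ".join(chain))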
It's not a perfect system. It violates one of the cardinal rules of rationality, since it's generally used to justify a pre-reasoned position rather than to reason from scratch, and there's inevitably a certain amount of handwaving involved, since the causal factors in most policy work are very hard to get a grip on. But at least it forces the person using it to state their assumptions and logic explicitly.
comment by SilasBarta · 2009-08-30T20:40:01.988Z · LW(p) · GW(p)
Semi-OT: what about software for drawing Pearlean causal graphs that permit counterfactual surgery?
Replies from: Johnicholas
↑ comment by Johnicholas · 2009-08-30T21:22:16.150Z · LW(p) · GW(p)
Oddly enough, Twardy's research in philosophy (as opposed to philosophy education) is related to Pearlean counterfactuals. He worked somewhat with Lucas Hope on a tool called "Causal Reckoner".
See:
http://portal.acm.org/citation.cfm?id=1082172&dl=GUIDE&coll=GUIDE&CFID=49488126&CFTOKEN=18921351
And:
http://www.springerlink.com/content/30m8ac6uafkxu9k3/
However, my google-fu is not strong enough to find the software itself. Possibly contacting the various individuals involved would be necessary.
Replies from: ctwardy, gwern, gwern
↑ comment by gwern · 2009-09-06T10:38:38.859Z · LW(p) · GW(p)
"However, my google-fu is not strong enough to find the software itself. Possibly contacting the various individuals involved would be necessary."
I think so. The intervention paper links to the providing site, but the direct link is down with a server error; poking around the site reveals no other mirrors or mentions of the Causal Reckoner. The Internet Archive shows the pages fine, but the only download page is about getting the source via CVS - and not anonymous CVS! The CVS server doesn't seem to expose any HTTP files either (SSH seems to be the only way in).
Replies from: gwern
↑ comment by gwern · 2009-09-07T12:25:56.791Z · LW(p) · GW(p)
It's too bad - I kept thinking about Eliezer's old post about 'what would ordinary things like sight be like if they were RPG powers/abilities', and it seems to me that a cool concept to try out would be a game where you can literally see the causal decision graphs governing the actions of characters. Perhaps another power could be snipping branches or modifying weights to manipulate characters into doing your bidding or simply getting out of the way. (One could start off trapped in a jail cell... :)
But I've tried a couple of ways to view the PDF and I can't seem to see the screenshot of the GUI! Now that's annoying.
↑ comment by gwern · 2009-08-31T05:06:14.144Z · LW(p) · GW(p)
I don't suppose there are any non-paywall versions?
Replies from: jhl
↑ comment by jhl · 2009-08-31T20:20:21.427Z · LW(p) · GW(p)
Let me google that for you:
The 1st
http://crpit.com/confpapers/CRPITV38Marriott.pdf
The 2nd
http://www.csse.monash.edu.au/~korb/pubs/intervene.pdf
Replies from: matt
comment by Richard_Kennaway · 2009-08-30T19:43:05.387Z · LW(p) · GW(p)
I've known of the game WFF'n'Proof ever since it was invented, but I've never had any closer knowledge of it. However, the publisher's website claims dramatic improvements in IQ and mathematical performance from playing it and their other games.
Replies from: PhilGoetz
comment by yboris · 2012-08-14T04:58:04.173Z · LW(p) · GW(p)
Here's an absolutely phenomenal tool for creating diagrams of whatever level of complexity: the yEd Graph Editor. It even has auto-layout, which creates graphs with minimal overlap. http://www.yworks.com/en/products_yed_about.html
comment by anonym · 2009-08-31T00:56:24.274Z · LW(p) · GW(p)
I think focusing on the tool here is misleading. It is the process of creating something like a dependency graph representing the entire argument (and knowing how to analyze that graph) that is the important point, and people have been doing that for almost as long as philosophy has been around. Every critical thinking class teaches such techniques.