Cognitive Load and Effective Donation
post by Neotenic · 2013-03-10T03:11:14.161Z · LW · GW · Legacy · 20 comments

Contents:
Moral Games/Dilemmas
Personal Policy
Effective use of Cognitive Load
We'd better start pushing emotional buttons and twisting the mental knobs of people if we want to get something done. Starting with our own.
20 comments
Comments sorted by top scores.
comment by Error · 2013-03-10T04:58:53.454Z · LW(p) · GW(p)
We'd better start pushing emotional buttons and twisting the mental knobs of people if we want to get something done. Starting with our own.
This sounds awfully like endorsing the use of Dark Arts to counter the same. Not that I'd dismiss it out of hand, but wouldn't it be better to find a way to reduce the effectiveness of said arts to begin with? It seems to me that's the primary purpose of most of the Sequences, in fact.
Replies from: Neotenic, AspiringRationalist

↑ comment by Neotenic · 2013-03-10T15:45:16.921Z · LW(p) · GW(p)
I think Tegmark's claim is unequivocally that we should endorse Dark Artsy subsets of scientific knowledge to promote science and whatever needs promotion (rationality perhaps). So yes, the thing being claimed is the thing you are emotionally inclined to fear/dislike. By him and by me.
Though just to be 100% sure, I'd like a brief description of what you mean by "dark arts", to avoid a double illusion of transparency.
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2013-03-11T03:55:20.657Z · LW(p) · GW(p)
The post is endorsing the use of the Dark Arts. From a purely deontological perspective, that's objectionable. From a virtue ethics perspective, it could be seen as stooping (close) to the level of our enemies. From a consequentialist perspective, we need to compare the harm done by using them against the benefits.
To make that comparison, we need to determine what harm the Dark Arts, in and of themselves, cause. It seems to me (though I could certainly be convinced otherwise) that essentially all the harm they do comes from their use in convincing people to believe falsehoods and to do stupid things. Does anyone have any significant examples of the Dark Arts being harmful independent of what they're being used to convince people of?
Replies from: prase

↑ comment by prase · 2013-03-12T23:19:04.054Z · LW(p) · GW(p)
Does anyone have any significant examples of the Dark Arts being harmful independent of what they're being used to convince people of?
Dark Arts have externalities. Once you become known as a skilled manipulator, fewer people are going to trust you, and in the long run you can influence fewer people. Using Dark Arts is a Prisoner's Dilemma defection, with all the associated problems: a world full of Dark Artists is worse than a world full of honest truth-tellers, ceteris paribus. Heavy use of Dark Arts may also be risky for the performer himself and compromise his own rationality, since it is much easier to use a manipulative technique persuasively if one believes no deception is happening.
These aren't actually examples, but it's hard to come up with a specific example under the "independent of what they're being used for" clause.
Replies from: wedrifid, TheOtherDave, DanArmak

↑ comment by TheOtherDave · 2013-03-13T05:01:16.627Z · LW(p) · GW(p)
Once you become known as a skilled manipulator, fewer people are going to trust you, and in the long run you can influence fewer people.
The very long run, perhaps.
In the shorter run of, say, 10-100 years, it isn't in the least clear to me that the advantage of being considered (accurately or not) a skilled manipulator, in terms of the willingness of powerful agents to ally with me, is fully offset (let alone overpowered) by the disadvantage of it, in terms of people being less influenceable by me. Add to that the advantages of actually being a skilled manipulator, and that's even less clear.
Admittedly, if I anticipate having a significantly longer effective lifespan than that, I may prefer not to risk it.
↑ comment by DanArmak · 2013-03-14T19:31:58.500Z · LW(p) · GW(p)
Once you become known as a skilled manipulator, fewer people are going to trust you, and in the long run you can influence fewer people.
But it seems that people who use the Dark Arts profit from it. If the Dark Arts were self-defeating as you suggest, we wouldn't be having this discussion.
Using Dark Arts is a Prisoner's Dilemma defection, with all the associated problems: a world full of Dark Artists is worse than a world full of honest truth-tellers, ceteris paribus.
Continuing to cooperate in a world where most players defect is a poor strategy. I also doubt that it strongly influences the defectors to stop defecting.
comment by [deleted] · 2013-03-10T06:57:36.017Z · LW(p) · GW(p)
We can't trust brains when taken as a whole.
We are made of brains. A nice swirl off to the side of the brain mistrusting the brain is mistrusting that mistrust.
Replies from: Eliezer_Yudkowsky

↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-10T16:36:08.432Z · LW(p) · GW(p)
True, but so what? It's still not trustworthy.
Replies from: Tyrrell_McAllister, None, None

↑ comment by Tyrrell_McAllister · 2013-03-10T19:11:43.574Z · LW(p) · GW(p)
The "so what" is to beware the skepticism fallacy: The notion that, if you always set your credence to "very low", then you have attained the proper level of belief in everything, and so you have discharged your duty to be rational.
↑ comment by [deleted] · 2013-03-10T19:04:37.434Z · LW(p) · GW(p)
Mistrust of mistrust means not occasional possible trustworthiness, but occasional actuated trustworthiness. My brain being trustworthy on occasion is not a so-what conclusion for me. Out of that comes attempts to identify when those occasions might be and when they are not happening but appear to be happening. I'm using the flaws to identify the strengths.
'My brain always trusts my brain to never be trustworthy' - I think that is what EY just said, but I could be mistaken.
↑ comment by [deleted] · 2013-03-10T18:29:11.991Z · LW(p) · GW(p)
Is there anything, except brains, that is (non-metaphorically) trustworthy? Nothing else in the universe has any care for the truth.
Replies from: Vladimir_Nesov

↑ comment by Vladimir_Nesov · 2013-03-10T19:06:07.859Z · LW(p) · GW(p)
If we have nothing more seaworthy than an old rotten canoe, the canoe doesn't thereby become a safe means of sailing.
Replies from: None

↑ comment by [deleted] · 2013-03-10T19:58:07.725Z · LW(p) · GW(p)
Point taken, but if the only oceangoing thing we've ever encountered or, in any real detail, imagined is this old rotten canoe, one might be excused for finding the notion that "this canoe is not seaworthy" a little strange. At that point, we don't even have reason to think that seaworthiness admits of variation, nor do we have any way of disentangling the capacities of this canoe from the properties of the ocean.
Though perhaps I've gotten this backwards. Maybe 'the brain is not trustworthy' is intended to be metaphorical language.
Replies from: Vladimir_Nesov

↑ comment by Vladimir_Nesov · 2013-03-10T20:15:06.944Z · LW(p) · GW(p)
one might be excused
This doesn't seem like a relevant concern.
Replies from: None

↑ comment by [deleted] · 2013-03-10T20:29:28.228Z · LW(p) · GW(p)
I'm sorry, I was speaking elliptically. I meant that your canoe metaphor is misleading, because you're suggesting a world in which the only seagoing vessel I know of is this canoe, while at the same time trading on my actual knowledge of much more seaworthy vessels. This is a problem, given that my whole point is: what meaning can a term like "trustworthiness" have if we deny it generally to the only thing capable of being trustworthy?
But I think I've decided to take Neotenic's post and EY's comment as metaphor, so I drop my objection.
Replies from: Vladimir_Nesov

↑ comment by Vladimir_Nesov · 2013-03-10T20:51:52.557Z · LW(p) · GW(p)
By "trustworthiness" I understand something like probability of error, or accuracy or results, just as "seaworthiness" refers to capability of surviving trips of given difficulty. These properties don't depend on availability of better tools, and so absence of better tools is not a relevant consideration in deciding the state of these properties. The absence of better tools might mislead one to overestimate the quality of available tools, but now that we've noticed that, let's stop being misled.
Replies from: None

↑ comment by [deleted] · 2013-03-10T21:08:27.431Z · LW(p) · GW(p)
These properties don't depend on the availability of better tools
The properties themselves do not, but that's not the problem. Our ability to identify errors in our reasoning hangs on our ability to get that very reasoning right at some point. And getting it right some of the time isn't enough; we have to know that we got it right in order to know that we previously made an error. Since all we have are brains, we can only say that brains are untrustworthy if some other brains, or the same brain at some other time, are trustworthy (not just correct).
What I mean is that the idea of "trustworthiness" only has meaning in the sentence "brains in general are untrustworthy" if that sentence is false. Some brains must be trustworthy some of the time, or else we'd never know the difference. EDIT: And in fact everything we know about trustworthiness, we learned from trustworthy brains.
We can of course wish that brains in general were more trustworthy than they are and that's what I take the original comment to mean.
Replies from: fubarobfusco

↑ comment by fubarobfusco · 2013-03-11T01:30:15.007Z · LW(p) · GW(p)
Our ability to identify errors in our reasoning hangs on our ability to get that very reasoning right at some point.
Careful with "identify" there. If I come up with a proof that 1=2, I can recognize it's not right without thereby also knowing which step is wrong.
(previous title: Very low cognitive load)
"We can't trust brains when taken as a whole. Why should we trust their subareas?" (Sean Thomason)
Cognitive load is the load placed on the executive control of working memory. Whatever you are doing, the more parallel, extraneous cognitive load you are under, the worse you will do it. (The process may be the same as what the literature calls "ego depletion" or "system 2 depletion"; the jury is still out on that.)
If you go here, enter 0 as the lower limit and 1,000,000 as the upper limit, and try to keep the resulting number in mind until you are done reading the post and comments, you'll get a bit of load while you read this post.
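If you'd rather generate the number locally, a minimal Python sketch does the same job (the range matches the limits suggested above; the assumption is that any uniform random integer source serves the purpose):

```python
import random

# Draw one number in the post's suggested range [0, 1,000,000]
# and try to hold it in working memory while you keep reading.
number = random.randint(0, 1_000_000)
print(f"Memorize this, then keep reading: {number}")
```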
Now, you may process numbers verbally, visually, or both. More generally, anything you keep in mind is likely stored in a part of the brain primarily concerned with some sensory modality, so it will have a "flavour", "shape", "location", "sound", or "proprioceptive location". It is harder to consciously memorize things using odours, since olfactory signals take shortcuts within the brain.
Let us examine in turn two domains in which understanding cognitive load can help you win: Moral Games/Dilemmas and Personal Policy.
Moral Games/Dilemmas
In the Dictator game (you're given $20 and can give any amount to a stranger, keeping the rest), the effect of load is negligible.
In the tested versions of the trolley problems (kill / indirectly kill / let die one person to save five), people are likely to become less utilitarian when under non-visual load. The presumed mechanism is that the higher functions of the brain (in the ventromedial prefrontal cortex), which integrate higher moral judgement with the emotional "taste buttons", fail to do so under load, leaving the "fast thinking", emotional mode as the only one reacting.
Visual information about the problem brings into salience the gory aspects of killing someone, along with other low-level features that incline us against utilitarian decisions. So when visual load requires you to memorize something else, like a drawing of a bird, you become more utilitarian, since you fail to visualize the one person being killed (whom we visualize more vividly than the five) in as much gory detail (Greene et al., 2011).
Bednar et al. (2012) show that when people play two games simultaneously, the strategy of one spills over into the other. Critically, subjects favored heuristics that were useful in both games, increasing the likelihood that those heuristics would be suboptimal in each individual game.
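To see why a heuristic shared across games is costly, here is a toy model (this is not Bednar et al.'s experimental design; the games, payoffs, and cutoff strategy are invented purely for illustration):

```python
# Two toy games: you see an integer offer 0..9 (uniform) and either
# accept or reject it. Rejecting pays 0; accepting pays (offer - cost),
# with cost 3 in game A and cost 7 in game B. An agent forced to play
# both games with ONE shared cutoff does worse than one cutoff per game.

def expected_payoff(cutoff: int, cost: int) -> float:
    """Expected payoff of 'accept iff offer >= cutoff' against a uniform offer."""
    return sum(offer - cost for offer in range(10) if offer >= cutoff) / 10

best_a = max(expected_payoff(c, 3) for c in range(11))  # cutoff tailored to A
best_b = max(expected_payoff(c, 7) for c in range(11))  # cutoff tailored to B
best_shared = max(expected_payoff(c, 3) + expected_payoff(c, 7)
                  for c in range(11))                   # one cutoff for both games

print(f"tailored cutoffs: {best_a + best_b:.2f}, shared cutoff: {best_shared:.2f}")
# tailored cutoffs: 2.40, shared cutoff: 2.00
```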
In altruistic donation scenarios, with donations to suffering people at stake, more load increased scope insensitivity; with less load, donations were more proportional to how many people were suffering (Small et al., 2007). Priming works in the opposite direction from load: by exercising an area/module without keeping information stored in it, it leaves free usable capacity. Dickert et al. (2010) show that priming for empathy increases donation amounts (but not the decision to donate), whereas priming for calculation decreases them.
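To make "scope insensitivity" concrete, here is a toy model of the load effect; the power-law form and the mapping from load to the exponent are assumptions made up for illustration, not anything fitted to Small et al.'s data:

```python
# Hypothetical model: donations scale as victims ** alpha with
# alpha = 1 - load. With no load (alpha = 1), donations are proportional
# to the number of victims; under heavy load (alpha near 0), they go
# nearly flat, i.e. scope-insensitive.

def donation(victims: int, load: float, base: float = 10.0) -> float:
    """Dollars donated for a given victim count; load is in [0, 1]."""
    alpha = 1.0 - load
    return base * victims ** alpha

for load in (0.0, 0.5, 0.9):
    amounts = [round(donation(n, load), 1) for n in (1, 10, 100)]
    print(f"load={load}: donations for 1/10/100 victims -> {amounts}")
# load=0.0: [10.0, 100.0, 1000.0]  (proportional to scope)
# load=0.9: [10.0, 12.6, 15.8]     (nearly scope-insensitive)
```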
Taken together, these studies indicate that to make people donate more, the most effective procedure is: first prime them to think about how they will feel about themselves, and prime empathic feelings; then have them empathize, non-visually, with someone of their own race; after all that, give them a number and a drawing to keep in mind, and that is the optimal time to ask for the donation.
Personal Policy
If given a choice between a high-carb food and a low-carb one, people on diets are substantially more likely to choose the high-carb one if they are keeping some information in mind.
Forgetful people, and those with ADHD, know that for them, out of sight means out of mind. Through luck, intelligence, blind error, or psychological help, they learn to put things literally in front of them, to avoid "losing them" in some corner of their minds. They have a lower storage capacity for executive memory tasks.
Positive psychologists advise us to place our daily tasks, especially the ones we are always reluctant to start, in very visible places. Alternatively, we can make the commitment to start them smaller, but this only works if we actually remember to do them.
Marketing appropriates cognitive load in a terrible way: marketers know that if we are overwhelmed with information, we are more likely to agree, so they give us more information than we need, and we aren't left with enough brain to decide well. One more reason to keep advertisements out of sight and out of mind.
Effective use of Cognitive Load
Once you understand how it works, it is simple to use cognitive load as a tool: