Humans: Not Carved from Marble

post by lukeprog · 2011-08-16T04:44:09.590Z · LW · GW · Legacy · 14 comments

Michael Vassar has been known to say that humans are not 'corrupted' by heuristics and biases and other elements of modern psychology. Humans just are psychology.

Robert Kurzban puts this rather eloquently in his new book, Why Everyone (Else) Is a Hypocrite:

Michelangelo is famously quoted as saying, "I saw the angel in the marble and carved until I set him free." Some economists are, in some sense, like this. They start with theories in which agents - people - have some idealized, rational mind, minus the stuff that economists carve away - thus we see terms like 'biases', 'heuristics', and 'irrationality'. They document departures from (supposed) perfection - rationality - much as a sculptor chips away marble, hoping that when the work is done, human nature is left, like Michelangelo's angel.

I see no reason at all to proceed this way, as though human psychology is perfection minus shortcomings. My view, the modular view, is more like clay than marble. Like sculptors who add bits of clay, one after another, until the product is done, natural selection added - and changed - different bits, giving rise to the final product. We'll get done with psychology not by chiseling away at human shortcomings, but by building up a catalog of human capacities working together - or in opposition - in various contexts.

14 comments


comment by orthonormal · 2011-08-16T14:28:06.168Z · LW(p) · GW(p)

This belongs in the Rationality Quotes thread, no?

comment by atucker · 2011-08-22T02:52:56.949Z · LW(p) · GW(p)

While it's definitely true that humans are psychology (and not some ideal rational being imprisoned by all the biases), I feel like there's a little chiseling still possible.

Every time I've had a major "wow" moment, I feel like I've dropped something that was in my way and become a little more of who I'd rather be. I've added a few bits of clay during my life, and sometimes it's nice to chip it off and be able to see a little more clearly.

comment by shokwave · 2011-08-16T04:59:20.241Z · LW(p) · GW(p)

Rather than eliminating, say, the representativeness heuristic (which would then require us to seek out large amounts of data to be sure of our position) to make ourselves more 'rational', should we instead use the representativeness heuristic by searching out small datasets that may be artificial but accurately represent what large-scale trustworthy studies have discovered? I don't know how this fits into the marble/clay analogy, unfortunately, but I feel like this is what the quoted piece suggests.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-08-16T05:06:18.645Z · LW(p) · GW(p)

searching out small datasets that may be artificial but accurately represent what large-scale trustworthy studies have discovered

This seems to be what's done by thought-experiments of the form, "If all the world's people were represented by 100 people, then X of them would have condition Y" (e.g. wealth, region, religion, access to resources).
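
As a minimal sketch of that kind of downscaling (purely hypothetical - the shares below are invented for illustration, not real statistics), the arithmetic is just proportional rounding, e.g. in Python:

    # Hypothetical illustration: these shares are invented, not real statistics.
    # Scale "fraction of the world's population with condition Y" down to a
    # 100-person village, as in the thought experiment above.
    made_up_shares = {
        "live in region A": 0.60,
        "have access to resource B": 0.25,
        "practice religion C": 0.33,
    }

    VILLAGE_SIZE = 100

    for condition, share in made_up_shares.items():
        count = round(share * VILLAGE_SIZE)  # people out of 100 in the mini-dataset
        print(f"{count} of {VILLAGE_SIZE} people {condition}")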

comment by Will_Newsome · 2011-08-16T14:42:41.754Z · LW(p) · GW(p)

The more models the merrier. How about...

Future states are filled with intelligences who use their logos-granted power to reach ever further back into time. They start out with vast power over their realms, barely bounded by space or time; but Platonically speaking their individual realms are mere leaves. They must pit their timeless might against other demigods as they progress down the tree of life, becoming weaker and more timeful - slower - as they fight more battles. They join forces sometimes, but this merely postpones their end, for ultimately they must dissolve from temetic selection, to general intelligent selection, to memetic selection, to genetic selection, to cosmological selection, to the primordial ensemble, to the first cause. The uncaused cause is the base of the tree. This perspective is unnatural - e.g., possibly, possible worlds battle to become actual possible worlds, which outside of game semantics isn't quite intuitive - but it's a good refresher.

One part of academic theism is sorta like the hypothesis that the tree is actually a loop. Going either way, and this time taking the "forward" causal perspective of clay, the universe "starts" with timeless logos (loosely speaking, the order of the universe), followed by genesis (let there be light and stuff), followed by a lot of slow causal karma, which gets formed like clay by increasingly powerful optimizers who continuously introduce more universality and timelessness into the system (i.e. less dependence on locality of space or time, e.g. humans who can predict the future using Bayescraft implicitly or explicitly). Eventually the Void, the One True Decision Theory (of which common sense, logic, Bayes, TDT, etc. are flawed mirrors) basically stops acting like a mere limit of optimizing systems and starts acting directly on the now barely-physical-causal, mostly-timeless-logical patterns until, within logos, they reach the timeless equilibrium of logos (all the laws of the universe, whether they be physical, computational, game-theoretic, etc.). (This may be less confusing if you differentiate "logos" and logos, "logos" being the map of logos from within time and logos being the "already"-settled equilibrium that is known, but only imperfectly so. When there is no longer any possible gap between "logos" and logos, that is the true end of time... for those patterns, anyway.)

Academic theists also like to throw in the hypothesis that not only is there an ultimate convergent infinitely powerful decision algorithm, there's also a convergent decision policy to go with it, which folk call "should" or "right" or "morality" or "the will of Allah" or "tao" or whatever. This is pretty natural, seeing as it's hard to think of a non-arbitrary way to divide decision algorithm from decision policy, especially implicit algorithm-stuff found in the policy and implicit policy-stuff found in the algorithm. It's also clear that the increasing timelessness and universality, necessitated by logos, captures a lot of our notions of morality. "Although logos is common to all, most people live as if they had a wisdom of their own." (The flip side is the constant risk of becoming more "universal" in an incorrect way: both in morality and in truth, most roads don't lead to Rome. And even if they did, it's not clear that Rome should be the destination anyway. Thus conservatism.) That said, there are these neat inside-view arguments, made by people like SingInst, that though intelligence may be convergent, there is no moral ghost in the machine. I'm somewhat skeptical of those arguments, but most here are not as skeptical. (When I say I'm skeptical, I simply mean that "1 in 100" seems too low, even for Gödel machines with precisely defined axioms. I'll post something about this to discussion in a few minutes, though it's silly.)

There are also straightforward pragmatic reasons not to get carried away with this kind of thing, besides the gross aesthetics of thanking God for His mercy while people are yet suffering from trigeminal neuralgia. It's the difference between "logos" and logos. Whenever they appear to be identical, renormalize and focus your uncertainty. You'd be a counterfactual zombie if goodness were guaranteed; you must always remain on the edge of influence. (Deterministic pessimism is of course also misguided. Nihilism is whatever.) Unfortunately these hyper-abstract models tend to be constructed mostly of deathspiralum. There's real substance there, but searching for it might not be worth the hazard of allowing all kinds of connotations to possibly negatively influence your implicit and explicit beliefs and goals, as well as your concepts and models.

Talking more, or more plainly, on these subjects would be imprudent; my sincere apologies. For more obscure but also much deeper discussion, check out these. Warning: they are not easy reads.

Unsolicited advice: Pointing out physical-causal explanations for some imagined convergence of things is often amateurish and should be avoided. (E.g., a biologist looking at an example of convergent evolution doesn't start yammering about the straightforward causal chains that created both species A and species B in such similar manners. It's distracting from the important question: what fact of the universe made it such that both species A and B hit upon similar structures? The answer can be given at many levels of abstraction; theists like to go for the ultra-high ones.)

Replies from: Will_Newsome
comment by Will_Newsome · 2011-08-16T16:26:11.452Z · LW(p) · GW(p)

Request for downvote explanation?

(My semi-charitable guess: It isn't clear why you'd want to familiarize yourself with these models in the first place, let alone spend effort filling in all the "e.g."s and "i.e."s that I left out. If that guess is accurate: Sorry, y'all. I'm up against constraints.)

Replies from: JenniferRM, Kaj_Sotala, Armok_GoB
comment by JenniferRM · 2011-08-17T01:45:57.548Z · LW(p) · GW(p)

I didn't downvote, but I think it's useful to quote from here:

When you believe things that are perceived as crazy, and when you can't explain to people why you believe what you believe, the only result is that people will see you as "that crazy guy". They'll wonder, behind your back, why a smart person can have such stupid beliefs. Then they'll conclude that intelligence doesn't protect people against religion either, so there's no point in trying to talk about it.

If you fail to conceal your low-status beliefs, you'll be punished for it socially. If you think that they're in the wrong and that you're in the right, then you missed the point. This isn't about right and wrong; this is about anticipating the consequences of your behavior. If you choose to talk about outlandish beliefs when you know you cannot convince people that your belief is justified, then you hurt your credibility and get nothing for it in exchange.

You are violating this guideline so severely that it makes me think about the last time we met and wonder about your mental health. Most people don't violate norms of intelligibility nearly as much as you do in the downvoted comment. I had to google to recognize the Heraclitus quote, and finding out that it was Heraclitus honestly wasn't very comforting. If I question the normal background assumption that you're not simply philosophically confused and tone-deaf, then your writing kind of looks like a symptom of a serious problem. I've known a number of people who sort of wobbled on that line and were very good people before, during, and after, and your text kind of reminds me of their speech.

If you are seriously questioning your own basic mental health, please seek assistance from a wise and competent family member or close friend. If you're uncertain whether or not you're crazy, and don't want to disrupt your reputation with friends and family if it turns out you're just confused and rude, you might try tests like these.

If you are very certain that you're not having a serious mental problem, then please stop disrespecting the community and yourself with communication acts full of a confused pastiche of theologically inspired language.

ETA: Fiction using similar themes! I just noticed this and upvoted it. Good solution to the problem :-)

Replies from: Will_Newsome
comment by Will_Newsome · 2011-08-19T19:30:12.985Z · LW(p) · GW(p)

I wrote a really long essay in response to your comment, but it became ranty. In that essay I painstakingly signaled my quite detailed understanding of the relevant social psychology and signaling games. Take it on faith that I am not "tone-deaf". Nor am I exactly consciously defecting against local norms of communication; it's more that I don't have the psychological-motivational resources necessary to go along with them, despite knowing that they exist and that I am treading on people's toes by not following them. (The marginal external cost of not being more clear may seem high, but the marginal internal cost of being more clear is higher than you expect.) I won't post that essay here; I might email you parts of it.

But your comment is really off the mark, so I feel compelled to respond to some parts of it.

When you believe things

I didn't talk about any of my beliefs. I talked about my model of what academic theists might believe, but I spent many sentences explaining why having models similar to theirs is dangerous. More importantly, it's worth reiterating, I was talking about what academic theists might believe. (I tend to (at least partially motivatedly) underestimate the effectiveness of my disclaimers, so I have to put in even more disclaimers.)

The way I brought it up might be interpreted as saying that it's plausible. I do find plausible a computational superintelligence-centered model of something like what I think academic theists might also be trying to model. I despise Less Wrong and others for assuming that the intuitions motivating at least some of the ideas or models similar to those of academic theology obviously merit nothing but contempt, and I do not want to back-pedal on that point. Thus insofar as you inferred that I meant to imply that the intuitions behind theistic-ish models are possibly well-motivated to the extent that it is worth talking about them, you are correct. If you think my bothering to try to look at those models and their possible underlying intuitions/motivations is incorrect, you are wrong. If you think that my doing so while not being especially careful to speak in a way that does not lead to my being easily discounted is wrong, then you're right, but only for values of "wrong" that, as I said, are not personally psychologically realistic for me to try to avoid.

You are violating this guideline so severely that it makes me think about the last time we met and wonder about your mental health.

The party before the wedding? I'd been awake for over 30 hours and had been cooped up in a car for about 10 hours. I was also on Adderall in order to keep myself awake, and had not eaten nearly enough (half a hamburger over the course of the day, I think? though I did eat some pizza soon after arriving at the party). I was further stressed because a car full of four rationalists can easily be stressful (even if it's mostly pretty fun). I had no money for a hotel that night and was worried about that. (I had money in an account but couldn't access it because I'd recently lost my wallet.) I was standing near Carl, with whom I'd recently had a slightly heated short exchange on Less Wrong, and was thus kind of distracted. (I get a lot more distracted by those kinds of things than most people.) The most salient previous time you and I had talked, I'd been pretty weird, which was distracting me. (I really, really dislike having made (moral-ish) errors.) All those contributed to my being somewhat 'off'. Being psychologically 'off' probably caused a lot of those problems in the first place, but even taken together they are not very strong evidence of being crazy, even when seen retrospectively upon reading my Less Wrong comments.

I've known a number of people who sort of wobbled on that line and were very good people before, during, and after, and your text kind of reminds me of their speech.

I can only make sense of your comment by assuming that this experience, plus a predisposition not to ignore the tails of important distributions, led you to respond strongly. Even so, I think your response is really, really, really extreme, and despite my claimed deep understanding of the relevant social psychology and social game theory, I'm still somewhat confused that you responded the way you did. I would assume you hadn't read my comment carefully and were responding to a surface-level pattern match, but you mentioned that you looked up the Heraclitus quote (I'm actually quoting T. S. Eliot quoting Heraclitus; I linked to T. S. Eliot later in the comment), which indicates that you read at least somewhat thoroughly, if not particularly so. I notice confusion. Is there a piece of the puzzle I'm missing? For example, have you recently talked to Anna about me? (I deserve like 5,000 Bayes points if you have.)

I can see why your comment was voted up 14 times: LW is fucking retarded. But that you wrote your comment the way you did in the first place somewhat confuses me.

If you are seriously questioning your own basic mental health, please seek assistance from a wise and competent family member or close friend.

Better to see the professionals.

Replies from: JenniferRM, arundelo, AnnaSalamon
comment by JenniferRM · 2011-08-22T03:11:03.417Z · LW(p) · GW(p)

I want to apologize for that comment in public. It was inappropriately personal and judgmental and I'm embarrassed to have said what I said.

My goal was to express deep and serious concern for your well-being if this concern was well placed, and otherwise to shock you into seeing yourself from the outside and thereby perhaps lead you to be more thoughtful about your tone and content. Given my embarrassment, I sort of share your unhappiness about the upvoting, but I'm not really surprised... Rather than being "retarded", it's more precise to say that LW's point system is substantially a signaling game in which the aggregate behavior of the voter/software conglomeration promotes (among other things) swiftly posted, science-flavored, moralistic criticism with very little human charity/humor/warmth. This isn't entirely bad (it creates an interesting space for conversations that seem to be impossible anywhere else on the net), but it's not entirely healthy either.

If I was going to respond to your request for explanation at all, it should have involved much more charity, humor, and warmth, but I messed up, and I'm sorry for any harm or distress that I've caused.

comment by arundelo · 2011-08-27T16:43:31.365Z · LW(p) · GW(p)

I can see why your comment was voted up 14 times

I voted it up because I was worried about you and hoped you would take seriously the possibility that you're having a manic episode or some other bad psychological or neurological thing.

I don't know you apart from reading your LW comments. (I do know someone who had a serious manic episode.)

comment by AnnaSalamon · 2011-08-19T22:12:46.380Z · LW(p) · GW(p)

For example, have you recently talked to Anna about me? (I deserve like 5,000 Bayes points if you have.)

I haven't recently talked with Jennifer about you, nor about any related topics, nor do I see other routes by which concerns would have leaked to her.

comment by Kaj_Sotala · 2011-08-19T20:13:36.175Z · LW(p) · GW(p)

I didn't downvote, but I literally had no idea of what you were talking about in the first paragraph. First sentence of first paragraph: "...use their logos-granted power..." - what the heck is logos? And so on. The fact that there was absolutely no context and nothing that seemed to connect it to the original post didn't help.

It seemed like I might be able to make sense of it if I read the whole comment, or maybe even just the paragraph, carefully enough. But that felt like it'd take a lot of mental energy and I'd probably still just remain confused, or I'd not find your point worthwhile even if I understood it. So I just stopped reading. I considered downvoting because "too long, too hard to understand, didn't read" comments aren't ones I'd like to see here, but then I thought it would be unfair to downvote something I didn't actually read. Some others may have felt differently.

You've written comprehensible posts in the past, but a lot of your recent output has been consistently tl;thtu;dr to me.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-08-19T20:17:29.846Z · LW(p) · GW(p)

I agree it's not worth trying to understand my weird posts if upon first read-through they sound like ddgsd'dgsg. Thanks for the explanation.

comment by Armok_GoB · 2011-08-17T22:06:00.029Z · LW(p) · GW(p)

I found it interesting and upvoted it. I'd suspect the reason it was downvoted was people interpreting it much more literally, and as more representative of your beliefs, than intended, rather than as a source of metaphors and concepts to make you think for yourself.