Comment by bruce_britton on Why is the Future So Absurd? · 2007-09-07T16:30:12.000Z · LW · GW


Your project of working through the cognitive biases for which we have evidence, considering each one separately, is an excellent one, and as likely as anything to lead to cumulative effects on this blog. I applaud what you are doing.

Comment by bruce_britton on Two More Things to Unlearn from School · 2007-07-15T12:15:01.000Z · LW · GW

On the proposition that 'knowing that you are confused is essential for learning': there is a structural equation model, tested empirically on 200+ subjects, which concludes that knowing-that-you-don't-understand is an essential prerequisite for learning, in the sense that people who have that ability learn much better than those who do not. Three other individual-difference variables are also involved, but they only come into play after the person realizes that they don't understand something. It's called 'Learning from instructional text: Test of an individual differences model' and is in the Journal of Educational Psychology (1998), 90, 476-491.

Another well-known study was of students learning a computer language from a computer tutoring program. All their keystrokes during learning were captured for analysis, and the biggest correlate of successful learning was the number of times they pushed a button labeled 'I don't understand.' (From John Anderson's group at Carnegie Mellon.)

Another famous result came from the notorious California State Legislature-mandated study of self-esteem in high school seniors: the students with the highest self-esteem when they graduated -- they thought they already knew everything -- were those with the lowest self-esteem the next year, because they couldn't keep a job -- they thought they already knew everything.

Comment by bruce_britton on Consolidated Nature of Morality Thread · 2007-04-16T01:38:50.000Z · LW · GW

On the difference between moral judgements and factual beliefs, I find it helpful to think like this:

To give some plausibility to 'idealist' philosophies like Plato's, we can point to certain things which, while they certainly exist, would not exist if there were not minds, like humor.

In the same way, moral judgements certainly exist, but they would not exist if there were not minds. Moral judgements do not correspond to things in the outside world.

Facts, on the other hand, correspond to things outside minds, and factual beliefs are things inside minds that correspond to things outside minds.

This relates to a few of your points as follows:

Your point 1: There is a difference between your factual belief and your moral judgement in that the first corresponds to things in the outside world and the second corresponds to things in the mind.

Your point 5: You can truly assert that the car is green by referring to the outside world, and you can truly assert that human deaths are bad by referring to your own mind. But you cannot truly assert that the car is green by referring only to your own mind, nor can you truly assert that human deaths are bad by referring only to the outside world.

Your point 8: The place in the environment where moral judgements are stored is in your mind.

The cognitive bias that confuses us about the difference between moral judgements and factual beliefs is a version of the 'notational bias,' namely the 'reification error,' which causes us to think that because moral judgements are nouns, stated in sentences like factual statements, they have an existence as objects.

Comment by bruce_britton on Tsuyoku vs. the Egalitarian Instinct · 2007-04-02T02:24:30.000Z · LW · GW

Richard, you say you do not know whether there is a way to settle this particular disagreement. I believe there may be, but only if we are explicitly specific about what we mean, and only if we agree about what we mean; then it may be settled to the satisfaction of both of us. If we can't be explicitly specific in that way, we can't settle it. My view is quite a standard one; I don't claim it's original. I agree that disagreements involving values are difficult to settle; we would have to agree about values, or agree to disagree.

The argument I gave was Aristotle's, and I have been unable to find any flaw in it. It seems to compel my assent.

However, others have denied it, most famously Augustine, who said that seeking and finding God is above happiness. That is, God is the ultimate goal, not happiness.

This would make sense to me if I could believe in the existence of God, but usually I can't, whereas I can believe in the terms of my application of Aristotle's argument ('maximizing perceiving reality correctly' and 'maximizing happiness').

And it seems possible that you are seeking something like what other people mean by God. I get this specifically from your 'if there is not an objectively valid proper ultimate goal, then life has no meaning,' your willingness to 'step outside the causal chain,' and your concern with suffering; these all remind me of talk about God. The suffering of sentient beings has a Buddhist flavor.

So maybe our disagreement has to do with you being able to believe in this God-type idea, and me not. Could this be it?

Comment by bruce_britton on Tsuyoku vs. the Egalitarian Instinct · 2007-03-30T15:12:59.000Z · LW · GW

Richard and Robin: I wonder if it is possible to settle this disagreement between Richard and me. (I realize this changes the subject from the disagreement itself to ways of settling it, but that has some relevance to settling it.)

For it to be possible to settle it, we would have to both desire to settle it. Then we could take various routes.

The Scienceoid route would require us to formulate the question, definition of terms, etc, so that some set of operations could be agreed between us to settle it, and then we'd just have to do the operations.

Or we could take various routes that lead to concluding that settling it is impossible, such as appealing to the fundamental privacy of mental life: even if I said that my policy was to make happiness only a means to an end, and that I was successful in doing so, I might be lying or deluded or a robot, etc.

Or we could require unanimity, like the Polish Parliament at one time, with only 1 person being enough to sink the proposition.

Or we could take an 'ordinary language' philosophy route, asking 'what do we mean when we say...' etc.

I'd guess both of us know how to take each of these routes, but how do we choose which route(s) to take?

I'm now going to go into the Confessional Mode, which might itself be a route for this blog to take: everyone focuses on their own biases, observed introspectively, instead of trying to identify others' biases.

My guess is that I would have a strong tendency to take the route that makes it most likely that I would win, just like a child: a myside bias.

My first impulse was to pick out parts of your comment that seem to favor my side, such as your 'in unrehearsed situations without enough cognitive resources for deliberation...' etc. or 'I will grant you that for almost all people, happiness is...' Then I would say that you really agree with me, and I would refer back to my caveat about 'scenario-making.'

I also considered doing my 'multiple selves' schtick, but rejected it because it didn't seem to 'fit', by which I think I meant it would sound silly.

I do think that using natural language to state and argue about this makes it less likely that we will be able to settle it, because we (I guess by 'we' I mean 'I') are likely to fiddle with the meanings of words; but what is the alternative?

Out of Confessional mode, into Meta mode.

But none of these ways of resolving the dispute satisfies me. I'm wondering if disputes on this blog ever do get resolved. I do think they can sometimes get resolved in science.

But success for this blog seems dependent on being able to make progress, and this seems to require that we can settle things, so we can move on and build on the things we have settled.

Maybe the Confessional Mode is the way to go?

Comment by bruce_britton on Tsuyoku vs. the Egalitarian Instinct · 2007-03-29T20:19:18.000Z · LW · GW

Robin, it's easy to see which is the higher of the two goals of maximizing happiness and maximizing one's own ability to perceive reality correctly:

Anyone can easily imagine wanting to maximize perceiving reality correctly IN ORDER TO maximize one's happiness.

But one can't imagine wanting to maximize one's happiness IN ORDER TO maximize perceiving reality correctly.

The latter statement makes no sense, or if you force some sense upon it by scenario-making, it is still a very limited kind of sense.

It seems to me that this proves that maximizing happiness is a higher goal than perceiving reality correctly.

The argument is not my own; it is Aristotle's.