Comment by poke on Is That Your True Rejection? · 2008-12-06T15:37:45.000Z · LW · GW

Most transhumanist ideas fall under the category of "not even wrong." Drexler's Nanosystems is ignored because it's a work of "speculative engineering" that doesn't address any of the questions a chemist would pose (i.e., regarding synthesis). It's a non-event. It shows that you can make fancy molecular structures under certain computational models. SI is similar. What do you expect a scientist to say about SI? Sure, they can't disprove the notion, but there's nothing for them to discuss either. The transhumanist community has a tendency to argue for its positions along the lines of "you can't prove this isn't possible" which is completely uninteresting from a practical viewpoint.

If I were going to unpack "you should get a PhD" I'd say the intention is along the lines of: you should attempt to tackle something tractable before you start speculating on Big Ideas. If you had a PhD, maybe you'd be more cautious. If you had a PhD, maybe you'd be able to step outside the incestuous milieu of pop-sci musings you find yourself trapped in. There are two things you get from a formal education: one is broad, you're exposed to a variety of subject matter that you're unlikely to encounter as an autodidact; the other is specific, you're forced to focus on problems you'd likely dismiss as trivial as an autodidact. Both offer strong correctives to preconceptions.

As for why people are less likely to express the same concern when the topic is rationality: there's a long tradition of disrespect for formal education when it comes to dispensing advice. Your discussions of rationality usually have the format of sage advice rather than scientific analysis. Nobody cares if Dr. Phil is a real doctor.

Comment by poke on Competent Elites · 2008-09-27T22:40:22.000Z · LW · GW

It's interesting that you mention Rodney Brooks. I've always found his work poorly written and lacking in clarity despite being sympathetic to his views. He must come across better in person. As Shane points out though, Brooks' work has the rare quality in AI that it is productive and has found widespread application in industry.

As for the Venture Capitalists, I don't find it surprising that Silicon Valley VCs share some of your interests. It's like discovering that software engineers share an interest in AD&D and collectibles. All these guys are enthusiastic about evolutionary psychology and cognitive science and such. I wonder if your perception of competence is a product of the "keyword search" approach to assessing other people that you frequently apply here; if they mention "evolution" and "probability" enough they get to be smart.

Comment by poke on Excluding the Supernatural · 2008-09-12T19:04:49.000Z · LW · GW

Most sane and intelligent people with religious tendencies (and there are many, although they don't seem to get much press) understand that if "god" means anything, it is a pointer towards something unknown and perhaps unknowable, and arguing about whether it exists in the physical sense is missing the point completely.

This is just a version of my second option available to the theist. There's a knowable "physical" world and an unknowable one beyond it. There's no reason to believe this is the case. Moreover, if you believed something like this, you would be able to say "I'm an atheist about the physical world" and we could all agree on that and discuss whether talk of "something beyond the physical world" is coherent. You would also agree that science has established atheism about the physical world. Which is just my claim.

Matthew C. - I'm referring to neuroscience.

Comment by poke on Excluding the Supernatural · 2008-09-12T14:29:00.000Z · LW · GW

This is why I claim that atheism is an established scientific result. One of the strongest lines of evidence is, indeed, that we have successfully reduced minds and shown the notion of an irreducible mind to be incoherent. Mind as an irreducible simple is basic to all monotheistic religions. Demonstrating something once thought coherent to be incoherent is, of course, one of the strongest lines of evidence in science. Other avenues through which atheism has been established by science include conservation in physics, chemistry and biology (which led directly to materialism), evolution, and the development of plausible sociological accounts of religion. I would argue that atheism is as well established as Plate Tectonics and Natural Selection. What I think is telling is that most contemporary approaches to religious apologetics implicitly recognize that science has established atheism.

The theist has three avenues of response. The first is to attack specific parts of science. This is what Fundamentalist Christians do. The second, by far the most popular, is to attack the very possibility of scientific knowledge. This is what nearly all "liberal" religious believers who claim there is no conflict between science and religion do. They generally adopt a skeptical epistemology, holding that no knowledge claim can be true, or instrumentalism about science, holding that scientific claims are nonfactual, or a quasi-Kantian constructivist metaphysics wherein "true" reality is forever out of reach. The weird thing is that this position, which essentially rejects all of science, is considered more "sophisticated" and acceptable than the Fundamentalist position which rejects only select parts of science but remains realist about the rest. The third approach is to adopt some sort of nonfactualism about religious claims; essentially to hold that your religious practice is merely tradition. I think this nearly exhausts contemporary positions on religious apologetics and is therefore evidence that people implicitly accept that science has established atheism.

Comment by poke on Against Modal Logics · 2008-08-27T23:44:50.000Z · LW · GW

It's true that contemporary philosophy is still very much obsessed with language despite attempts by practitioners to move on. Observation is talked about in terms of observation sentences. Science is taken to be a set of statements. Realism is taken to be the doctrine that there are objects to which our statements refer. Reductionism is the ability to translate a sentence in one field into a sentence in another. The philosophy of mind concerns itself with finding a way to reconcile the lack of sentence-like structures in our brain with a perverse desire for sentence-like structures. But cognitive science is itself a development of this odd way of thinking about the world; sentences become algorithms and everything carries on the same. I don't think you're really too far removed from this tradition.

Comment by poke on When Anthropomorphism Became Stupid · 2008-08-17T01:50:23.000Z · LW · GW

If you look through a microscope you'll notice the only major difference between the nervous system and other tissues is that the nervous system exhibits network connectivity. Cells in tissues are usually arranged in such a way that they only connect to their nearest neighbor. Many tissues exhibit electrical activity, communication between cells, coordinated activity, etc., in the same way as neurons. If networks of neurons can be said to be performing computations then so can other tissues. I'm not familiar with the biology of trees but I don't see why they couldn't be said to be 'thinking' if we're going to equate thinking with computation.

Comment by poke on Hot Air Doesn't Disagree · 2008-08-16T14:57:05.000Z · LW · GW

This demonstrates quite nicely the problem with the magical notion of an "internal representation." (Actually, there are two magical notions, since both "internal" and "representation" are separately magical.) You could easily replace "internal representation" with "soul" in this essay and you'd get back the orthodox thinking about humans and animals of the last two thousand years. Given that there is neither evidence nor any chance of evidence for "internal representations" or "souls," and that neither is well-defined (or defined at all), you might as well go ahead and make the substitution. This entire essay is pure mysticism.

Comment by poke on No Logical Positivist I · 2008-08-04T11:55:54.000Z · LW · GW

It's not clear that you're a verificationist but you're clearly an Empiricist. I think that's problematic. Unless you believe something magical happens at the retina, there's no more reason to privilege what happens at the retina or in the brain than what happens in the wire connecting the dial to the voltmeter. It's all causal linkage. We can use the same standards of reliability for people as we do for wires. The sensory periphery is just not particularly interesting.

Comment by poke on The Comedy of Behaviorism · 2008-08-03T13:51:03.000Z · LW · GW

Richard Kennaway,

Whatever views his belief may be compatible with, it is not compatible with reality. That is, it is false. Intentionality is explainable in physical terms. An intention is the reference signal of a control system, and a control system is something that acts so as to maintain a perceptual signal close to its reference signal.

Everybody has a pet theory of intentionality. The problem isn't intentionality but why anybody would want to explain intentionality in the first place. You're either explaining: (a) a part of what you take to be your experience of the world; or (b) a part of our folk psychological explanations of behavior. Either way, you're not doing science to begin with, so it's unlikely you'd stumble upon science along the way.

Comment by poke on The Comedy of Behaviorism · 2008-08-03T13:17:13.000Z · LW · GW

Methodological behaviorism took private mental events to be off limits but (most) behaviorists still believed they existed. Skinner took introspection and self-knowledge to be types of behavior and explicitly denied the mental. Eliezer's analysis is correct insofar as Skinner denied the mental, but the passages about not being able to account for complex behavior are wrong. Skinner took behavior to be a product of environmental conditioning and evolved physiology.

Here's Skinner explaining radical behaviorism in the opening of About Behaviorism:

"Mentalism kept attention away from the external antecedent events which might have explained behavior, by seeming to supply an alternative explanation. Methodological behaviorism did just the reverse; by dealing exclusively with external antecedent events it turned attention away from self-observation and self-knowledge. Radical behaviorism restores some kind of balance. ... It does not call these events unobservable, and it does not dismiss them as subjective. It simply questions the nature of the object observed and the reliability of the observations."

(Note how similar the final sentence is to eliminativists like Churchland and Dennett, who emphasize that introspection is fallible.)

"The position can be stated as follows: what is felt or introspectively observed is not some nonphysical world of consciousness, mind or mental life but the observer's own body. ... An organism behaves as it does because of its current structure, but most of this is out of reach of introspection."

"[W]e can look at those features of behavior which have led people to speak of an act of will, of a sense of purpose, of experience as distinct from reality, of innate or acquired ideas, of memories, meanings, and the personal knowledge of the scientist, and of hundreds of other mentalistic things or events. Some can be '*translated into behavior*,' others discarded as unnecessary or meaningless." (Emphasis mine.)

Skinner was essentially an eliminative materialist who relied too heavily on the tools of his time (operant conditioning). He denied that the brain had the structure of folk psychology (what the behaviorists called mentalism) and emphasized conditioning and evolved physiology (he talked about evolution explicitly).

Comment by poke on The Comedy of Behaviorism · 2008-08-02T21:32:17.000Z · LW · GW

Skinner was correct that mind, intentionality, thought, desire, etc, are unscientific. Where behaviorism went wrong was ascribing behavior to conditioning and underplaying the role of biology (although Skinner never denied the importance of biology; unlike Chomsky and the computationalists). I'd accuse computationalism of being "cryptodualism" except that Chomsky's project was explicitly Cartesian and was only non-dualistic in the sense that he believed the laws of physics would have to change to incorporate non-biological computational models of the mind.

If your view is simply that the brain is performing computations and that it makes sense to talk about them in terms of algorithms then that's fine. I have no problem with that. If you're going to argue, as some philosophers do, that this somehow vindicates "the mind" and the posits of folk psychology then you're making a very different argument altogether. Skinner's belief that intentionality is on par with Aristotelian teleological physics is perfectly compatible with the first view. The notion that calling the brain a computer and talking about algorithms naturalizes dualism (i.e., the algorithms are the mind and the brain is the implementation), on the other hand, is pure mysticism.

Comment by poke on Existential Angst Factory · 2008-07-19T23:05:49.000Z · LW · GW

I wouldn't want to be a wirehead. I do things like exercise to keep my mood up now but I think of it in terms of wanting to be productive rather than happy. (I find that exercise, health and regular sleep/wake cycles are essential for this.) If you could wire me up to be smarter and more productive (intellectually), but the cost was chronic pain, I'd probably sign up for that. (I can't really imagine how you could reconcile higher productivity with chronic pain though; the experience of pain seems to necessarily involve restricted attention.)

Comment by poke on Existential Angst Factory · 2008-07-19T19:37:44.000Z · LW · GW

Andy Wood,

I'm just curious - was the despair about anything? Did it have no referent at all? You had a stable environment, good relationship with parents, self-confidence, social success, and yet still despaired? Was there no consistent content in your despairing thoughts?

I had all those things. Before I became depressed I stopped being sociable and started having problems with school attendance; I don't know if that was the cause of my depression or just an early development of it. I was certainly very bored at school and my home environment didn't offer any alternative intellectual stimulation. But, whether these things caused my depression or not, I can honestly say that I never felt it had any content. Even when the depression caused problems in my life I found the depression itself more overwhelming than the problems. I tend to be very unaffected by life events even now actually. I've considered that perhaps I had two problems and one of them was/remains an inability to fully appreciate consequences.

Comment by poke on Existential Angst Factory · 2008-07-19T16:53:03.000Z · LW · GW

I've suffered from clinical depression with absolutely zero correlation to social factors and life circumstances. Between onset at age 11 and my early 20s I experienced pervasive, uninterrupted despair. Oddly enough, it never affected my goals or terminal values, just my ability to achieve them. Then again, many people (perhaps the majority) die with many of the same goals they had in their youth, having done absolutely no work toward achieving them; so I'm not convinced explicitly held goals have a strong causal relation to behavior; perhaps having a goal is like getting a tattoo. But I digress. Biology matters a lot. I wouldn't say clinical depression is the same as being unhappy about something; even at the most basic level, there's obviously a lot more going on when someone's unhappy about a life event than if they have wonky receptors for some neurotransmitter or another. (I never experienced the sort of confabulation that makes the clinically depressed try to attach their depression to life events though; perhaps because I was young.) I think we could achieve some working simulacrum of happiness biologically though.

Comment by poke on Existential Angst Factory · 2008-07-19T14:42:34.000Z · LW · GW

What could be more exciting than embracing nihilism?

Comment by poke on Could Anything Be Right? · 2008-07-18T14:09:37.000Z · LW · GW

"Should" has obvious non-moral uses: you should open the door before attempting to walk through it. "Right" and "better" too: you need the right screwdriver; it's better to use a torque driver. We can use these words in non-problematic physical situations. I think this makes it obvious that morality is in most cases just a supernatural way of talking about consequences. "You shouldn't murder your rival" implies that there will be negative consequences to murdering your rival. If you ask the average person they'll even say, explicitly, that there will be some sort of karmic retribution for murdering your rival; bad things will happen in return. It's superstition and it's no more difficult to reject than religious claims. Don't be fooled by the sophisticated secularization performed by philosophers; for most people morality is magical thinking.

So, yes, I know something about morality; I know that it looks almost exactly like superstition exploiting terminology that has obvious real world uses. I also know that many such superstitions exist in the world and that there's rarely any harm in rejecting them. I know that we're a species that can entertain ideas of angry mountains and retributive weather, so it hardly surprises me that we can dream up entities like Fate and Justice and endow them with properties they cannot possibly have. We can find better ways for talking about, for example, the revulsion we feel at the thought of somebody murdering a rival or the sense of social duty we feel when asked to give up our seat to a pregnant woman. We don't have to accept our first attempt at understanding these things and we don't have to make subsequent theories to conform to it either.

Comment by poke on Whither Moral Progress? · 2008-07-16T14:24:40.000Z · LW · GW

As I said previously, I think "moral progress" is the heroic story we tell of social change, and I find it unlikely that these changes are really caused by moral deliberation. I'm not a cultural relativist but I think we need to be more attuned to the fact that people inside a culture are less harmed by its practices than outsiders feel they would be in that culture. You can't simply imagine how you would feel as, say, a woman in Islam. Baselines change, expectations change, and we need to keep track of these things.

As for democracy, I think there are many cases where democracy is an impediment to economic progress, and so causes standards of living to be lower. I doubt Singapore would have been better off had it been more democratic and I suspect it would have been much worse off (nowadays it probably wouldn't make a lot of difference either way). Likewise, I think Japan, Taiwan and South Korea probably benefited from relative authoritarianism during their respective periods of industrialization.

My own perspective on electoral democracy is that it's essentially symbolic and the only real benefit for developing countries is legitimacy in the eyes of the West; it's rather like a modern form of Christianization. Westerners tend to use "democracy" as a catch-all term for every good they perceive in their society and imagine having an election will somehow solve a country's problems. I think we'd be better off talking about openness, responsiveness, lawfulness and how to achieve institutional benevolence rather than elections and representation.

Now, you could argue that because I value things like economic progress, I have a moral system. I don't think it's that clear cut though. One of the distinctive features of moral philosophy is that it's tested against people's supposed moral intuitions. I value technological progress and growth in knowledge but, importantly, I would still value them if they were intuitively anti-moral. If technological progress and growth in knowledge were net harms for us as human beings I would still want to maximize them. I think many people here would agree (although perhaps they've never thought about it): if pursuing knowledge was somehow painful and depressing, I'd still want to do it, and I'd still encourage the whole of society to be ordered towards that goal.

Comment by poke on Rebelling Within Nature · 2008-07-13T15:31:00.000Z · LW · GW

I remember first having this revelation as something along the lines of: "You know when you're in love or overcome by anger, and you do stupid things, and afterward you wonder what the hell you were thinking? Well, your 'normal' emotional states are just like that, except you never get that moment of reflection to wonder what the hell you were thinking." I tried to resolve it with the kind of reflective deliberation that I think you're prescribing here. Later I adopted a sort of happy fatalism: We're trapped inside our own psychology and that's fine!

Not long after, I read the obscurantist French philosopher Alain Badiou (who I do not recommend!), and was inspired by his account of truth. Badiou takes truth to be "fidelity to the event." We are witness to a transformative event and take it upon ourselves to alter the world in its name. What I realized was (and not to disappoint my fans) the only thing that can interrupt business-as-usual for us is science. Science is the only thing truly alien to us; it's the only thing that can rupture the fatalistic clockwork playing-out of our psychology on our environment. The potential of science lies in its ability to transform us. So I adopted a sort of utilitarianism where the goal is to maximize the amount of science being done and maximize the degree to which it transforms our lives.

That's enough morality for me.

Comment by poke on Fundamental Doubts · 2008-07-13T02:47:27.000Z · LW · GW

Unknown and Hopefully Anonymous, If basing your beliefs on established science and systematically rejecting every incompatible methodology is "religion" then stick a ridiculous hat on my head and call me the Pope of Reality.

Comment by poke on Fundamental Doubts · 2008-07-12T18:03:03.000Z · LW · GW

I don't buy this sort of skepticism at all. Yes, we can imagine that the external world is an illusion, but the basic flaw is (like so much in philosophy) privileging our ability to imagine something over science. Whether we can be deceived in this way is an empirical matter. Yes, you can say "everything you learned about empirical science is part of the illusion," but all you've done is taken your ability to imagine an outcome and privileged that above scientific experiment. Science always trumps imagination. It is therefore, I think, impossible to formulate the skeptical thesis.

This is difficult to think about. Philosophy has given us a view of the world where perception is essentially a subset of imagination. We have pictures in our head and sometimes, if we're lucky, they correspond to the world. The scientific view of perception, however, is that it's just physics-as-usual. The philosophical story is an a priori psychology; if you reject the a priori, yet still buy that story, then you haven't doubted "all the branches and leaves of that root" sufficiently. The scientific story of perception involves photons and receptors and neurons and macromolecules and all that good stuff. It can't be used to call those things into doubt.

The correct view of all this is a (restricted) Quinean one: You have to accept the ontology of science as basic ontology. Science undermines our methods of determining other (metaphysical) ontologies (i.e., a priori reasoning); in everything we do, beginning with thought and perception, science should be our starting point. Nothing we know about thought and perception can undermine what we know about physics and chemistry and molecular biology because thought and perception are high-level areas of biology: everything we know about them is based on scientific ontology. No skeptical thesis that undermines science can be constructed (science, however, can still undermine our common-sense view of the world) and without skepticism epistemology reduces to neurobiology and sociology.

This is the difficult part: Even if it's empirically possible to totally deceive somebody, to run a simulation of them inside a supercomputer and manipulate their entire life and history, we still have no reason to doubt science. Personally I doubt that this is possible. I think the whole concept of a "subjectively real" simulation is a basic error of reasoning and I doubt that cognition and memory can be so arbitrarily manipulated anyway. Regardless, if my doubts turn out to be unfounded, it will be empirical science that proves them unfounded and the argument itself will only be as strong as empirical science itself. We cannot formulate this argument based on what we can merely imagine happening to ourselves.

Descartes had it backwards. If he'd thrown out "I think therefore I am" and taken the new physics and mathematics as his starting point he would have had a very powerful form of naturalism on his hands. A naturalism that doubts common sense and accepts science as the starting point of all reason. As I like to say, there's no distance between ourselves and the world: what happens at the retina is no more privileged than what happens at the microscope or the voltage clamp. We can just as easily take those as our starting point.

Comment by poke on The Genetic Fallacy · 2008-07-12T00:05:24.000Z · LW · GW

Douglas Knight, I'm not sure what predictions you're referring to. Statistical methods have a good pedigree. I take a correlation to be a correlation and try not to overinterpret it.

Comment by poke on The Genetic Fallacy · 2008-07-11T16:10:33.000Z · LW · GW

I'm very strict about this. I only accept claims that come out of science. I have a narrow definition of science based on lineage: you have to be able to trace it back to settled physics. Physics, chemistry, biochemistry, biology, molecular biology, neural biology, etc, all have strict lines of descent. Much of theoretical psychology, on the other hand (to give an example), does not; it's ab initio theorizing. Anything that is not science (so narrowly defined) I take to be noise. Systematic and flagrant abuse of the "genetic fallacy" is probably the quickest way to truth.

Comment by poke on Where Recursive Justification Hits Bottom · 2008-07-08T18:01:14.000Z · LW · GW

I think the best way to display the sheer mind-boggling absurdity of the "problem of induction" is to consider that we have two laws: the first law is the law science gives us for the evolution of a system and the second law simply states that the first law holds until time t and then "something else" happens. The first law is a product of the scientific method and the second law conforms to our intuition of what could happen. What the problem of induction is actually saying is that imagination trumps science. That's ridiculous. It's apparently very hard for people to acknowledge that what they can conceive of happening holds no weight over the world.
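The two-law setup can be made concrete with a toy sketch (my own illustration; the functions and the cutoff t = 100 are arbitrary placeholders, not anything from the comment):

```python
# Toy illustration of the two laws described above.
# law_1 is "the law science gives us"; law_2 agrees with it on every
# observation made so far, but stipulates that "something else" happens
# after an arbitrary future time t.

def law_1(time):
    # The scientifically induced law: here, just a constant (placeholder value).
    return 9.81

def law_2(time, t=100):
    # The gerrymandered alternative: identical to law_1 until time t,
    # then "something else".
    return law_1(time) if time < t else -9.81

# Every observation collected before t is fit equally well by both laws,
# so no amount of past data distinguishes them:
past_observations = range(100)
assert all(law_1(s) == law_2(s) for s in past_observations)
```

The point of the sketch is that a law_2 is trivially constructible for any law_1 and any t; it is a product of imagination alone, not of the experimental practice that produced law_1.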

The absurdity comes in earlier on though. You have to go way back to the very notion that science is mediated by human psychology; without that nobody would think their imagination portends the future. Let's say you have a robotic arm that snaps Lego pieces together. Is the way Lego pieces can snap together mediated by the control system of the robotic arm? No. You need the robotic arm (or something like it) to do the work but nothing about the robotic arm itself determines whether the work can be done. Science is just a more complex example of the robotic arm. Science requires an entity that can do the experiments and manipulate the equations but that does not mean that the experiments and equations are therefore somehow "mediated" by said entity. Nothing about human psychology is relevant to whether the science can be done.

You need to go taboo crazy, throw out "belief," "knowledge," "understanding," and the whole apparatus of philosophy of science. Think of it in completely physical terms. What science requires is a group of animals that are capable of fine-grained manipulation both of physical objects and of symbol systems. These animals must be able to coordinate their action, through sound or whatever, and have a means of long-term coordination, such as marks on paper. Taboo "meaning," "correspondence," etc. Science can be done in this situation. The entire history of science can be carried out by these entities under the right conditions given the right dynamics. There's no reason those dynamics have to include anything remotely resembling "belief" or "knowledge" in order to get the job done. They do the measurements, make the marks on a piece of paper that have, by convention, been agreed to stand for the measurements, and some other group can then use those measurements to make other measurements, and so forth. They have best practices to minimize the effect of errors entering the system, sure, but none of this has anything to do with "belief."

The whole story about "belief" and "knowledge" that philosophy provides us is a story of justification against skepticism. But no scientist has reason to believe in the philosophical tale of skepticism. We're not stuck in our heads. That makes sense if you're Descartes, if you're a dualist and believe knowledge comes from a priori reasoning. If you're a scientist, we're just physical systems in a physical world, and there's no great barrier to be penetrated. Physically speaking, we're limited by the accuracy of our measurements and the scale of the Universe, but we're not limited by our psychology except by limitations it imposes on our ability to manipulate the world (which aren't different in kind from the size of our fingers or the amount of weight we can lift). Fortunately our immediate environment has provided the kind of technological feedback loop that's allowed us to overcome such limitations to a high degree.

Justification is a pseudo-problem because skepticism is a pseudo-problem. Nothing needs to be justified in the philosophical sense of the term. How errors enter the system and compound is an interesting problem but, beyond that, the line from an experiment to your sitting reading a paper 50 years later is an unbroken causal chain, and if you want to talk about "truth" and "justification" then, beyond particular this-worldly errors, there's nothing to discuss. There's no general project of justifying our beliefs about the world. This or that experiment can go wrong in this or that way. This or that channel of communication can be noisy. These are all finite problems and there's no insurmountable issue of recursion involved. There's no buck to be passed. There might be a general treatment of these issues (in terms of Bayes or whatever) but let's not confuse such practical concerns with the alleged philosophical problems. We can throw out the whole philosophical apparatus without loss; it doesn't solve any problems that it didn't create to begin with.

Comment by poke on Is Morality Given? · 2008-07-06T18:14:43.000Z · LW · GW

If somebody said to me "morality is just what we do," and presented evidence that the whole apparatus of their moral philosophy was a coherent description of some subset of human psychology and sociology, then that would be enough for me. It's just a description of a physical system. Human morality would be what human animals do. Moral responsibility wouldn't be problematic; moral responsibility could be as physical as gravity if it were psychologically and sociologically real. "I have a moral responsibility" would be akin to "I can lift 200 lbs." The brain is complicated, sure, but so are muscles and bones and motor control. That wouldn't make it a preference or a mere want either. That's probably where we're headed. But I don't think metaethics is the interesting problem. The deeper problem is, I think, the empirical one: Do humans really display this sort of morality?

Comment by poke on Moral Complexities · 2008-07-04T17:33:07.000Z · LW · GW

My response to these questions is simply this: once the neurobiology, sociology and economics are in, these questions will either turn out to have answers or to be the wrong questions (the latter being the much more probable outcome). The only one I know how to answer is the following:

Do the concepts of "moral error" and "moral progress" have referents?

The answer being: Probably not. Reality doesn't much care for our ways of speaking.

A longer (more speculative) answer: The situation changes and we come up with a moral story to explain that change in heroic terms. I think there's evidence that most "moral" differences between countries, for example, are actually economic differences. When a society reaches a certain level of economic development the extended family becomes less important, controlling women becomes less important, religion becomes less important, and there is movement towards what we consider "liberal values." Some parts of society, depending on their internal dynamics and power structure, react negatively to liberalization and adopt reactionary values. Governments tend to be exploitative when a society is underdeveloped, because the people don't have much else to offer, but become less exploitative in productive societies because maintaining growth has greater benefits. Changes to lesser moral attitudes, such as notions of what is polite or fair, are usually driven by the dynamics of interacting societies (most countries are currently pushed to adopt Western attitudes) or certain attitudes becoming redundant as society changes for other reasons.

I don't give much weight to people's explanations as to why these changes happen ("moral progress"). Moral explanations are mostly confabulation. So the story that we have of moral progress, I maintain, is not true. You can try to find something else and call it "moral progress." I might argue that people are happier in South Korea than North Korea, and that's probably true. But to make it a general rule would be difficult: baseline happiness changes. Most Saudi Arabian women would probably feel uncomfortable if they were forced to go out "uncovered." I don't think moral stories can be easily redeemed in terms of harm or happiness. At a more basic level, happiness just isn't the sort of thing most moral philosophers take it to be: it's not something I can accumulate and it doesn't respond in the ways we want it to. It's transient and it doesn't track supposed moral harm very well (the average middle-class Chinese is probably more traumatized when their car won't start than by the political oppression they supposedly suffer). Other approaches to redeeming the kinds of moral stories we tell are similarly flawed.

Comment by poke on The Bedrock of Fairness · 2008-07-03T16:50:22.000Z · LW · GW

This dialogue leads me to conclude that "fairness" is a form of social lubricant that ensures our pies don't get cold while we're busy arguing. The meta-rule for fairness rules would then be: (1) fast; (2) easy to apply; and (3) everybody gets a share.

Comment by poke on I'd take it · 2008-07-02T14:41:49.000Z · LW · GW

(1) Buy a country. You could probably bribe your way into becoming dictator of North Korea or Myanmar or somewhere similar.

(2) Build a huge army.

(3) Crash the US economy.

(4) Take over the world.

(5) Profit.

Comment by poke on Created Already In Motion · 2008-07-01T16:54:20.000Z · LW · GW

You can fully describe the mind/brain in terms of dynamics without reference to logic or data. But you can't do the reverse. I maintain that the dynamics are all that matters and the rest is just folk theory tarted up with a bad analogy (computationalism).

Comment by poke on What Would You Do Without Morality? · 2008-06-30T01:13:00.000Z · LW · GW

For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?

Sure. I could get away with doing all sorts of things. No doubt the initial novelty and power rush would cause me to do some things that would be quite perverted and that I'd feel guilty about. I don't think that's the same as a world without morality though. You seem to view morality as a constraint whereas I view it as a folk theory that describes a subset of human behavior. (I take Eliezer to mean that we're rejecting morality at an intellectual level rather than rewiring our brains.)

Comment by poke on What Would You Do Without Morality? · 2008-06-29T15:00:15.000Z · LW · GW

I'd do everything I do now. You can't escape your own psychology and I've already expressed my skepticism about the efficacy of moral deliberation. I'll go further and say that nobody would act any differently. Sure, after you shout it from the rooftops, maybe there will be an upsurge in crime and the demand for black nail polish for a month or so, but when the dust settled nothing would have changed. People would still cringe at the sight of blood and still react to the pain of others just as they react to their own pain. People would still experience guilt. People would still find it hard to lie to loved ones. People would still eat when they got hungry and drink when they got thirsty. We vastly overestimate our ability to alter our own behavior.

Comment by poke on [deleted post] 2008-06-28T17:52:57.000Z

I think you have to be careful when you say,

trying to use your brain to understand something that is not like your brain.

We can't use our brains to understand brains that are like our brains. We don't have that kind of access. Empathy is a function and not something you just get for free on account of similarity. Where we have obvious faculties in this area - understanding the emotional state of another person - I don't see any strong differences between same-sex and opposite-sex empathy. We can all tell when a member of the opposite sex is distressed; the hard part is figuring out why. Where there are such differences - as with motivations - I don't see much evidence that we're particularly talented at getting it right with members of the same sex either.

Anecdotally, the few times I've had to wrestle with the motivations of a member of the same sex to the same degree one does in relationships on a regular basis, they've been completely opaque to me. But it's rare that a member of the same sex is in the position to really screw with you to the point that you dwell on their motivations. Nor are we particularly concerned with pleasing them or self-conscious about how they perceive us. If you listen to a man or woman talk about the motivations of a problematic same sex family member, an area where we often do have volatile relationships, it can be quite similar to how men and women talk about their partners (i.e., total confusion, disbelief, etc). Even the way people talk about their bosses can be similar.

So while I'd never claim to understand women, I'd challenge the claim that I understand men.

Comment by poke on The Design Space of Minds-In-General · 2008-06-25T15:44:47.000Z · LW · GW

So is the reason I should believe this space of minds-in-general exists at all going to come in a later post?

Comment by poke on Surface Analogies and Deep Causes · 2008-06-22T15:02:56.000Z · LW · GW

I can certainly agree that you rely on this sort of reasoning a lot. But I don't think what you do is much of an improvement over what you're criticizing. You just take words and make "surface analogies" with "cognitive algorithms." The useful thing about these "cognitive algorithms" is that, being descriptions of "deep causes" (whatever those are) rather than anything we know to actually exist in the world (like, say, neurons), you can make them do whatever you please with total disregard for reality.

Saying that a neural network never gets at "intelligence" is little different from saying the descriptions of biology in textbooks never capture "life." Without a theory of "life" how will we ever know our biological descriptions are correct? The answer is as blatantly obvious as it is for neural networks: by comparing them to actual biological systems. We call this "science." You may have heard of it. Of course, you could say, "What if we didn't have biology to compare it to, how then would you know you have the correct description of life?" But... well, what to say about that? If there were no biology nobody would talk about life. Likewise, if there were no brains, nobody would be talking about intelligence.

Comment by poke on The Ultimate Source · 2008-06-15T17:16:21.000Z · LW · GW

You essentially posit a "decision algorithm" to which you ascribe the sensations most people attribute to free will. I don't think this is helpful and it seems like a cop-out to me. What if the way the brain makes decisions doesn't translate well onto the philosophical apparatus of possibility and choice? You're just trading "suggestively named LISP tokens" for suggestively named algorithms. But even if the brain does do something we could gloss in technical language as "making choices among possibilities" there still aren't really possibilities and hence choices.

What it all comes down to, as you acknowledge (somewhat), is redefining terms. But if you're going to do that, why not say, "none of this really matters, use language how you will"? Actually, a lot of your essays have these little disclaimers at the end, where you essentially say "at least that's how I choose to use these words." Why not headline with that?

There are basically three issues with any of these loaded terms - free will, choice, morality, consciousness, etc - that need to be addressed: (1) the word as a token and whether we want to define it and how; (2) matters the "common folk" want reassurance on, such as whether they should assume a fatalistic outlook in the face of determinism, whether their neighbors will go on killing sprees if morality isn't made out of quarks, etc; (3) the philosophical problem of free will, problem of morality, etc.

Philosophers have made a living trying to convince us that their abstract arguments have some relevance to the concerns of the common man and that if we ignore them we're being insensitive or reductionist and are guilty of scientism and fail to appreciate the relevance of the humanities. That's egregious nonsense. Really these are three entirely separate issues. I get the impression that you actually think these problems are pseudo-problems but at the same time you tend to run issues 2 and 3 together in your discussions. Once you separate them out, though, I think the issues become trivial. It's obvious determinism shouldn't make us fatalistic because we weren't fatalistic before and nothing has changed, it's obvious we won't engage in immoral behavior if morals aren't "in the world" since we weren't immoral before and nothing has changed, etc.

Comment by poke on Causality and Moral Responsibility · 2008-06-14T15:33:57.000Z · LW · GW

michael vassar,

I think you misunderstand me. I'm not being cynical; I'm trying to demonstrate that moral dilemmas and moral deliberation aren't empirically established. I tried to do this, first, by pointing out that what most people consider the subject of morality differs substantially from the subject of academic philosophers and, second, by arguing that the type of moral reasoning found in philosophy isn't found in society at large and doesn't influence it. People really do heroically rescue orphans from burning buildings in real life and they do it without viewing the situation as a moral dilemma and without moral deliberation. I don't think a world where moral philosophy turns out to be perfectly worthless is necessarily a bad one.

Comment by poke on Possibility and Could-ness · 2008-06-14T15:16:17.000Z · LW · GW

The type of possibility you describe is just a product of our ignorance about our own or others' psychology. If I don't understand celestial mechanics I might claim that Mars could be anywhere in its orbit at any time. If somebody then came along and taught me celestial mechanics I could then argue that Mars could still be anywhere if it wanted to. This is just saying that Mars could be anywhere if Mars were different. It gets you exactly nothing.

Comment by poke on Causality and Moral Responsibility · 2008-06-13T16:53:27.000Z · LW · GW

michael vassar,

I'm skeptical as to whether the affirmed moralities play a causal role in their behavior. I don't think this is obvious. Cultures that differ in what we call moral behavior also differ in culinary tastes but we don't think one causes the other; it's possible that they have their behaviors and they have their explanations of their behaviors and the two do not coincide (just as astrology doesn't coincide with astronomy). I'm also therefore skeptical that changes over time are caused by moral deliberation; obviously if morality plays no causal role in behavior it cannot change behavior.

What anthropologists call moral behavior and what most non-philosophers would recognize as moral behavior tends to coincide with superstitions more than weighty philosophical issues. Most cultures are very concerned with what you eat, how you dress, who you talk to, and so forth, and take these to be moral issues. Whether one should rescue a drowning child if one is a cancer researcher is not as big a concern as who you have sex with and how you do it. How much genuine moral deliberation is really going on in society? How much influence do those who engage in genuine moral deliberation (i.e., moral philosophers) have on society? I think the answers are close to "none" and "not at all."

Comment by poke on Causality and Moral Responsibility · 2008-06-13T16:14:18.000Z · LW · GW

I agree that determinism doesn't undermine morality in the way you describe. I remain, however, a moral skeptic (or, perhaps more accurately, a moral eliminativist). I'm skeptical that moral dilemmas exist outside of thought experiments and the pages of philosophy books and I'm skeptical that moral deliberation achieves anything. Since people are bound to play out their own psychology, and since we're inherently social animals and exist in a social environment, I find it unlikely that people would behave substantially differently if we eliminated "morality" from our concept space. In that respect I think morality is an epiphenomenon.

Some people want to take part of our psychology and label it "morality" or take the sorts of diplomacy that lead us to cooperate for our mutual benefit and label it "morality" but they're essentially moral skeptics. They're just flexible with labels.

Comment by poke on Against Devil's Advocacy · 2008-06-09T15:36:39.000Z · LW · GW
I picked up an intuitive sense that real thinking was that which could force you into an answer whether you liked it or not, and fake thinking was that which could argue for anything.

This is very dangerous. I think a great example of its danger is Colin McGinn (popularizer of mysterianism) in his The Making of a Philosopher. He says that what attracted him to philosophy was the ability to reason one's way to contrarian opinions. Being forced to an answer itself has an appeal. This is a major problem in the transhumanist and libertarian communities, for example, where bullet biting is much more highly regarded than having your facts straight.

Comment by poke on Thou Art Physics · 2008-06-08T17:37:00.000Z · LW · GW

Got a better one?

Biology and physics. Google Tim Van Gelder for a philosophical perspective on the benefits of using dynamics to explain cognition. I think he has papers online.

Presumably your brain is processing symbols right now, as your read this.

I think there's an important distinction between being able to manipulate symbols and engaging in symbol processing. After all, I can use a hammer, but nobody thinks there are hammers in my brain.


But computer programmers don't need to understand the hardware, either. Do you think they crack open metallurgy, electronics, and applied physics textbooks to accomplish their goals?

Computers are specifically designed so that we don't have to understand the hardware. That's why I said it's spurious to call anything but an artifact a computer. You don't need to understand the underlying physics because engineers have carefully designed the system that way. You don't have to understand how your washing machine or your VCR works either.

If you don't need to understand every level of hardware to manipulate electronic computational devices, why do you think anyone would need to understand the physics all the way down to deal with the mind?

I don't think we need to understand the physics all the way down in a practical sense. We've already built our way up from physics through chemistry to molecular biology and the behavior of the cell. We can talk about the behavior of networks of cells too. The difference is that it's the underlying physical properties that make this abstraction possible whereas, in a computer, the system has been specifically designed to have implementation layers with reference to a set of conventions. In a loose sense, it's accurate to say we understand the physics all the way down in a biological system, because the fact of abstraction is a part of the system (i.e., the molecules interact in a way that allows us to treat them statistically).

Comment by poke on Bloggingheads: Yudkowsky and Horgan · 2008-06-08T01:25:31.000Z · LW · GW

Eliezer, serious question, why don't you re-brand your project as designing a Self-Improving Automated Science Machine rather than a Seed AI and generalize Friendly AI to Friendly Optimization (or similar)? It seems to me that: (a) this would be more accurate since it's not obvious (to me at least) that individual humans straightforwardly exhibit the traits you describe as "intelligence"; and (b) you'd avoid 90% of the criticism directed at you. You could, for example, avoid the usual "people have been promising AI for 60 years" line of argument.

Comment by poke on Thou Art Physics · 2008-06-07T23:48:00.000Z · LW · GW

mtraven, The computer started as an attempt to mechanize calculation. There's a tradition in mathematics, going back to the Greeks and popular with mathematicians, that mathematics is exemplary reasoning. It's likely that identifying computation and thought builds off that. If calculation/mathematics is exemplary thought and computers mechanize calculation then computers mechanize thought.

I would argue instead that mathematics is actually exemplary (albeit creative) tool-use. This is especially stark if you look at the original human computers Caledonian mentioned: they worked from rules and lacked knowledge of the overall calculation they were taking part in. I think computers mechanized precisely what they mechanized and nothing more: the calculation and not the person performing it.

I disagree that it's our best model; I find it too misleading. I think you identify why it's popular though: computationalism lets us sneak dualism through the back door. Supposedly one can now be a materialist and hold that the mind is software instantiated on the hardware of the brain. That's an extremely useful premise if you're a philosopher or a psychologist who doesn't want to crack open a biology textbook. Also, the evidence that the brain engages in symbol processing is very weak, so I don't think it's necessary to invoke computationalism there.

I don't mean to imply that computer science only applies to computers though. We can apply the tools of computer science to the real world. We can talk about the computational limits of physical systems and so forth.

Comment by poke on Thou Art Physics · 2008-06-07T19:03:00.000Z · LW · GW

mtraven, I think your example demonstrates well why computationalism rests on a basic error. The type-token relationship between A-ness and instances of the letter "A" is easily explained: what constitutes A-ness is a social convention and the various diverse instances of "A" are produced as human artifacts with reference to that convention. They all exhibit A-ness because we made them that way. Computers are like this too. Computers can be made from different substrates because they only have to conform to our conventions of how a computer should operate.

The brain is not a computer. Nothing that is not an artifact can possibly be a computer in any meaningful sense (just like a bunch of stones that fall into a pattern resembling the letter "A" aren't the letter "A" in any meaningful sense). It's completely meaningless to call something a "computer" in the way computationalists do. It would make as much sense for me to call the coffee cup resting on my desk an "equation" as it does to call a brain a computer. The coffee cup can be described by an equation. If I throw the coffee cup, for example, I can describe its motion using the standard equations of rigid body dynamics. But the equations I wrote out would not be a coffee cup. The equations are just marks that by convention stand for the motion of a coffee cup.

For some reason, which can probably only be explained through some mix of historical contingency and malicious intentions, people have come up with the idea that when I take that equation and use numerical methods to step through it in a computer program it suddenly becomes the thing it describes. This is rather like thinking a drawing becomes the object it depicts if I turn it into a flip book. Actually, this analogy is very accurate, because a computer program is essentially an equation in flip-book form. Anything that can be said about a computer program can also be said of an equation scrawled on a napkin. So, no, you're not a computer or a computation or an equation; you're a physical object.

Comment by poke on Timeless Control · 2008-06-07T17:32:24.000Z · LW · GW

Eliezer, you're spot on with the "Determinator." The modern free will debate has its roots not in the clockwork universe of Newtonianism but in the supposed problem of God's omnipotence and omniscience. The problem of free will was originally formulated in terms of a Determinator - God - who chose and immanently caused the future. The question was "How can we also have free will?" and free will was, of course, also an important concept in Christian theology (we're made in God's image and therefore choose and cause our futures just like God does). As is often the case in philosophy, the current debate is just a secularization of the theological debate; they just switch "God" for "Universe", "soul" for "essential property", etc, and carry on having the same arguments.

And second, you can't compute the Future from the Past, except by also computing something that looks exactly like the Present; which computation just creates another copy of the Block Universe (if that statement even makes any sense), it does not affect any of the causal relations within it.

I'm not sure that statement does make sense. It sounds a bit too mystical to me. But it'd be interesting to look at it from a thermodynamic perspective. You can't predict the future from the past without doing work in the present. Perhaps the work needed would always be greater than or equal to that required for the system you're predicting to just play out regardless?

Comment by poke on Thou Art Physics · 2008-06-06T21:03:17.000Z · LW · GW

Crush on Lyle,

But "if your actions are determined by prior causes" then whether or not you think those actions are blameworthy is determined by prior causes too. The act of punishing criminals is subject to the same physics that crime is. So is talking about the act of punishing criminals. And so on.

I agree. But no philosopher is going to bite that bullet. They'd be out of a job.

Comment by poke on Thou Art Physics · 2008-06-06T19:20:59.000Z · LW · GW

The "Why punish criminals?" question has a long history. The idea is that if your actions are determined by prior causes then you're no longer blameworthy. I think for most people deterrence would be morally unacceptable if they did not also consider criminals blameworthy. Why not punish their friends and families if that would also act as an effective deterrent? Actually this question - how can we delimit external and internal causes - is more interesting to me than general concepts of free will (short answer: we can't). If you want a nice example of bullet-biting in this area check out Pereboom's Living without Free Will. He argues that we should reject blameworthiness and praiseworthiness and considers it a good thing.

Comment by poke on Thou Art Physics · 2008-06-06T16:00:23.000Z · LW · GW

"Free will" is one of those concepts in philosophy where I have absolutely no idea what it's supposed to be about. I've read a few works on the subject and they all assure me that everyone is convinced they have it. I think the lesson to be learned is that words and concepts have histories of their own and frequently fall out of touch with reality completely. I think "free will" is like that.

Comment by poke on Why Quantum? · 2008-06-04T16:17:54.000Z · LW · GW

Lots of physicists don't believe in many-worlds because they believe in some other theory or interpretation. Parsimony is often used to dismiss many-worlds; mainly because many-worlds doesn't make any predictions so it's difficult to refute on other grounds. That doesn't make it true of course. If you have reason to believe that some other theory or interpretation is worth pursuing then you probably won't spend much time refuting many-worlds. So parsimony will be the lazy way to dismiss many-worlds but not the reason you hold another view.

The reason most physicists working in the foundations of quantum mechanics don't believe in many-worlds is that they take a different view of one or more of the assumptions you made (locality, hidden variables, wave-function collapse, etc), not that they don't understand parsimony. They're also in a far better position to judge those assumptions than you are (even by your own admission). So even if I had no opinion on the subject I wouldn't accept your argument. Your argument for many-worlds relies on unsupported claims about why physicists reject many-worlds.

If I could level a general criticism at your essays it would be this: your focus on other people's modes of reasoning and biases makes you excessively prone to straw-man arguments.

Comment by poke on Timeless Identity · 2008-06-03T16:39:38.000Z · LW · GW

I knew this was where we were headed when you started talking about zombies and I knew exactly what the error would be.

Even if I accept your premises of many-worlds and timeless physics, the identity argument still has exactly the same form as it did before. Most people are aware that atomic-level identity is problematic even if they're not aware of the implications of quantum physics. They know this because they consume and excrete material. Nobody who's thought about this for more than a few seconds thinks their identity lies in the identity of the atoms that make up their bodies.

Your view of the world actually makes it easier to hold a position of physical identity. If you can say "this chunk of Platonia is overlapping computations that make up me" I can equally say "this chunk of Platonia is overlapping biochemical processes that make up me." Or I can talk about the cellular level or whatever. Your physics has given us freedom to choose an arbitrary level of description. So your argument reduces to the usual subjectivist argument for psychological identity (i.e., "no noticeable difference") without the physics doing any work.

Comment by poke on A Premature Word on AI · 2008-06-01T00:54:53.000Z · LW · GW

Eliezer: As I said, there are plenty of circular definitions of intelligence, such as defining it as a "powerful optimization process" that homes in on outcomes you've predefined as being the product of intelligence (which is what your KnowabilityOfAI appears to do). Perhaps for your needs such a (circular) operational definition would suffice: take the set of artifacts and work backwards. That hardly seems helpful in designing any sort of workable software system though.

Re: modeling the human brain. Modeling the human brain would involve higher levels of organization. The point is that those higher levels of organization would be actual higher levels of organization that exist in real life and not the biologically implausible fantasies "AI researchers" have plucked out of thin air based on a mixture of folk psychology, introspection and wishful thinking.