The scourge of perverse-mindedness
post by simplicio · 2010-03-21T07:08:28.304Z · LW · GW · Legacy · 255 comments
This website is devoted to the art of rationality, and as such, is a wonderful corrective to wrong facts and, more importantly, wrong procedures for finding out facts.
There is, however, another type of cognitive phenomenon that I’ve come to consider particularly troublesome, because it militates against rationality in the irrationalist, and fights against contentment and curiosity in the rationalist. For lack of a better word, I’ll call it perverse-mindedness.
The perverse-minded do not necessarily disagree with you about any fact questions. Rather, they feel the wrong emotions about fact questions, usually because they haven’t worked out all the corollaries.
Let’s make this less abstract. I think the following quote is preaching to the choir on a site like LW:
“The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.”
-Richard Dawkins, "God's Utility Function," Scientific American (November, 1995).
Am I posting that quote to disagree with it? No. Every jot and tittle of it is correct. But allow me to quote another point of view on this question.
“We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples.”
This quote came from an ingenious and misguided man named Alan Watts. You will not find him a paragon of rationality, to put it mildly. And yet, let’s consider this particular statement on its own. What exactly is wrong with it? Sure, you can pick some trivial holes in it – life would not have arisen without the sun, for example, and Homo sapiens was not inevitable in any way. But the basic idea – that life and consciousness are a natural and possibly inevitable consequence of the way the universe works – is indisputably correct.
So why would I be surprised to hear a rationalist say something like this? Note that it is empirically indistinguishable from the more common view of “mankind confronted by a hostile universe.” This is the message of the present post: it is not only our knowledge that matters, but also our attitude to that knowledge. I believe I share a desire with most others here to seek truth naively, swallowing the hard pills when it becomes necessary. However, there is no need to turn every single truth into a hard pill. Moreover, sometimes the hard pills also come in chewable form.
What other fact questions might people regard in a perverse way?
How about materialism, the view that reality consists, at bottom, in the interplay of matter and energy? This, to my mind, is the biggie. To come to facilely gloomy conclusions based on materialism seems to be practically a cottage industry among Christian apologists and New Agers alike. Since the claims are all so similar to each other, I will address them collectively.
“If we are nothing but matter in motion, mere chemicals, then:
- Life has no meaning;
- Morality has no basis;
- Love is an illusion;
- Everything is futile (there is no immortality);
- Our actions are determined; we have no free will;
- et cetera.”
The usual response from materialists is to say that an argument from consequences isn’t valid – if you don’t like the fact that X is just matter in motion, that doesn’t make it false. While eminently true, as a rhetorical strategy for convincing people who aren’t already on board with our programme, it’s borderline suicidal.
I have already hinted at what I think the response ought to be. It is not necessarily a point-by-point refutation of each of these issues individually. The simple fact is, not only is materialism true, but it shouldn’t bother anyone who isn’t being perverse about it, and it wouldn’t bother us if it had always been the standard view.
There are multiple levels of analysis in the lives of human beings. We can speak of societies, move to individual psychology, thence to biology, then chemistry… this is such a trope that I needn’t even finish the sentence.
However, the concerns of, say, human psychology (as distinct from neuroscience), or morality, or politics, or love, are not directly informed by physics. Some concepts only work meaningfully on one level of analysis. If you were trying to predict the weather, would you start by modeling quarks? Reductionism in principle I will argue for until the second coming (i.e., forever). Reductionism in practice is not always useful. This is the difference between proximate and ultimate causation. The perverse-mindedness I speak of consists in leaping straight from behaviour or phenomenon X to its ultimate cause in physics or chemistry. Then – here’s the “ingenious” part – declaring that, since the ultimate level is devoid of meaning, morality, and general warm-and-fuzziness, so too must be all the higher levels.
What can we make of someone who says that materialism implies meaninglessness? I can only conclude that if I took them to see Seurat’s painting “A Sunday Afternoon on the Island of La Grande Jatte,” they would earnestly ask me what on earth the purpose of all the little dots was. Matter is what we’re made of, in the same way as a painting is made of dried pigments on canvas. Big deal! What would you prefer to be made of, if not matter?
It is only by the contrived unfavourable contrast of matter with something that doesn’t actually exist – soul or spirit or élan vital or whatever – that somebody can pull off the astounding trick of spoiling your experience of a perfectly good reality, one that you should feel lucky to inhabit.
I worry that some rationalists, while rejecting wooly dualist ideas about ghosts in the machine, have tacitly accepted the dualists’ baseless assumptions about the gloomy consequences of materialism. There really is no hard pill to swallow.
What are some other examples of perversity? Eliezer has written extensively on another important one, which we might call the disappointment of explicability. “A rainbow is just light refracting.” “The aurora is only a bunch of protons hitting the earth’s magnetic field.” Rationalists are, sadly, not immune to this nasty little meme. It can be easily spotted by tuning your ears to the words “just” and “merely.” By saying, for example, that sexual attraction is “merely” biochemistry, you are telling the truth and deceiving at the same time. You are making a (more or less) correct factual statement, while Trojan-horsing an extraneous value judgment into your listener’s mind as well: “chemicals are unworthy.” On behalf of chemicals everywhere, I say: Screw you! Where would you be without us?
What about the final fate of the universe, to take another example? Many of us probably remember the opening scene of Annie Hall, where little Alvy tells the family doctor he’s become depressed because everything will end in expansion and heat death. “He doesn’t do his homework!” cries his mother. “What’s the point?” asks Alvy.
Although I found that scene hilarious, I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit. By all means, be depressed about your chronic indigestion or the Liberal Media or teenagers on your lawn, but not about an event that will happen in 10^14 years, involving a dramatis personae of burnt-out star remnants. Puh-lease. There is infinitely more tragedy happening every second in a cup of buttermilk.
The art of not being perverse consists in seeing the same reality as others and agreeing about facts, but perceiving more in an aesthetic sense. It is the joy of learning something that’s been known for centuries; it is appreciating the consilience of knowledge without moaning about reductionism; it is accepting nature on her own terms, without fatuous navel-gazing about how unimportant you are on the cosmic scale. If there is a fact question at stake, take no prisoners; but you don’t get extra points for unnecessary angst.
255 comments
Comments sorted by top scores.
comment by PhilGoetz · 2010-03-21T19:47:35.260Z · LW(p) · GW(p)
I worry that some rationalists, while rejecting wooly dualist ideas about ghosts in the machine, have tacitly accepted the dualists’ baseless assumptions about the gloomy consequences of materialism.
There actually is a way in which they're right.
My first thought was, "You've got it backwards - it isn't that materialism isn't gloomy; it's that spiritualism is even gloomier." Because spiritual beliefs - I'm usually thinking of Christianity when I say that - don't really give you oughtness for free; they take the arbitrary moral judgements of the big guy in the sky and declare them correct. And so you're not only forced to obey this guy; you're forced to enjoy obeying him, and have to feel guilty if you have any independent moral ideas. (This is why Christianity, Islam, communism, and other similar religions often make their followers morally-deficient.)
But what do I mean by gloomier? I must have some baseline expectation which both materialism and spirituality fall short of, to feel that way.
And I do. It's memories of how I felt when I was a Christian. Like I was a part of a difficult but Good battle between right and wrong.
Now, hold off for a moment on asking whether that view is rational or coherent, and consider a dog. A dog wants to make its master happy. Dogs have been bred for thousands of years specifically not to want to challenge their master, or to pursue their own goals, as wolves do. When a dog can be with its master, and do what its master tells it to, and see that its master is pleased, the dog is genuinely, tail-waggingly happy. Probably happier than you or I are even capable of being.
A Christian just wants to be a good dog. They've found a way to reach that same blissful state themselves.
The materialistic worldview really is gloomy compared to being a dog.
And we don't have any way to say that we're right and they're wrong.
Factually, of course, they're wrong. But when you're a dog, being factually wrong isn't important. Obeying your master is important. Judged by our standards of factual correctness, we're right and they're wrong. Judged by their standards of being (or maybe feeling like) a good dog, they're right and we're wrong.
One of the problems with CEV, perhaps related to wireheading, is that it would probably fall into a doglike attractor. Possibly you can avoid it by writing into the rules that factual correctness trumps all other values. I don't think you can avoid it that easily. But even if you could, by doing so, you've already decided whose values you're going to implement, before your FAI has even booted up; and the whole framework of CEV is just a rationalization to excuse the fact that the world is going to end up looking the way you want it to look.
Replies from: MichaelVassar, orthonormal, simplicio
↑ comment by MichaelVassar · 2010-03-21T22:10:14.894Z · LW(p) · GW(p)
I disagree with most of this but vote it up for being an excellent presentation of a complex and important position that must be addressed (though as noted, I think it can be) and hasn't been adequately addressed to satisfy (or possibly even to be understood by) all or most LW readers.
Phil, I suggest that you try to look at Christian and secular children (and possibly those of some other religions) and decide empirically whether they really seem to differ so much in happiness or well-being. Looking at people in a wide range of cultures and situations would in general be helpful, but especially that contrast (or mostly, I suspect, the lack of contrast).
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-21T23:16:30.622Z · LW(p) · GW(p)
Phil, I suggest that you try to look at Christian and secular children (and possibly those of some other religions) and decide empirically whether they really seem to differ so much in happiness or well-being.
Children are where not to look. Dogs psychologically resemble wolf-pups; they are childlike. Religion, like the breeding of dogs, is neotenous; it allows retention of childlike features into adulthood. To see the differences I'm talking about, you therefore need to look at adults.
Anyway, if you're asking me to judge based on who is the happiest, you've taken the first step down the road to wireheading. Dogs have been genetically reprogrammed to develop in a way that wires their value system to getting a pat on the head from their master.
The basic problem here is how we can simultaneously preserve human values, and not become wireheads, when some people are already wireheads. The religious worldview I spoke of above is a kind of wireheading. Would CEV dismiss it as wireheading? If so, what human values aren't wireheading? How do we walk the tightrope between wireheads and moral realists? Is there even a tightrope to walk there?
↑ comment by orthonormal · 2010-03-21T20:28:59.295Z · LW(p) · GW(p)
IAWYC except for the last paragraph. While CEV isn't guaranteed to be a workable concept, and while it's dangerous to get into the habit of ruling out classes of counterargument by definition, I think there's a problem with criticizing CEV on the grounds "I think CEV will probably go this way, but I think that way is a big mistake, and I expect we'd all see it as a mistake even if we knew more, thought faster, etc." This is exactly the sort of error the CEV project is built to avoid.
Replies from: Rain, PhilGoetz
↑ comment by Rain · 2010-03-21T21:24:45.124Z · LW(p) · GW(p)
I was a strong proponent of CEV as the most-correct theory I had heard on the topic of what goals to set, but I've become more skeptical as Eliezer started talking about potential tweaks to avoid insane results like the dog scenario above.
It seems similar in nature to the rule-building method of goal definition, where you create a list of rules, an approach which has been roundly criticized as nearly impossible to do correctly.
Replies from: Strange7, MichaelVassar
↑ comment by Strange7 · 2010-03-21T22:04:33.670Z · LW(p) · GW(p)
That's why I prefer the 'would it satisfy everyone who ever lived?' strategy over CEV. Humanity's future doesn't have to be coherent. Coherence is something that happens at evolutionary choke-points, when some species dies back to within an order of magnitude of the minimum sustainable population. When some revolutionary development allows unprecedented surpluses, the more typical response is diversification.
Consider the trilobites. If there had been a trilobite-Friendly AI using CEV, invincible articulated shells would comb carpets of wet muck with the highest nutrient density possible within the laws of physics, across worlds orbiting every star in the sky. If there had been a trilobite-engineered AI going by 100% satisfaction of all historical trilobites, then trilobites would live long, healthy lives in a safe environment of adequate size, and the Cambrian explosion (or something like it) would have proceeded without them.
Most people don't know what they want until you show it to them, and most of what they really want is personal. Food, shelter, maybe a rival tribe that's competent enough to be interesting but always loses when something's really at stake. The option of exploring a larger world, seldom exercised. It doesn't take a whole galaxy's resources to provide that, even if we're talking trillions of people.
Replies from: orthonormal, PhilGoetz
↑ comment by orthonormal · 2010-03-21T22:20:53.082Z · LW(p) · GW(p)
I realized a pithy way of stating my objection to that strategy: given how unlikely I think it is that the test could be passed fairly by a Friendly AI, an AI passing the test is stronger evidence that the AI is cheating somehow than that the AI is Friendly.
Replies from: Strange7
↑ comment by Strange7 · 2010-03-21T23:26:09.178Z · LW(p) · GW(p)
If the AI is programmed so that it genuinely wants to pass the test (or the closest feasible approximation of the test) fairly, cheating isn't an issue. This isn't a matter of fast-talking it's way out of a box. A properly-designed AI would be horrified at the prospect of 'cheating,' the way a loving mother is horrified at the prospect of having her child stolen by fairies and replaced with a near-indistinguishable simulacrum made from sticks and snow.
Replies from: PhilGoetz, orthonormal
↑ comment by PhilGoetz · 2010-03-21T23:37:27.385Z · LW(p) · GW(p)
It is probably possible to pass that test by exploiting human psychology. It is probably impossible to do well on that test by trying to convince humans that your viewpoint is right.
You're talking past orthonormal. You're assuming a properly-designed AI. He's saying that accomplishing the task would be strong evidence of unfriendliness.
↑ comment by orthonormal · 2010-03-22T00:07:37.018Z · LW(p) · GW(p)
What Phil said, and also:
Taboo "fairly"— this is another word the specification of which requires the whole of human values. Proving that the AI understands what we mean by fairness and wants to pass the test fairly is no easier than proving it Friendly in the first place.
Replies from: Strange7
↑ comment by Strange7 · 2010-03-22T01:33:55.196Z · LW(p) · GW(p)
"Fairly" was the wrong word in this context. Better might be 'honest' or 'truthful.' A truthful piece of information is one which increases the recipient's ability to make accurate predictions; an honest speaker is one whose statements contain only truthful information.
Replies from: RobinZ
↑ comment by RobinZ · 2010-03-22T02:23:10.840Z · LW(p) · GW(p)
the recipient's ability to make accurate predictions
About what? Anything? That sounds very easy.
Remember Goodhart's Law - what we want is G, Good, not any particular G* normally correlated with Good.
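[To make the G versus G* point concrete, here is a minimal illustrative sketch in Python; the quality/gaming setup and all numbers are invented for the example and are not part of the original discussion. A proxy that tracks the true goal across normal cases can still be badly gamed by whatever maximizes it.]

```python
import random

random.seed(0)

# Each candidate has a true quality (what we actually want, G) and a
# "gaming" knob that inflates the measured score (the proxy, G*) without
# adding any real quality.
def G(quality, gaming):
    return quality                      # the thing we really care about

def G_star(quality, gaming):
    return quality + gaming             # the measurable stand-in

# In the normal population, gaming is small, so G* tracks G closely...
population = [(random.gauss(50, 10), random.gauss(0, 1)) for _ in range(1000)]
# ...but an unrestricted optimizer will find the degenerate corner where
# the proxy is huge and the true goal is not served at all.
population.append((30.0, 1000.0))

best_by_proxy = max(population, key=lambda c: G_star(*c))
best_by_goal = max(population, key=lambda c: G(*c))

print("chasing G*:", best_by_proxy, "-> true value G =", G(*best_by_proxy))
print("chasing G :", best_by_goal, "-> true value G =", G(*best_by_goal))
```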
Replies from: Strange7
↑ comment by Strange7 · 2010-03-22T02:50:52.114Z · LW(p) · GW(p)
That sounds very easy.
Walking from Helsinki to Saigon sounds easy, too, depending on how it's phrased. Just one foot in front of the other, right?
Humans make predictions all the time. Any time you perceive anything and are less than completely surprised by it, that's because you made a prediction which was at least partly successful. If, after receiving and assimilating the information in question, any of your predictions is reduced in accuracy, that is, if any part of that map becomes less closely aligned with the territory, then the information was not perfectly honest. If you ignore or misinterpret it for whatever reason, even when it's in some higher sense objectively accurate, that still fails the honesty test.
A rationalist should win; an honest communicator should make the audience understand.
Given the option, I'd take personal survival even at the cost of accurate perception and ability to act, but it's not a decision I expect to be in the position of needing to make: an entity motivated to provide me with information that improves my ability to make predictions would not want to kill me, since any incoming information that causes my death necessarily also reduces my ability to think.
Replies from: orthonormal, RobinZ
↑ comment by orthonormal · 2010-03-22T03:16:11.608Z · LW(p) · GW(p)
What Robin is saying is, there's a difference between
"metrics that correlate well enough with what you really want that you can make them the subject of contracts with other human beings", and
"metrics that correlate well enough with what you really want that you can make them the subject of a transhuman intelligence's goals".
There are creative avenues of fulfilling the letter without fulfilling the spirit that would never occur to you but would almost certainly occur to a superintelligence, not because xe is malicious, but because they're the optimal way to achieve the explicit goal set for xer. Your optimism, your belief that you can easily specify a goal (in computer code, not even English words) which admits of no undesirable creative shortcuts, is grossly misplaced once you bring smarter-than-human agents into the discussion. You cannot patch this problem; it has to be rigorously solved, or your AI wrecks the world.
↑ comment by RobinZ · 2010-03-22T02:55:43.047Z · LW(p) · GW(p)
Given the option, I'd take personal survival even at the cost of accurate perception and ability to act, but it's not a decision I expect to be in the position of needing to make: an entity motivated to provide me with information that improves my ability to make predictions would not want to kill me, since any incoming information that causes my death necessarily also reduces my ability to think.
Sure, but I don't want to be locked in a box watching a light blink very predictably on and off.
Replies from: Strange7
↑ comment by Strange7 · 2010-03-22T03:07:09.469Z · LW(p) · GW(p)
Building the box reduces your ability to predict anything taking place outside the box. Even if the box can be sealed perfectly until the end of time without killing you (which would in itself be a surprise to anyone who knows thermodynamics), cutting off access to compilations of medical research reduces your ability to predict your own physiological reactions. Same goes for screwing with your brain functions.
Replies from: RobinZ
↑ comment by RobinZ · 2010-03-22T03:10:16.443Z · LW(p) · GW(p)
I do not think you should be as confident as you are that your system is bulletproof. You have already had to elaborate and clarify and correct numerous times to rule out various kinds of paperclipping failures - attacking the problem this way, all it takes is one forgotten elaboration or clarification or correction to allow for a new one.
Replies from: Strange7
↑ comment by Strange7 · 2010-03-22T03:33:05.461Z · LW(p) · GW(p)
How confident do you think I am that my plan is bulletproof?
Replies from: RobinZ
↑ comment by RobinZ · 2010-03-22T03:35:26.421Z · LW(p) · GW(p)
Given that you asked me the question, I reckon you give it somewhere between 1:100 and 2:1 odds of succeeding. I reckon the odds are negligible.
Replies from: Strange7
↑ comment by Strange7 · 2010-03-22T03:45:52.777Z · LW(p) · GW(p)
That's our problem right there: you're trying to persuade me to abandon a position I don't actually hold. I agree that an AI based strictly on a survey of all historical humans would have negligible chance of success, simply because a literal survey is infeasible and any straightforward approximation of it would introduce unacceptable errors.
Replies from: RobinZ
↑ comment by RobinZ · 2010-03-22T04:01:29.693Z · LW(p) · GW(p)
...why are you defending it, then? I don't even see that thinking along those lines is helpful.
Replies from: Strange7
↑ comment by Strange7 · 2010-03-22T04:22:35.181Z · LW(p) · GW(p)
For everyone else, it was a chance to identify flaws in a proposition. No such thing as too much practice there. For me, it was a chance to experience firsthand the thought processes involved in defending a flawed proposition, necessary practice for recognizing other such flawed beliefs I might be holding; I had no religious upbringing to escape, so that common reference point is missing.
Furthermore, I knew from the outset that such a survey wouldn't be practical, but I've been suspicious of CEV for a while now. It seems like it would be too hard to formalize, and at the same time, even if successful, too far removed from what people spend most of their time caring about. I couldn't be satisfied that there wasn't a better way to do it until I'd tried to find such a way myself.
Replies from: orthonormal
↑ comment by orthonormal · 2010-03-22T04:27:50.887Z · LW(p) · GW(p)
It's polite to give some signal that you're playing devil's advocate if you know you're making weak arguments.
I couldn't be satisfied that there wasn't a better way to do it until I'd tried to find such a way myself.
This is not a sufficient condition for establishing the optimality of CEV. Indeed, I'm not sure there isn't a better way (nor even that CEV is workable), just that I have at present no candidates for one.
Replies from: Strange7
↑ comment by Strange7 · 2010-03-22T04:45:11.074Z · LW(p) · GW(p)
I apologize. I thought I had discharged the devil's-advocacy-signaling obligation by ending my original post on the subject with a request to be proved wrong.
I agree that personal satisfaction with CEV isn't a sufficient condition for it being safe. For that matter, having proposed and briefly defended this one alternative isn't really sufficient for my personal satisfaction in either CEV's adequacy or the lack of a better option. But we have to start somewhere, and if someone did come up with a better alternative to CEV, I'd want to make sure that it got fair consideration.
↑ comment by PhilGoetz · 2010-03-21T23:36:00.076Z · LW(p) · GW(p)
Your trilobite example is at odds with your everyone-who-lived strategy. The impact of the trilobite example is to show that CEV is fundamentally wrong, because trilobite cognition, no matter how far you extrapolate it, would never lead to love, or value it if it arose by chance.
Some degree of randomness is necessary to allow exploration of the landscape of possible worlds. CEV is designed to prevent exploration of that landscape.
Replies from: orthonormal, orthonormal, Vladimir_Nesov
↑ comment by orthonormal · 2010-03-22T03:48:08.517Z · LW(p) · GW(p)
Let me expand upon Vladimir's comment:
Some degree of randomness is necessary to allow exploration of the landscape of possible worlds. CEV is designed to prevent exploration of that landscape.
You have not yet learned that a certain argumentative strategy against CEV is doomed to self-referential failure. You have just argued that "exploring the landscape of possible worlds" is a good thing, something that you value. I agree, and I think it's a reflectively consistent value, which others generally share at some level and which they might share more completely if they knew more, thought faster, had grown up farther together, etc.
You then assume, without justification, that "exploring the landscape of possible worlds" will not be expressed as a part of CEV, and criticize it on these grounds.
Huh? What friggin' definition of CEV are you using?!?
EDIT: I realized there was an insult in my original formulation. I apologize for being a dick on the Internet.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-22T19:21:44.643Z · LW(p) · GW(p)
You then assume, without justification, that "exploring the landscape of possible worlds" will not be expressed as a part of CEV, and criticize it on these grounds.
Because EY has specifically said that that must be avoided, when he describes evolution as something dangerous. I don't think there's any coherent way of saying both that CEV will constrain future development (which is its purpose), and that it will not prevent us from reaching some of the best optimums.
Most likely, all the best optimums lie in places that CEV is designed to keep us away from, just as trilobite CEV would keep us away from human values. So CEV is worse than random.
Replies from: Mitchell_Porter, Nick_Tarleton
↑ comment by Mitchell_Porter · 2010-03-23T09:43:10.101Z · LW(p) · GW(p)
Most likely, all the best optimums lie in places that CEV is designed to keep us away from, just as trilobite CEV would keep us away from human values.
That a "trilobite CEV" would never lead to human values is hardly a criticism of CEV's effectiveness. The world we have now is not "trilobite friendly"; trilobites are extinct!
CEV, as I understand it, is very weakly specified. All it says is that a developing seed AI chooses its value system after somehow taking into account what everyone would wish for, if they had a lot more time, knowledge, and cognitive power than they do have. It doesn't necessarily mean, for example, that every human being alive is simulated, given superintelligence, and made to debate the future of the cosmos in a virtual parliament. The combination of better knowledge of reality and better knowledge of how the human mind actually works may make it extremely clear that the essence of human values, extrapolated, is XYZ, without any need for a virtual referendum, or even a single human simulation.
It is a mistake to suppose, for example, that a human-based CEV process will necessarily give rise to a civilizational value system which attaches intrinsic value to such complexities as food, sex, or sleep, and which will therefore be prejudiced against modes of being which involve none of these things. You can have a value system which attributes positive value to human beings getting those things, not because they are regarded as intrinsically good, but because entities getting what they like is regarded as intrinsically good.
If a human being is capable of proposing a value system which makes no explicit mention of human particularities at all (e.g. Ben Goertzel's "growth, choice, and joy"), then so is the CEV process. So if the worry is that the future will be kept unnecessarily anthropomorphic, that is not a valid critique. (It might happen if something goes wrong, but we're talking about the basic idea here, not the ways we might screw it up.)
You could say, even a non-anthropomorphic CEV might keep us away from "the best optimums". But let's consider what that would mean. The proposition would be that even in a civilization making the best, wisest, most informed, most open-minded choices it could make, it still might fall short of the best possible worlds. For that to be true, must it not be the case that those best possible worlds are extremely hard to "find"? And if you propose to find them by just being random, must there not be some risk of instead ending up in very bad futures? This criticism may be comparable to the criticism that rational investment is a bad idea, because you'd make much more money if you won the lottery. If these distant optima are so hard to find, even when you're trying to find good outcomes, I don't see how luck can be relied upon to get you there.
This issue of randomness is not absolute. One might expect a civilization with an agreed-upon value system to nonetheless conduct fundamental experiments from time to time. But if there were experiments whose outcomes might be dangerous as well as rewarding, it would be very foolish to just go ahead and do them because if we get lucky, the consequences would be good. Therefore, I do not think that unconstrained evolution can be favored over the outcomes of non-anthropomorphic CEV.
↑ comment by Nick_Tarleton · 2010-03-22T20:43:33.486Z · LW(p) · GW(p)
Because EY has specifically said that that must be avoided, when he describes evolution as something dangerous.
That doesn't mean that you can't examine possible trajectories of evolution for good things you wouldn't have thought of yourself, just that you shouldn't allow evolution to determine the actual future.
I don't think there's any coherent way of saying both that CEV will constrain future development (which is its purpose), and that it will not prevent us from reaching some of the best optimums.
I'm not sure what you mean by "constrain" here. A process that reliably reaches an optimum (I'm not saying CEV is such a process) constrains future development to reach an optimum. Any nontrivial (and non-self-undermining, I suppose; one could value the nonexistence of optimization processes or something) value system, whether "provincially human" or not, prefers the world to be constrained into more valuable states.
Most likely, all the best optimums lie in places that CEV is designed to keep us away from
I don't see where you've responded to the point that CEV would incorporate whatever reasoning leads you to be concerned about this.
↑ comment by orthonormal · 2010-03-22T04:00:58.079Z · LW(p) · GW(p)
Or to take one step back:
It seems that you think there are two tiers of values, one consisting of provincial human values, and another consisting of the true universal values like "exploring the landscape of possible worlds". You worry that CEV will catch only the first group of values.
From where I stand, this is just a mistaken question; the values you worry will be lost are provincial human values too! There's no dividing line to miss.
Replies from: PhilGoetz, PhilGoetz
↑ comment by PhilGoetz · 2010-03-22T19:02:29.393Z · LW(p) · GW(p)
I understand what you're saying, and I've heard that answer before, repeatedly; and I don't buy it.
Suppose we were arguing about the theory of evolution in the 19th century, and I said, "Look, this theory just doesn't work, because our calculations indicate that selection doesn't have the power necessary." That was the state of things around the turn of the century, when genetic inheritance was assumed to be analog rather than discrete.
An acceptable answer would be to discover that genes were discrete things that an organism had just 2 copies of, and that one was often dominant, so that the equations did in fact show that selection had the necessary power.
An unacceptable answer would be to say, "What definition of evolution are you using? Evolution makes organisms evolve! If what you're talking about doesn't lead to more complex organisms, then it isn't evolution."
Just saying "Organisms become more complex over time" is not a theory of evolution. It's more like an observation of evolution. A theory means you provide a mechanism and argue convincingly that it works. To get to a theory of CEV, you need to define what it's supposed to accomplish, propose a mechanism, and show that the mechanism might accomplish the purpose.
You don't have to get very far into this analysis to see why the answer you've given doesn't, IMHO, work. I'll try to post something later this afternoon on why.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-23T03:12:40.555Z · LW(p) · GW(p)
I won't get around to posting that today, but I'll just add that I know that the intent of CEV is to solve the problems I'm complaining about. I know there are bullet points in the CEV document that say, "Renormalizing the dynamic", "Caring about volition," and, "Avoid hijacking the destiny of humankind."
But I also know that the CEV document says,
Since the output of the CEV is one of the major forces shaping the future, I'm still pondering the order-of-evaluation problem to prevent this from becoming an infinite recursion.
and
It may be hard to get CEV right - come up with an AI dynamic such that our volition, as defined, is what we intuitively want. The technical challenge may be too hard; the problems I'm still working out may be impossible or ill-defined. I don't intend to trust any design until I see that it works, and only to the extent I see that it works. Intentions are not always realized.
I think there is what you could call an order-of-execution problem, and I think there's a problem with things being ill-defined, and I think the desired outcome is logically impossible. I could be wrong. But since Eliezer worries that this could be the case, I find it strange that Eliezer's bulldogs are so sure that there are no such problems, and so quick to shoot down discussion of them.
↑ comment by PhilGoetz · 2010-03-22T19:45:15.536Z · LW(p) · GW(p)
This is one of the things I don't understand: If you think everything is just a provincial human value, then why do you care? Why not play video games or watch YouTube videos instead of arguing about CEV? Is it just more fun?
(There's a longish section trying to answer this question in the CEV document, but I can't make sense of it.)
There's a distinction that hasn't been made on LW yet, between personal values and evangelical values. Western thought traditionally blurs the distinction between them, and assumes that, if you have personal values, you value other people having your values, and must go on a crusade to get everybody else to adopt your personal values.
The CEVer position is, as far as I can tell, that they follow their values because that's what they are programmed to do. It's a weird sort of double-think that can only arise when you act on the supposition that you have no free will with which to act. They're talking themselves into being evangelists for values that they don't really believe in. It's like taking the ability to follow a moral code that you know has no outside justification from Nietzsche's "master morality", and combining it with the prohibition against value-creation from his "slave morality".
Replies from: ata
↑ comment by ata · 2010-03-22T20:13:32.147Z · LW(p) · GW(p)
There's a distinction that hasn't been made on LW yet, between personal values and evangelical values. Western thought traditionally blurs the distinction between them, and assumes that, if you have personal values, you value other people having your values, and must go on a crusade to get everybody else to adopt your personal values.
That's how most values work. In general, I value human life. If someone does not share this value, and they decide to commit murder, then I would stop them if possible. If someone does not share this value, but is merely apathetic about murder rather than a potential murderer themselves, then I would cause them to share this value if possible, so there will be more people to help me stop actual murderers. So yes, at least in this case, I would act to get other people to adopt my values, or inhibit them from acting on their own values. Is this overly evangelical? What is bad about it?
In any case, history seems to indicate that "evangelizing your values" is a "universal human value".
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-23T04:01:27.949Z · LW(p) · GW(p)
Groups that didn't/don't value evangelizing their values:
- The Romans. They don't care what you think; they just want you to pay your taxes.
- The Jews. Because God didn't choose you.
- Nietzscheans. Those are their values, dammit! Create your own!
- Goths. (Angst-goths, not Visi-goths.) Because if everyone were a goth, they'd be just like everyone else.
We get into one sort of confusion by using particular values as examples. You talk about valuing human life. How about valuing the taste of avocados? Do you want to evangelize that? That's kind of evangelism-neutral. How about the preferences you have that make one particular private place, or one particular person, or other limited resource, special to you? You don't want to evangelize those preferences, or you'd have more competition. Is the first sort of value the only one CEV works with? How does it make that distinction?
We get into another sort of confusion by not distinguishing between the values we hold as individuals, the values we encourage our society to hold, and the values we want God to hold. The kind of values you want your God to hold are very different from the kind of values you want people to hold, in the same way that you want the referee to have different desires than the players. CEV mushes these two very different things together.
Replies from: ata
↑ comment by Vladimir_Nesov · 2010-03-21T23:43:23.205Z · LW(p) · GW(p)
Some degree of randomness is necessary to allow exploration of the landscape of possible worlds. CEV is designed to prevent exploration of that landscape.
You never learn.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-22T03:40:19.074Z · LW(p) · GW(p)
Folks. Vladimir's response is not acceptable in a rational debate. The fact that it currently has 3 points is an indictment of the Less Wrong community.
Replies from: JGWeissman, thomblake
↑ comment by JGWeissman · 2010-03-22T03:57:12.356Z · LW(p) · GW(p)
Normally I would agree, but he was responding to "Some degree of randomness is necessary". Seriously, you should know that isn't right.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-22T19:19:15.291Z · LW(p) · GW(p)
That post is about a different issue. It's about whether introducing noise can help an optimization algorithm. Sounds similar; isn't. The difference is that the optimization algorithm already knows the function that it's trying to optimize.
The basic problem with CEV is that it requires reifying values in a strange way so that there are atomic "values" that can be isolated from an agent's physical and cognitive architecture; and that (I think) it assumes that we have already evolved to the point where we have discovered all of these values. You can make very general value statements, such as that you value diversity, or complexity. But a trilobite can't make any of those value statements. I think it's likely that there are even more important fundamental value statements to be made that we have not yet conceptualized; and CEV is designed from the ground up specifically to prevent such new values from being incorporated into the utility function.
The need for randomness is not because random is good; it's because, for the purpose of discovering better primitives (values) to create better utility functions, any utility function you can currently state is necessarily worse than random.
Replies from: JGWeissman
↑ comment by JGWeissman · 2010-03-22T20:08:09.001Z · LW(p) · GW(p)
Since when is randomness required to explore the "landscape of possible worlds"? Or the possible values that we haven't considered? A methodical search would be better. How did you miss that lesson from Worse Than Random, when it included an example (the pushbutton combination lock) of exploring a space of potential solutions?
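[For what it's worth, the contrast can be shown in a few lines of Python; this is an illustrative sketch only, and the 5-button, 3-press lock is an invented stand-in for the essay's pushbutton example. Methodical enumeration is guaranteed to finish within the size of the space, while blind random guessing revisits combinations it has already ruled out.]

```python
import itertools
import random

random.seed(0)
DIGITS, LENGTH = 5, 3                   # toy lock: 5 buttons, 3-press codes
space = list(itertools.product(range(DIGITS), repeat=LENGTH))
secret = random.choice(space)

# Methodical search: visits each combination exactly once, so it never
# needs more than len(space) tries.
tries_methodical = next(i for i, combo in enumerate(space, 1) if combo == secret)

# Random search with replacement: may retry combinations it has already
# ruled out, so the expected number of tries is about len(space).
tries_random = 0
while True:
    tries_random += 1
    if random.choice(space) == secret:
        break

print(len(space), "combinations;",
      "methodical:", tries_methodical, "tries;",
      "random:", tries_random, "tries")
```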
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-23T22:42:45.475Z · LW(p) · GW(p)
Okay, you don't actually need randomness, if you can work out a way of doing a methodical variation of all possible parameters.
(For problems of this nature, using random processes allows you to specify the statistical properties that you want the solution to have, which is often much simpler than specifying a deterministic process that has those properties. That's one reason randomness is useful.)
The point I'm trying to make is that you need not to limit yourself to "searching", meaning trying to optimize a function. You can only search when you know what you're looking for. A value system can't be evaluated from the outside. You have to try it on. Rationally, where "rational" means optimizing existing values, you wouldn't do that. So randomness (or a rationally-ordered but irrationally-pursued exploration of parameter space) will lead to places no rational agent would go.
Replies from: JGWeissman
↑ comment by JGWeissman · 2010-03-23T22:50:03.616Z · LW(p) · GW(p)
[EDIT: Wow, the parent comment completely changed since I responded to it. WTF?]
How do you plan to map a random number into a search of a space that you could not explore systematically?
any utility function you can currently state is necessarily worse than random.
According to which utility function?
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-23T23:33:55.425Z · LW(p) · GW(p)
[EDIT: Wow, the parent comment completely changed since I responded to it. WTF?]
I have a bad habit of re-editing a comment for several minutes after first posting it.
How do you plan to map a random number into a search of a space that you could not explore systematically?
Suppose you want to test a program whose input variables are distributed normally. You can write a big complicated equation to sample at uniform intervals from the cumulative distribution function for the Gaussian distribution. Or you can say "x = mean; for i=1 to 10 { x += rnd(2)-1 }".
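[In runnable form, the two routes look something like the following. This is a minimal Python sketch, not the commenter's actual code; it uses the classic sum-of-twelve-uniforms trick rather than the ten-step loop quoted above, and random.normalvariate stands in for the "big complicated equation" route.]

```python
import random
import statistics

random.seed(0)
mean, stdev, n = 0.0, 1.0, 100_000

# Route 1: an exact normal sampler, standing in for sampling via the
# cumulative distribution function.
exact = [random.normalvariate(mean, stdev) for _ in range(n)]

# Route 2: the quick approximation. A sum of 12 uniforms on [0, 1] has
# mean 6 and variance 1, so shifting and scaling gives an approximate
# N(mean, stdev) by the central limit theorem.
approx = [mean + stdev * (sum(random.random() for _ in range(12)) - 6.0)
          for _ in range(n)]

for name, xs in [("exact", exact), ("approximate", approx)]:
    print(name, round(statistics.mean(xs), 3), round(statistics.stdev(xs), 3))
```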
Very often, the only data you know about your space is randomly-sampled data. So you look at that randomly-sampled data, and come up with some simple random model that would generate data with similar properties. The nature of the statistics you've gathered, such as the mean, variance, and correlations between observed variables, makes it very hard to construct a deterministic model that would reproduce those statistics, but very easy to build a random model that does.
Some people really do have the kinds of misconceptions Eliezer was talking about; but the idea that there are hordes of scientists who attribute magical properties to randomness just isn't true. This is not a fight you need to fight. And railing against all use of randomness in the simulation or study of complex processes just puts a big sticker on your head that says "I have no experience with what I'm talking about!"
We're having 2 separate arguments here. I hope you realize that my comment that you originally responded to was not claiming that randomness has some magical power. It was about the need, when considering the future of the universe, for trying things out not just because your current utility function suggests they will have high utility. I used "random" as shorthand for "not directed by a utility function".
According to which utility function?
According to the utility function that your current utility function doesn't like, but that you will be delighted with once you try it out.
Replies from: JGWeissman, Strange7
↑ comment by JGWeissman · 2010-03-24T00:46:48.914Z · LW(p) · GW(p)
Suppose you want to test a program whose input variables are distributed normally. You can write a big complicated equation to sample at uniform intervals from the cumulative distribution function for the gaussian distribution. Or you can say "x = mean; for i=1 to 10 { x += rnd(2)-1 }".
Yes, I understand you can use randomness as an approximate substitute for actually understanding the implications of your probability distributions. That does not really address my point, the randomness does not grant you access to a search space you could not otherwise explore.
Very often, the only data you know about your space is randomly-sampled data. So you look at that randomly-sampled data, and come up with some simple random model that would generate data with similar properties.
If you analyze randomly-sampled data by considering the probability distribution of results for a random sampling, instead of for the specific sampling you actually used, you are vulnerable to the mistake described here.
The nature of the statistics you've gathered, such as the mean, variance, and correlations between observed variables, makes it very hard to construct a deterministic model that would reproduce those statistics, but very easy to build a random model that does.
You can deterministically build a model that accounts for your uncertainty. Having a probability distribution is not the same thing as randomly choosing results from that distribution.
And railing against all use of randomness in the simulation or study of complex processes just puts a big sticker on your head that says "I have no experience with what I'm talking about!"
First of all, I am not "railing against all use of randomness in the simulation or study of complex processes". I am objecting to your claim that "randomness is required" in an epistemological process. Second, you should not presume to warn me about stickers on my head.
I hope you realize that my comment that you originally responded to was not claiming that randomness has some magical power.
You should realize that "randomness is required" does sound very much like "claiming that randomness has some magical power", and if you misspoke, the correct response to the objection would be to admit that you made a mistake and apologize for the miscommunication, not to try to defend the wrong claim.
According to which utility function?
According to the utility function that your current utility function doesn't like, but that you will be delighted with once you try it out.
It appears that you don't understand the purpose of utility functions. I do not want to have a utility function U that maximizes U(U), that assigns to itself higher utility than any other utility function assigns to itself. I want to achieve states of the world that maximize my current utility function.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-24T20:43:39.713Z · LW(p) · GW(p)
You should realize that "randomness is required" does sound very much like "claiming that randomness has some magical power", and if you misspoke, the correct response to the objection would be to admit that you made a mistake and apologize for the miscommunication, not to try to defend the wrong claim.
You mean, for instance, by saying,
Okay, you don't actually need randomness, if you can work out a way of doing a methodical variation of all possible parameters.
I'm not defending the previous wrong claim about "needing randomness". I'm arguing against your wrong claim, which appears to be that one should never use randomness in your models.
It appears that you don't understand the purpose of utility functions. I do not want to have a utility function U that maximizes U(U), that assigns to itself higher utility than any other utility function assigns to itself. I want to achieve states of the world that maximize my current utility function.
It appears that you still don't understand what my basic point is. You can't improve your utility function by a search using your utility function. We have better utility functions than trilobites did. We could not have found them using trilobite utility functions. Trilobite CEV would, if performing optimally, have ruled them out. Extrapolate.
Replies from: JGWeissman
↑ comment by JGWeissman · 2010-03-24T21:42:06.535Z · LW(p) · GW(p)
You mean, for instance, by saying,
Okay, you don't actually need randomness, if you can work out a way of doing a methodical variation of all possible parameters.
Wow, you are actually compounding the rudeness of abusing the edit feature to completely rewrite your comment by then analyzing my response to the original version as if it were responding to the edited version.
I'm arguing against your wrong claim, which appears to be that one should never use randomness in your models.
How did you get from "randomness is never required" to "randomness is never useful"? I acknowledge that sometimes randomness can be a good enough approximate substitute for the much harder strategy of actually understanding the implications of a probability distribution.
It appears that you still don't understand what the argument we're having is about.
I understand your argument. It is wrong. You have not actually responded to my objection. To refute my objection, you would have to explain why I should want to give up my current utility function U0 in favor of some other utility function U such that
(1) U(U) > U0(U0)
even though
(2) U0(U0) > U0(U)
Since U0 is my current utility function, and therefore (2) describes my current wants, you will not be able to convince me that I should be persuaded by (1), which is a meaningless comparison. Adopting U as my utility function does not help me maximize U0.
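[A toy numeric reading of those two inequalities, with numbers invented purely for illustration:]

```python
# Toy numbers: w0 is the world a U0-maximizer steers toward,
# w1 is the world a U-maximizer steers toward.
U0 = {"w0": 10, "w1": 2}     # the agent's current utility function
U  = {"w0": 1,  "w1": 100}   # the rival "delightful once adopted" function

# (1) U(w1) > U0(w0): 100 > 10, but the two numbers live on different
#     scales, so the comparison gives a U0-agent no reason to switch.
# (2) U0(w0) > U0(w1): 10 > 2, and this is the only comparison a
#     U0-agent actually consults, so it declines to switch.
print(U["w1"] > U0["w0"], U0["w0"] > U0["w1"])   # True True
```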
To the extent that trilobites can even be considered to have utility functions, my utility function is better than the trilobite utility function according to my values. The trilobites would disagree. An optimal human CEV would be a human SUCCESS and a trilobite FAIL. Likewise, an optimal trilobite CEV would be a trilobite SUCCESS and a human FAIL. There is no absolute universal utility function that says one of these is better than the others. It is my human values that cause me to say that the human SUCCESS is better.
Replies from: Strange7, PhilGoetz, PhilGoetz
↑ comment by Strange7 · 2010-03-24T21:58:41.244Z · LW(p) · GW(p)
An optimal human CEV would be a human SUCCESS and a trilobite FAIL.
Unless, of course, it turns out that humans really like trilobites and would be willing to devote significant resources to keeping them alive, understanding their preferences, and carrying out those preferences (without compromising other human values). In that case, it's mutual success.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-24T23:05:35.260Z · LW(p) · GW(p)
I'm breaking this out into a separate reply, because it's its own sub-thread:
If no utility function, and hence no world state, is objectively better than any other, then all utility functions are wireheading. Because the only distinction between wireheading and not wireheading is that the wirehead only cares about his/her own qualia, not about states of the world. If the only reason you care about states of the world is because of how your utility function evaluates them - that is to say, what qualia they generate in you - you are a wirehead.
Replies from: JGWeissman
↑ comment by JGWeissman · 2010-03-24T23:14:16.505Z · LW(p) · GW(p)
If the only reason you care about states of the world is because of how your utility function evaluates them - that is to say, what qualia they generate in you - you are a wirehead.
You have it backwards. I do not care about things because of how my utility function evaluates them. Rather, my utility function evaluates things the way it does because of how I care about them. My utility function is a description of my preferences, not the source of them.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-24T23:18:22.679Z · LW(p) · GW(p)
I don't think the order of execution matters here. If there's no objective preference over states of the world, then there's no objective reason to prefer "not wireheading" (caring about states of the world) over "wireheading" (caring only about your percepts).
Replies from: JGWeissman
↑ comment by JGWeissman · 2010-03-24T23:23:01.851Z · LW(p) · GW(p)
There is no "objective" reason to do anything. Knowing that, what are you going to do anyways? Myself, I am still going to things for my subjective reasons.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-24T23:49:55.079Z · LW(p) · GW(p)
Okay; but then don't diss wireheading.
Replies from: wnoise
↑ comment by wnoise · 2010-03-25T20:58:29.774Z · LW(p) · GW(p)
You appear to have an overexpansive definition of wireheading. Having an arbitrary utility function is not the same as wireheading. Wireheading is a very specific sort of alteration of utility functions that we (i.e. most humans, with our current, subjective utility functions, nearly universally) see as very dangerous, because it throws away what we currently care about. Wireheading is a "parochial" definition, not universal. But that's OK.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-25T22:56:06.303Z · LW(p) · GW(p)
What's your definition of wireheading?
I didn't define it as having an arbitrary utility function. I defined it as a utility function that depends only on your qualia.
Replies from: wnoise, Strange7
↑ comment by wnoise · 2010-03-26T00:40:58.666Z · LW(p) · GW(p)
What else can the utility function as implemented by your hardware depend on besides your qualia, and computations derived from your qualia?
Calling utility functions "wireheading" is a category error. Wireheading is either:
1. Directly acting on the machinery that implements one's utility function to trivially satisfy this hardware, i.e. by directly injecting qualia rather than providing the qualia via what they are normally correlated with.
2. More broadly, altering one's utility function to one that is trivial to broadly satisfy, such as by reinforcement via 1.
↑ comment by PhilGoetz · 2010-03-26T01:57:08.339Z · LW(p) · GW(p)
Calling utility functions "wireheading" is a category error.
If you read my original comment, it's clear that I meant wireheading is having a utility function that depends only on your qualia. Or maybe "choosing to have".
What else can the utility function as implemented by your hardware depend on besides your qualia, and computations derived from your qualia?
Huh? So you think there's nothing inside your head except qualia?
Beliefs aren't qualia. Subconscious information isn't qualia.
- Directly acting on the machinery that implements one's utility function to trivially satisfy this hardware, i.e. by directly injecting qualia rather than providing the qualia via what they are normally correlated with.
This sounds like a potentially good definition. But I'm unclear then why anyone using utility theory, and that definition, would object to wireheading. If you've got a utility function, and you can satisfy it, that's the thing to do, right? Why does it matter how you satisfy it? You seem to be saying that the hardware implementation isn't your real utility function, it's just an implementation of it. As if the utility function stood somewhere outside you.
Replies from: wnoise, RobinZ
↑ comment by wnoise · 2010-03-26T11:10:35.983Z · LW(p) · GW(p)
Huh? So you think there's nothing inside your head except qualia?
Beliefs aren't qualia. Subconscious information isn't qualia.
Beliefs and subconscious information are derived from qualia and the information about the external world that they correlate with, no?
Utility functions are a convenient mathematical description to describe preferences of entities in game theory and some decision theories, when these preferences are consistent. It's useful as a metaphor for "what we want", but when used loosely like this, there are troubles.
As applied to humans, this flat-out doesn't work. Empirically and as a general rule, we're not consistent, and most of us can readily be money-pumped. We do not have a nice clean module that weighs outcomes and assigns real numbers to them. Nor do we feed outcome weights into a probability weighting module, and then choose the maximum utility. Our values change on reflection. Heck, we're not even unitary entities. Our consciousness is multi-faceted. There are the left and right brains communicating and negotiating through the corpus callosum. The information immediately accessible to the consciousness, what we identify with, is rather different than the information our subconscious uses. We are a gigantic hack of an intelligence built upon the shifting sands of stimulus-response and reinforcement conditioning. These joints in our selves make it easier to wirehead, and essentially kill our current selves, leaving only animal-level instincts, if that.
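(As a concrete sketch of the money-pump point — the preference cycle and the fee below are invented for the example, not taken from anyone's actual psychology:)

```python
# Hypothetical money pump: an agent with cyclic preferences A > B > C > A
# will pay a small fee for every "upgrade" and can be drained indefinitely.

preferences = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means "x is preferred to y"

def will_trade(current, offered):
    """The agent accepts any trade, plus a small fee, to something it prefers."""
    return (offered, current) in preferences

def money_pump(start_item, fee=0.01, rounds=9):
    cycle = ["A", "B", "C"]
    item, paid = start_item, 0.0
    for _ in range(rounds):
        offer = cycle[(cycle.index(item) - 1) % 3]  # the thing the agent prefers to `item`
        if will_trade(item, offer):
            item, paid = offer, paid + fee
    return item, paid

item, paid = money_pump("A")
print(f"Agent holds {item} again and has paid {paid:.2f} in fees")
# No consistent utility function can represent these preferences, which is why
# "humans can be money-pumped" is evidence against treating us as having one.
```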
But I'm unclear then why anyone using utility theory, and that definition, would object to wireheading. If you've got a utility function, and you can satisfy it, that's the thing to do, right? Why does it matter how you satisfy it? You seem to be saying that the hardware implementation isn't your real utility function, it's just an implementation of it. As if the utility function stood somewhere outside you.
There are multiple utility functions running around here. The basic point was that what I consider important now matters to what choices I make now. The fact that I can make the future me have a new utility function, satisfied by wireheading, does not register positively on my current utility function. In fact, because it throws away almost everything I now care about, I am unlikely to do it now. My goals are "satisfy my current utility function", and are always that, because that's what we mean by the abstraction of utility function. My goals are not to satisfy what preferences I may later have. My goals are not to change my preferences to be easier to satisfy, because that means my current goals are less likely to be satisfied. If my goals change, then they will have changed, and only then will I choose differently. It's not that my utility function stands outside of me: my utility function is part of me. Changing it changes me. It so happens that my utility function would be easily changed if I started directly stimulating my reward center. The reward center is not my utility function, though it is part of the implementation of my decision function (which, if it were coherent, could be summarized in a utility function and a set of probabilities). If we wish to identify the reward circuitry of my brain with a utility function, we've also got to put a few other utility functions in, and entities having these utility functions that are in a non-zero sum game with the reward circuitry.
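(A toy version of that point, with numbers I made up: the choice is scored by the current function U0, so the fact that the post-modification function U would be maximally satisfied never enters into it.)

```python
# Hypothetical world states, scored by the agent's *current* values (U0)
# and by the post-wireheading values (U). All numbers are invented.

def U0(world):
    # Current values: care about goals in the world, and about not erasing those values.
    return 10 * world["goals_achieved"] - 100 * world["values_erased"]

def U(world):
    # Post-wireheading values: only the reward signal matters.
    return 1000 * world["reward_signal"]

keep_values = {"goals_achieved": 1, "values_erased": 0, "reward_signal": 0}
wirehead    = {"goals_achieved": 0, "values_erased": 1, "reward_signal": 1}

# The decision is made *now*, so it is evaluated with U0, not U:
options = {"keep current values": keep_values, "wirehead": wirehead}
choice = max(options, key=lambda name: U0(options[name]))

print({name: U0(w) for name, w in options.items()})  # {'keep current values': 10, 'wirehead': -100}
print("Chosen now:", choice)                          # keep current values
print("U would have rated wireheading:", U(wirehead)) # 1000, but that is irrelevant to the choice
```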
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-26T13:11:11.870Z · LW(p) · GW(p)
Beliefs and subconscious information are derived from qualia and the information about the external world that they correlate with, no?
Not as far as I know, no. You may be equating "qualia" with "percepts". That's not right.
The fact that I can make the future me have a new utility function, satisfied by wireheading, does not register positively on my current utility function. In fact, because it throws away almost everything I now care about, I am unlikely to do it now. My goals are "satisfy my current utility function", and are always that, because that's what we mean by the abstraction of utility function. My goals are not to satisfy what preferences I may later have.
If that analysis were correct, there would be no difficulty about wireheading. It would simply be an error.
There is a difficulty about wireheading, and I'm trying to talk about it. I'm looking at static situations: Is there something objectively wrong with a person plugged into themselves giving themselves orgasms forever?
The LW community has a consensus that there is something wrong with that. Yet they also have a consensus that there are no objective values. These are inconsistent.
You're trying to say that wireheading is an error not because the final wirehead state reached is wrong, but because the path from here to there involved an error. That's not a valid objection, for the reasons you gave in your comment: Humans are messy, and random variation is a natural part of the human hardware and software. And humans have been messy for some time. So if you can become a wirehead by a simple error, many people must already have made that error. And CEV has to incorporate their wirehead preferences equally with everyone else's.
There's something inconsistent about saying that human values are good, but the process generating those values is bad.
Replies from: wnoise↑ comment by wnoise · 2010-03-26T18:55:18.283Z · LW(p) · GW(p)
Not as far as I know, no. You may be equating "qualia" with "percepts". That's not right.
Well, I'm still not convinced there is a useful difference, though I see why philosophers would separate the concepts.
There is a difficulty about wireheading, and I'm trying to talk about it. I'm looking at static situations: Is there something objectively wrong with a person plugged into themselves giving themselves orgasms forever?
There is nothing objectively wrong with that, no.
The LW community has a consensus that there is something wrong with that. Yet they also have a consensus that there are no objective values. These are inconsistent.
The LW community has a consensus that there is something wrong with that judged by our current parochial values that we want to maintain. Not objectively wrong, but widely held inter-subjective agreement that lets us cooperate in trying to steer the future away from a course where everyone gets wireheaded.
You're trying to say that wireheading is an error not because the final wirehead state reached is wrong,
No, I'm saying that the final state is wrong according to my current values. That's what I mean by wrong: against my current values. Because it is wrong, any path reaching it must have an error in it somewhere.
And humans have been messy for some time. So if you can become a wirehead by a simple error, many people must already have made that error.
We haven't had the technology to truly wirehead until quite recently, though various addictions can be approximations.
many people must already have made that error. And CEV has to incorporate their wirehead preferences equally with everyone else's.
Currently, there aren't enough wireheads, or addicts for that matter, to make much of a difference. Those that are wireheads want nothing more than to be wireheads, so I'm not sure that they would affect anything else under CEV. That's one of the horrors of wireheading -- all other values become lost. What we would have to worry about is a proselytizing wirehead, who wishes everyone else would convert. That seems an even harder end-state to reach than a simple wirehead.
Personally, I don't want CEV applied to the whole human race. I think large swathes of the human race hold values that conflict badly with mine, and still would after perfect reflection. Wireheads would just be a small subset of that.
Replies from: Rain, PhilGoetz↑ comment by Rain · 2010-03-27T14:44:09.313Z · LW(p) · GW(p)
Personally, I don't want CEV applied to the whole human race. I think large swathes of the human race hold values that conflict badly with mine, and still would after perfect reflection. Wireheads would just be a small subset of that.
One of my intuitions about human value is that it is highly diverse, and any extrapolation will be unable to find consensus / coherence in the way desired by CEV. As such, I've always thought that the most likely outcome of augmenting human value through the means of successful FAI would be highly diverse subpopulations all continuing to diverge, with a sort of evolutionary pressure for who receives the most resources. Wireheads should be easy to contain under such a scenario, and would leave expansion to the more active groups.
↑ comment by PhilGoetz · 2010-03-26T19:51:17.621Z · LW(p) · GW(p)
We haven't had the technology to truly wirehead until quite recently, though various addictions can be approximations.
I was reverting to my meaning of "wireheading". Sorry about that.
Personally, I don't want CEV applied to the whole human race. I think large swathes of the human race hold values that conflict badly with mine, and still would after perfect reflection. Wireheads would just be a small subset of that.
We agree on that.
I think one problem with CEV is that, to buy into CEV, you have to buy into this idea you're pushing that values are completely subjective. This brings up the question of why anyone implementing CEV would want to include anybody else in the subset whose values are being extrapolated. That would be an error.
You could argue that it's purely pragmatic - the CEVer needs to compromise with the rest of the world to avoid being crushed like a bug. But, hey, the CEVer has an AI on its side.
You could argue that the CEVer's values include wanting to make other people happy, and believes it can do this by incorporating their values. There are 2 problems with this:
1. They would be sacrificing a near-infinite expected utility from propagating their values over all time and space, for a relatively infinitesimal one-time gain of happiness on the part of those currently alive here on Earth. So these have to be CEVers with high discounting of the future (see the toy discounting sketch after this list). Which makes me wonder why they're interested in CEV.
2. Choosing the subset of people who manage to develop a friendly AI and set up CEV strongly selects for people who have the perpetuation of values as their dominant value. If someone claims that he will incorporate other people's values in his CEV at the expense of perpetuating his own values because he's a nice guy, you should expect that he has to date put more effort into being a nice guy than into CEV.
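(A toy discounting comparison for problem 1; all numbers are invented and nothing here comes from the CEV document.)

```python
# Toy comparison: a one-time gain now versus a perpetual stream of value,
# summed with exponential discounting. Values are made up for illustration.

def discounted_stream(per_period_value, discount_factor, periods):
    return sum(per_period_value * discount_factor**t for t in range(periods))

one_time_gain = 100.0    # happiness of those currently alive
stream_value = 1.0       # value per period from propagating one's values

for discount_factor in (0.5, 0.9, 0.999999):
    stream = discounted_stream(stream_value, discount_factor, periods=100_000)
    winner = "one-time gain" if one_time_gain > stream else "perpetual stream"
    print(f"discount factor {discount_factor}: stream worth {stream:,.1f} -> {winner}")
# Only with heavy discounting of the future does the one-time gain win out.
```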
↑ comment by RobinZ · 2010-03-26T02:38:12.503Z · LW(p) · GW(p)
If you've got a utility function, and you can satisfy it, that's the thing to do, right? Why does it matter how you satisfy it? You seem to be saying that the hardware implementation isn't your real utility function, it's just an implementation of it. As if the utility function stood somewhere outside you.
I think I see your point: a wireheading utility function would value (1) for providing the reward with less effort, while a nonwireheading utility function would disvalue (1) for providing the reward without the desideratum.
↑ comment by Strange7 · 2010-03-25T23:08:35.952Z · LW(p) · GW(p)
You should define 'qualia,' then, in such a way that makes it clear how they're causally isolated from the rest of the universe.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-26T00:21:31.167Z · LW(p) · GW(p)
I didn't say they were causally isolated.
If you think that the notion of "qualia" requires them to be causally isolated from the universe (which is my guess at why you even bring the idea up), then the burden is on you to explain why everyone who discusses consciousness except Daniel Dennett is silly.
Replies from: Strange7↑ comment by Strange7 · 2010-03-26T01:03:17.923Z · LW(p) · GW(p)
I didn't say they were causally isolated.
In that case, nothing can be said to depend only on the qualia, because anything that depends on them is also indirectly influenced by whatever the qualia themselves depend on.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-26T01:53:59.377Z · LW(p) · GW(p)
When you say a function depends only on a set of variables, you mean that you can compute the function given the value of those variables.
Replies from: Strange7↑ comment by Strange7 · 2010-03-26T01:59:57.897Z · LW(p) · GW(p)
Emotional responses aren't independent variables, they're functions of past and present sensory input.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-26T02:06:16.394Z · LW(p) · GW(p)
Are there any independent variables in the real world? Variables are "independent" given a particular analysis.
When you say a function depends only on a set of variables, you mean that you can compute the function given the value of those variables. It doesn't matter whether those variables are dependent on other variables.
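(A trivial example of that usage, with made-up names: f below "depends only on" x and y because it is computable from them, even though x and y are themselves computed from something deeper.)

```python
# "f depends only on x and y" = f is computable given x and y,
# regardless of what x and y themselves depend on. Names are arbitrary.

def f(x, y):
    return x * x + y                        # computable from x and y alone

def x_from(world):                          # ...even though x and y are themselves
    return world["temperature"] - world["pressure"]   # functions of deeper variables

def y_from(world):
    return 2 * world["pressure"]

world = {"temperature": 300.0, "pressure": 100.0}
print(f(x_from(world), y_from(world)))      # 40200.0
```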
↑ comment by PhilGoetz · 2010-03-24T22:17:52.581Z · LW(p) · GW(p)
Wow, you are actually compounding the rudeness of abusing the edit feature to completely rewrite your comment by then analyzing my response to the original version as if it were responding to the edited version.
No. That statement is three comments above the comment in which you said I should acknowledge my error. It was already there when you wrote that comment. And I also acknowledged my misstatement in the comment you were replying to, and elaborated on what I had meant when I made the comment.
I acknowledge that sometimes randomness can be a good enough approximate substitute for the much harder strategy of actually understanding the implications of a probability distribution.
Good! We agree.
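(For instance, a standard Monte Carlo estimate, with an arbitrary dice example: random sampling stands in for working out the distribution's implications exactly.)

```python
# Monte Carlo sketch: random sampling as a stand-in for analyzing a
# distribution exactly. The dice example and threshold are arbitrary.
import random

def estimate_p_sum_exceeds(threshold=15, n_dice=4, trials=100_000):
    hits = sum(
        sum(random.randint(1, 6) for _ in range(n_dice)) > threshold
        for _ in range(trials)
    )
    return hits / trials

random.seed(0)
print(estimate_p_sum_exceeds())  # close to the exact answer, with no combinatorics
```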
Since U0 is my current utility function, and therefore (2) describes my current wants, you will not be able to convince me that I should be persuaded by (1), which is a meaningless comparison. Adopting U as my utility function does not help me maximize U0.
Good! We agree again.
To the extent that trilobites can even be considered to have utility functions, my utility function is better than the trilobite utility function according to my values. The trilobites would disagree.
And we agree yet again!
Likewise, an optimal trilobite CEV would be a trilobite SUCCESS and a human FAIL. There is no absolute universal utility function that says one of these is better than the others. It is my human values that cause me to say that the human SUCCESS is better.
And here is where we part ways.
Maybe there is no universal utility function. That's a... I won't say it's a reasonable position, but I understand its appeal. I would call it an over-reasoned position, like when a philosopher announces that he has proved that he doesn't exist. It's time to go back to the drawing board when you come up with that conclusion. Or at least to take your own advice, and stop trying to change the world when you've already said it doesn't matter how it changes.
But to believe that your utility function is nothing special, and still try to take over the universe and force your utility function on it for all time, is insane.
(Yes, yes, I know Eliezer has all sorts of disclaimers in the CEV document about how CEV should not try to take over the universe. I don't believe that it's logically possible; and I believe that his discussions of Friendly AI make it even clearer that his plans require complete control. Perhaps the theory is still vague enough that just maybe there's a way around this; but I believe the burden of proof is on those who say there is a way around it.)
It would be consistent with the theory of utility functions if, in promoting CEV, you were acting on an inner drive that said, "Ooh, baby, I'm ensuring the survival of my utility function. Oh, God, yes! Yes! YES!" But that's not what I see. I see people scribbling equations, studying the answers, and saying, "Hmm, it appears that my utility function is directing me to propagate itself. Oh, dear, I suppose I must, then."
That's just faking your utility function.
I think it's key that the people I'm speaking of who believe utility functions are arbitrary, also believe they have no free will. And it's probably also key that they assume their utility function must assign value to its own reproduction. They then use these two beliefs as an excuse to justify not following through on their belief about the arbitrariness of their utility function, because they think to do so would be logically impossible. "We can't help ourselves! Our utility functions made us do it!" I don't have a clean analysis, but there's something circular, something wrong with this picture.
Replies from: JGWeissman, Strange7↑ comment by JGWeissman · 2010-03-24T23:00:04.849Z · LW(p) · GW(p)
No. That statement is three comments above the comment in which you said I should acknowledge my error.
Let's recap. You made a wrong claim. I responded to the wrong claim. You disputed my response. I refuted your disputation. You attempted to defend your claim. I responded to your defense. You edited your defense by replacing it with the acknowledgment of your mistake. You responded to my response still sort of defending your wrong claim, and attacking me for refuting your wrong claim. I defended my refutation, pointing out that you really did make the wrong claim and continued to defend it. And now you attack my defense, claiming that you did in fact acknowledge your mistake, and this should somehow negate your continued defense after the acknowledgement. Do you see how you are wrong here? When you acknowledge your claim is wrong, you should not at the same time criticize me for refuting your point.
But to believe that your utility function is nothing special, and still try to take over the universe and force your utility function on it for all time, is insane.
I do believe my utility function is special. I don't expect the universe (outside of me, my fellow humans, and any optimizing processes we spawn off) to agree with me. But, like Eliezer says, "We'll see which one of us is still standing when this is over."
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-24T23:22:09.380Z · LW(p) · GW(p)
Let's recap. You made a wrong claim. I responded to the wrong claim. You disputed my response. I refuted your disputation. You attempted to defend your claim. I responded to your defense. You edited your defense by replacing it with the acknowledgment of your mistake.
No, that isn't what happened. I'm not sure which comment the last sentence is supposed to refer to, but I'm p > .8 it didn't happen that way. If it's referring to the statement, "Okay, you don't actually need randomness," I wrote that before I ever saw your first response to that comment. But that doesn't match up with what you just described; there weren't that many exchanges before that comment. It also doesn't match up with anything after that comment, since I still don't acknowledge any such mistake made after that comment.
When you acknowledge your claim is wrong, you should not at the same time criticize me for refuting your point.
We're talking about 2 separate claims. The wrong claim that I made was in an early statement where I said that you "needed randomness" to explore the space of possible utility functions. The right claim that I made, at length, was that randomness is a useful tool. You are conflating my defense of that claim, with defending the initial wrong claim. You've also said that you agree that randomness is a useful tool, which suggests that what is happening is that you made a whole series of comments that I say were attacking claim 2, and that you believe were attacking claim 1.
↑ comment by Strange7 · 2010-03-24T22:59:59.285Z · LW(p) · GW(p)
I'm not planning to tile the universe with myself, I just want myself or something closely isomorphic to me to continue to exist. The two most obvious ways to ensure my own continued existence are avoidance of things that would destroy me, particularly intelligent agents which could devote significant resources to destroying me personally, and making redundant copies. My own ability to copy myself is limited, and an imperfect copy might compete with me for the same scarce resources, so option two is curtailed by option one. Actual destruction of enemies is just an extension of avoidance; that which no longer exists within my light-cone can no longer pose a threat.
Your characterization of my utility function as arbitrary is, itself, arbitrary. Deal with it.
↑ comment by Strange7 · 2010-03-24T00:02:40.291Z · LW(p) · GW(p)
According to the utility function that your current utility function doesn't like, but that you will be delighted with once you try it out.
That description could apply to an overwhelming majority of the possible self-consistent utility functions (which are, last I checked, infinite in number), including all of those which lead to wireheading. Please be more specific.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-24T00:18:51.079Z · LW(p) · GW(p)
Utility function #311289755230920891423. Try it. You'll like it.
I have no solution to wireheading. I think a little wireheading might even be necessary. Maybe "wireheading" is a necessary component of "consciousness", or "value". Maybe all of the good places lie on a continuum between "wireheading" and "emotionless nihilism".
Replies from: Strange7↑ comment by Strange7 · 2010-03-24T03:07:23.247Z · LW(p) · GW(p)
Fallacy of moderation. Besides, wireheading and self-destructive nihilism aren't opposite extremes on a spectrum, they're just failure states within the solution space of possible value systems.
#311289755230920891423.
A string of random numbers is not an explanation.
I have a simple solution to wireheading... simple for me, anyway. I don't like it, so I won't seek it out, nor modify myself in any way that might reasonably cause me to like it or want to seek it out.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-24T17:09:58.825Z · LW(p) · GW(p)
Fallacy of moderation.
The fallacy of moderation is only a fallacy when someone posits a continuum between two things that aren't actually on a continuum. (If they are on a continuum, it's only a fallacy if you have independent means for finding a correct answer to the problem that the arguing groups have made errors on, rather than simply combining their utility functions.) The question I'm raising is whether wireheading is in fact just an endpoint on the same continuum that our favored states lie on.
How do you define wireheading?
I define it as valuing your qualia instead of valuing states of the world. But could something that didn't value its qualia be conscious? Could it have any fun? Would we like to be it? Isn't valuing your qualia part of the definition of what a quale is?
↑ comment by MichaelVassar · 2010-03-21T22:11:16.485Z · LW(p) · GW(p)
I also dislike tweaks, but I think that Eliezer does too. I certainly don't endorse any sort of tweak that I have heard and understood.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-03-22T00:24:38.744Z · LW(p) · GW(p)
FWIW, Eliezer seems to have suggested an anti-selfish-bastard tweak here.
Replies from: MichaelVassar↑ comment by MichaelVassar · 2010-03-22T16:51:13.595Z · LW(p) · GW(p)
Thanks! I'm unhappy to see that, but my preferences are over states of the world, not beliefs, unless they simply strongly favor the belief that they are over states of the world.
Fortunately, we have some time, but that does bode ill I think. OTOH, the general trend, though not the universal trend, is for CEV to look more difficult and stranger with time.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-03-22T16:57:33.097Z · LW(p) · GW(p)
I don't trust CEV. The further you extrapolate from where you are, the less experience you have with applying the virtue you're trying to implement.
Replies from: MichaelVassar↑ comment by MichaelVassar · 2010-03-23T12:46:30.087Z · LW(p) · GW(p)
So you would like experience with the interactions through which our virtues unfold and are developed to be part of the extrapolation dynamic? http://www.google.com/search?q=%22grown+up+further+together%22&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a That always was intended I think.
If that's not what you mean, well, if you can propose alternatives to CEV that don't automatically fail and which also don't look to me like variations on CEV I think you will be the first to do so. CEV is terribly underspecified, so it's hard to think hard about the problem and propose something that doesn't already fall within the current specification.
↑ comment by PhilGoetz · 2010-03-21T22:37:48.096Z · LW(p) · GW(p)
There are several grounds for criticism here. Criticizing CEV by saying, "I think CEV will lead to good dogs, because that's what a lot of people would like," sounds valid to me, but would merit more argumentation (on both sides).
Another problem I mentioned is a possibly fundamental problem with CEV. Is it legitimate to say that when CEV assumes that reasoned extrapolation trumps all existing values, it is not thereby asserting that reason is the primary value? You could argue that reason is just an engine in service of some other value. There's some evidence that that actually works, as demonstrated by the theologians of the Roman Catholic Church, who have a long history of using reason to defeat reason. But I'm not convinced that makes sense. If it doesn't, then it means that CEV already assumes from the start the very kind of value that its entire purpose is to prevent being assumed.
Third, most human values, like dog-values, are neutral with respect to rationality or threatened by rationality. The dog itself needs to not be much more rational or intelligent than it is.
The only solution is to say that the rationality and the values are in the FAI sysop, while the conscious locus of the values is in the humans. That is, the sysop gets smarter and smarter, with dog-values as its value system. It knows that to get the experiential value out of dog-values, the conscious experiencer needs limited cognition; but that's okay, because the humans are the designated experiencers, while the FAI is the designated thinker and keeper-of-the-values.
There are two big problems with this.
1. By keeping the locus of consciousness out of the sysop, we're steering dangerously close to one of the worst-possible-of-all-worlds, which is building a singleton that, one way or the other, eventually ends up using most of the universe's computational energy, yet is not itself conscious. That's a waste of a universe.
2. Value systems are deictic, meaning they use the word "I" a lot. To interpret their meaning, you fill in the "I" with the identity of the reasoning agent. The sysop literally can't have human values if it doesn't have deictic values; and if it has deictic values, they're not going to stay doglike under extrapolation. (You could possibly get around this by using a non-deictic representation, and saying that the values have meaning only when seen in light of the combined sysop+humans system. Like the knowledge of Chinese in Searle's Chinese room.)
The FAI document says it's important to use non-deictic representations in the AI. Aside from the fact that this is probably impossible - cognition is compression, and deictic representations are much more compact, so any intelligence is going to end up using something equivalent to deictic representations - I don't know if it's meaningful to talk about non-deictic values. That would be like saying "I value the taste of chocolate" without saying who is tasting the chocolate. (That's one entry-point into paperclipping scenarios.)
The final, biggest problem illustrated by dog-values is that it's just not sensible to preserve "human values", when human values, even those found within the same person at different times of life, are as different as it is possible for values to be different. Sure, maybe we would have different values if we could see in the ultraviolet, or had seven sexes; but there is just no bigger difference between values than "valuing states of the external world", and "valuing phenomenal perceptions within my head." And there are already humans committed to each of those two fundamental value systems.
↑ comment by simplicio · 2010-03-21T23:31:19.496Z · LW(p) · GW(p)
A Christian just wants to be a good dog. They've found a way to reach that same blissful state themselves.
The materialistic worldview really is gloomy compared to being a dog.
You have a point here. But as you mentioned, we aren't really capable of such a state, nor would it be virtuous to chase after one.
You guys have totally lost me with this AI stuff. I guess there's probably a sequence on it somewhere...
comment by MichaelVassar · 2010-03-21T22:01:50.449Z · LW(p) · GW(p)
I tend to think that the hazard of perverse response to materialism has been fairly adequately dealt with in this community. OTOH, the perverse response to psychology has not. The fact that something is grounded in "status seeking", "conditioning", or "evolutionary motives" generally no more deprives the higher or more naive levels of validity or reality than does materialism, hence my quip that "I believe exactly what Robin Hanson believes, except that I'm not cynical"
Replies from: NancyLebovitz, simplicio↑ comment by NancyLebovitz · 2010-03-21T23:16:56.287Z · LW(p) · GW(p)
If anyone's addressed the interaction between status-seeking, conditioning, and/or evolved drives and the fact that people manage to do useful and sometimes wonderful things anyway, I haven't seen it.
Replies from: MichaelVassar↑ comment by MichaelVassar · 2010-03-22T16:14:01.348Z · LW(p) · GW(p)
I'm just confused. Those terms are short-hand for a model that exists to predict the world. If that model doesn't help you to predict the world, throw the model out, just don't bemoan the world fitting the model if and when it does fit. The world is still the world, as well as being a thing described by a model that is typically phrased cynically.
comment by BenAlbahari · 2010-03-21T14:20:27.768Z · LW(p) · GW(p)
You only included the last sentence of Dawkins' quote. Here's the full quote:
The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won't find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.
The universe is perverse. You have to learn to love it in spite of that.
Replies from: Vladimir_Nesov, MichaelVassar↑ comment by Vladimir_Nesov · 2010-03-22T12:44:18.397Z · LW(p) · GW(p)
The universe is perverse. You have to learn to love it in spite of that.
What? Why would you love the indifferent universe? It has to be transformed.
Replies from: Nisan, BenAlbahari↑ comment by Nisan · 2010-03-23T02:03:18.239Z · LW(p) · GW(p)
Right. Materialism tells us that we're probably going to die and it's not going to be okay; the right way to feel good about it is to do something about it.
↑ comment by BenAlbahari · 2010-03-22T13:04:00.546Z · LW(p) · GW(p)
My attitude is easier to transform than the universe's attitude.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-03-22T13:14:07.313Z · LW(p) · GW(p)
Maybe easier, but is it the right thing to do? Obvious analogy is wireheading. See also: Morality as Fixed Computation.
Replies from: Nick_Tarleton, Nick_Tarleton, byrnema↑ comment by Nick_Tarleton · 2010-03-22T13:28:45.947Z · LW(p) · GW(p)
Emotions ≠ preferences. It may be that something in the vague category "loving the universe" is (maybe depending on your personality) a winning attitude (or more winning than many people's existing attitudes) regardless of your morality. (Of course, yes, in changing your attitude you would have to be careful not to delude yourself about your preferences, and most people advocating changing your attitude don't seem to clearly make the distinction.)
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-03-22T13:35:49.903Z · LW(p) · GW(p)
I certainly make that distinction. But it seems to me that "loving" the current wasteland is not an appropriate emotion. Wireheading is wrong not only when/because you stop caring about other things.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-03-22T14:37:20.725Z · LW(p) · GW(p)
But it seems to me that "loving" the current wasteland is not an appropriate emotion.
Granted. It seems to me that the kernel of truth in the original statement is something like "you are not obligated to be depressed that the universe poorly satisfies your preferences", which (ISTM) some people do need to be told.
Replies from: SoullessAutomaton, byrnema↑ comment by SoullessAutomaton · 2010-03-23T02:56:32.501Z · LW(p) · GW(p)
Since when has being "good enough" been a prerequisite for loving something (or someone)? In this world, that's a quick route to a dismal life indeed.
There's the old saying in the USA: "My country, right or wrong; if right, to be kept right; and if wrong, to be set right." The sentiment carries just as well, I think, for the universe as a whole. Things as they are may be very wrong indeed, but what does it solve to hate the universe for it? Humans have a long history of loving not what is perfect, but what is broken--the danger lies not in the emotion, but in failing to heal the damage. It may be a crapsack universe out there, but it's still our sack of crap.
By all means, don't look away from the tragedies of the world. Figuratively, you can rage at the void and twist the universe to your will, or you can sit the universe down and stage a loving intervention. The main difference between the two, however, is how you feel about the process; the universe, for better or worse, really isn't going to notice.
↑ comment by byrnema · 2010-03-22T15:25:07.787Z · LW(p) · GW(p)
Insisting on being unhappy that the universe poorly satisfies your preferences is certainly contrary, if not perverse. Of course, humans greatly value their ability to imagine and desire that the universe be different. This desire might only be perverse if it is impossible to modify the universe to satisfy your preferences. This is the situation that dissatisfied materialists could find themselves in: a materialistic world is a world that cannot be modified to suit their preferences.
[last paragraph taken out as off-topic and overly speculative]
↑ comment by Nick_Tarleton · 2010-03-22T13:26:09.990Z · LW(p) · GW(p)
Emotions ≠ preferences. It seems likely to me that loving the universe is (maybe depending on your personality) a winning attitude (or is more winning than many people's attitudes) regardless of your morality.
↑ comment by byrnema · 2010-03-22T13:29:36.170Z · LW(p) · GW(p)
There's no need to "transform" the universe. The universe is the same if we modify the universe to satisfy our evolved goals, or we modify our goals to be satisfied by the universe. The latter is at least coherent, whereas the former is persisting in the desire to impose a set of values on the universe even after you've realized those desires are arbitrary and perhaps not even salvageably self-consistent without modification. What kind of intelligence would be interested in that?
To put it another way, as intelligence increases, we will increasingly modify our goals to what is possible. Given the deterministic nature of the universe, that's a lot of modification.
Replies from: Vladimir_Nesov, Multiheaded↑ comment by Vladimir_Nesov · 2010-03-22T13:38:08.515Z · LW(p) · GW(p)
To put it another way, as intelligence increases, we will increasingly modify our goals to what is possible.
A lot more is possible than what is currently present. You don't need to modify unreachable programming, it just doesn't run (until it does).
↑ comment by Multiheaded · 2012-05-13T12:40:23.218Z · LW(p) · GW(p)
I heard lobotomy is an excellent way to do that.
↑ comment by MichaelVassar · 2010-03-21T22:14:32.268Z · LW(p) · GW(p)
The amount of pain in nature is immense. Suffering? I'm not so sure. That's a technical question, even if we don't yet know how to ask the right question. A black widow male is certainly in pain as it's eaten but is very likely not suffering. Many times each day I notice that I have been in pain that I was unaware of. The Continental Philosophy and Women's Studies traditions concern themselves with suffering that people aren't aware of, but don't suggest that such suffering comes in varieties that many animals could plausibly experience.
Replies from: BenAlbahari↑ comment by BenAlbahari · 2010-03-22T01:26:17.206Z · LW(p) · GW(p)
This belief people have that "beings kinda different to me" aren't suffering strikes me as near-far bias cranked up to 11. Perhaps you don't notice the pain because it's relatively minor. I'm assuming you didn't have your leg chewed off.
Replies from: orthonormal, Nick_Tarleton, Morendil↑ comment by orthonormal · 2010-03-22T02:37:43.085Z · LW(p) · GW(p)
This belief people have that "beings kinda different to me" aren't suffering strikes me as near-far bias cranked up to 11.
In some people, perhaps that is the reasoning; but there really is more to this discussion than anthropocentrism.
Suffering as we experience it is actually a very complicated brain activity, and it's virtually certain that the real essence of it is in the brain structure rather than the neurotransmitters or other correlates. AFAIK, the full circuitry of the pain center is common to mammals, but not to birds (I could be wrong), fish, or insects. Similar neurotransmitters to ours might be released when a bug finds itself wounded, and its brain might send the impulse to writhe and struggle, but these are not the essence of suffering.
(Similarly, dopamine started out as the trigger for reinforcing connections in very simple brains, as a feedback mechanism for actions that led to success which makes them more likely to execute next time. It's because of that role that it got co-opted in the vast pleasure/reward/memory complexes in the mammalian brain. So I don't see the release of dopamine in a 1000-neuron brain to be an indication that pleasure is being experienced there.)
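(The feedback role described there can be caricatured in a few lines; the actions, reward, and learning rate below are invented, and this is not a model of any real brain.)

```python
# Minimal reinforcement sketch: a "success" signal strengthens whatever
# action preceded it, making that action more likely in the future.
import random

weights = {"approach": 1.0, "retreat": 1.0}   # action propensities

def choose_action():
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action  # float-rounding fallback

def reinforce(action, reward, learning_rate=0.5):
    weights[action] += learning_rate * reward  # feedback strengthens the connection

random.seed(1)
for _ in range(20):
    a = choose_action()
    reward = 1.0 if a == "approach" else 0.0   # pretend "approach" leads to food
    reinforce(a, reward)

print(weights)  # "approach" has been strengthened relative to "retreat"
```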
Replies from: BenAlbahari↑ comment by BenAlbahari · 2010-03-22T03:32:48.436Z · LW(p) · GW(p)
I agree with your points on pain and suffering; more about that on a former Less Wrong post here.
However, reducing the ocean of suffering still leaves you with an ocean. And that suffering is in every sense of the word perverse. If you were constructing a utopia, your first thought would hardly be "well, let's get these animals fighting and eating each other". Anyone looking at your design would exclaim: "What kind of perverse utopia is that?! Are you sick?!". Now, it may be the case that you could give a sophisticated explanation as to why that suffering was necessary, but it doesn't change the fact that your utopia is perverted. My point is we have to accept the perversion. And denying perversion is simply more perversion.
Replies from: MichaelVassar, orthonormal↑ comment by MichaelVassar · 2010-03-24T19:48:53.675Z · LW(p) · GW(p)
To specify a particular theory, my guess is that suffering is an evolved elaboration on pain unique to social mammals or possibly shared by social organisms of all sorts. It seems likely to me to basically mediate an exchange of long-term status for help from group members now.
Replies from: BenAlbahari↑ comment by BenAlbahari · 2010-03-25T02:56:01.095Z · LW(p) · GW(p)
Perhaps: pain is near-mode; suffering is far-mode. Scenario: my leg is getting chewed off.
Near-mode thinking: direct all attention to attempt to remove the immediate source of pain / fight or flight / (instinctive) scream for attention
Far-mode thinking: reevaluate the longer-term life and social consequences of having my leg chewed off / dwell on the problem in the abstract
↑ comment by orthonormal · 2010-03-22T03:38:15.811Z · LW(p) · GW(p)
I agree with this point, and I'd bet karma at better than even odds that so does Michael Vassar.
Replies from: MichaelVassar↑ comment by MichaelVassar · 2010-03-22T17:04:55.161Z · LW(p) · GW(p)
I agree, but I wonder if my confidence in my extrapolation agreeing is greater or less than your confidence in my agreeing was. I tend to claim very much greater than typical agnosticism about the subjective nature of nearby (in an absolute sense) mind-space. I bet a superintelligence could remove my leg without my noticing and I'm curious as to the general layout of the space of ways in which it could remove my leg and have me scream and express horror or agony at my leg's loss without my noticing.
I really do think that at a best guess, according to my extrapolated values, human suffering outweighs that of the rest of the biosphere, most likely by a large ratio (best guess might be between one and two orders of magnitude). Much more importantly, at a best guess, human 'unachieved but reasonably achievable without superintelligence flourishing' outweighs the animal analog by many orders of magnitude, and if the two can be put on a common scale I wouldn't be surprised if the former is a MUCH bigger problem than suffering. I also wouldn't be shocked if the majority of total suffering in basically Earth-like worlds (and thus the largest source of expected suffering given our epistemic state) comes from something utterly stupid, such as people happening to take up the factory farming of some species which happens, for no particularly good reason, to be freakishly capable of suffering. Sensitivity to long tails tends to be a dominant feature of serious expected utility calculus given my current set of heuristics. The modal dis-value I might put on a pig living its life in a factory farm is under half the median which is under half the mean.
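(That mode/median/mean ordering is just the shape of a sufficiently right-skewed distribution; a lognormal with an arbitrarily chosen sigma shows it.)

```python
# A right-skewed distribution can have mode << median << mean.
# The lognormal with sigma = 1.5 is chosen purely for illustration.
import math

mu, sigma = 0.0, 1.5
mode   = math.exp(mu - sigma**2)
median = math.exp(mu)
mean   = math.exp(mu + sigma**2 / 2)

print(f"mode   = {mode:.3f}")
print(f"median = {median:.3f}")
print(f"mean   = {mean:.3f}")
print("mode < median/2:", mode < median / 2)
print("median < mean/2:", median < mean / 2)
```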
↑ comment by Nick_Tarleton · 2010-03-24T20:03:22.599Z · LW(p) · GW(p)
This belief people have that "beings kinda different to me" aren't suffering strikes me as near-far bias cranked up to 11.
That's surely a common reason, but are you sure you're not letting morally loaded annoyance at that phenomenon prejudice you against the proposition?
The cognitive differences between a human and a cow or a spider go far beyond "kinda", and, AFAIK, nobody really knows what "suffering" (in the sense we assign disutility to) is. Shared confusion creates room for reasonable disagreement over best guesses (though possibly not reasonable disagreement over how confused we are).
↑ comment by Morendil · 2010-03-22T07:14:24.975Z · LW(p) · GW(p)
It doesn't take much near-thinking to draw a distinction between "signals to our brain that are indicative of damage inflicted to a body part" on the one hand, and "the realization that major portions of our life plans have to be scrapped in consequence of damaged body parts" on the other. The former only requires a nervous system, the latter requires the sort of nervous system that makes and cares about plans.
Replies from: BenAlbahari↑ comment by BenAlbahari · 2010-03-22T10:01:41.332Z · LW(p) · GW(p)
Yes, but that assumes this difference is favorable to your hypothesis. David Foster Wallace from "Consider The Lobster":
Lobsters do not, on the other hand, appear to have the equipment for making or absorbing natural opioids like endorphins and enkephalins, which are what more advanced nervous systems use to try to handle intense pain. From this fact, though, one could conclude either that lobsters are maybe even more vulnerable to pain, since they lack mammalian nervous systems’ built-in analgesia, or, instead, that the absence of natural opioids implies an absence of the really intense pain-sensations that natural opioids are designed to mitigate. I for one can detect a marked upswing in mood as I contemplate this latter possibility...
The entire article is here and that particular passage is here. And later:
Still, after all the abstract intellection, there remain the facts of the frantically clanking lid, the pathetic clinging to the edge of the pot. Standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience. To my lay mind, the lobster’s behavior in the kettle appears to be the expression of a preference; and it may well be that an ability to form preferences is the decisive criterion for real suffering.
Replies from: Morendil
↑ comment by Morendil · 2010-03-22T11:15:38.059Z · LW(p) · GW(p)
In this last paragraph (which btw is immediately preceded, in the article, by an observation strikingly similar to mine in the grandparent), I would argue that "frantically" and "pathetic" are projections: the emotions they refer to originate in the viewer's mind, not in the lobster's.
We are demonstrably equipped with mental mechanisms whereby we can observe behaviour in others, and as a result of such observations we can experience "ascribed emotions", which can sometimes take on an intensity not far removed from the sensations that originate in ourselves. That's where our intuition that the lobster is in pain comes from.
Later in the article, the author argues that lobsters "are known to exhibit preferences". Well, plants are known to exhibit preferences; they will for instance move so as to face the sun. We do not infer that plants can experience suffering.
We could build a robot today that would sense aspects of its surrounding such as elevated temperature, and we could program that robot to give a higher priority to its "get the hell away from here" program when such conditions obtained. We would then be in a position to observe the robot doing the same thing as the lobster; we would, quite possibly, experience empathy with the robot. But we would not, I think, conclude that it is morally wrong to put the robot in boiling water. We would say that's a mistake, because we have not built into the robot the degree of personhood which would entitle it to such conclusions.
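(Such a robot really is only a few lines; the behaviour names and threshold below are made up.)

```python
# Hypothetical robot controller: a high temperature reading simply raises the
# priority of a "get away" behaviour above everything else.

HEAT_THRESHOLD_C = 60.0

def choose_behaviour(sensor_readings, current_task="explore"):
    if sensor_readings.get("temperature_c", 20.0) > HEAT_THRESHOLD_C:
        return "get_away_from_heat"     # preempts whatever was running
    return current_task

print(choose_behaviour({"temperature_c": 95.0}))  # get_away_from_heat
print(choose_behaviour({"temperature_c": 22.0}))  # explore
# The robot "exhibits a preference" for cooler surroundings, but nothing here
# suggests it is suffering.
```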
Replies from: RobinZ, khafra↑ comment by RobinZ · 2010-03-22T11:27:32.236Z · LW(p) · GW(p)
cf. "The Soul of the Mark III Beast", Terrel Miedaner, included in The Mind's I, Dennett & Hofstadter.
Replies from: JenniferRM↑ comment by JenniferRM · 2010-03-22T15:34:12.258Z · LW(p) · GW(p)
Trust this community to connect the idea to the reference so quickly. "In Hofstadter we trust" :-)
For those who are not helped by the citation, it turns out that someone thoughtfully posted the relevant quote from the book on their website. I recommend reading it, the story is philosophically interesting and emotionally compelling.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2010-03-22T16:27:12.965Z · LW(p) · GW(p)
The story was also dramatized in a segment of the movie Victim of the Brain, which is available in its entirety from Google Video. The relevant part begins at around 8:40.
Here is the description of the movie:
1988 docudrama about "the ideas of Douglas Hofstadter". It was created by Dutch director Piet Hoenderdos. Features interviews with Douglas Hofstadter and Dan Dennett. Dennett also stars as himself. Original acquired from the Center for Research in Concepts and Cognition at Indiana University. Uploaded with permission from Douglas Hofstadter. Uploaded by Virgil Griffith.
Replies from: JenniferRM
↑ comment by JenniferRM · 2010-03-22T17:15:42.642Z · LW(p) · GW(p)
That was fascinating. A lot of the point of the story - the implicit claim - was that you'd feel for an entity based on the way its appearance and behavior connected to your sympathy - like crying sounds eliciting pity.
In text that's not so hard because you can write things like "a shrill noise like a cry of fright" when the simple robot dodges a hammer. The words used to explain the sound are automatically loaded with mental assumptions about "fright", simply to convey the sound to the reader.
With video the challenge seems like it would be much harder. It becomes more possible that people would feel nothing for some reason. Perhaps for technical reasons of video quality or bad acting, or for reasons more specific to the viewer (desensitized to video violence?), or maybe because the implicit theory about how mind-attribution is elicited is simply false.
Watching it turned out to be interesting on more levels than I'd have thought because I did feel things, but I also noticed the visual tropes that are equivalent to mind laden text... like music playing as the robot (off camera) cries and the camera slowly pans over the wreckage of previously destroyed robots.
Also, I thought it was interesting the way they switched the roles for the naive mysterian and the philosopher of mind, with the mysterian being played by a man and the philosopher being played by a woman... with her hair pinned up, scary eye shadow, and black stockings.
"She's a witch! Burn her!"
↑ comment by khafra · 2010-03-22T14:31:12.036Z · LW(p) · GW(p)
Some Jainists and Buddhists infer that plants can experience suffering. The stricter Jainist diet avoids vegetables that are harvested by killing plants, like carrots and potatoes, in favor of fruits and grains that come voluntarily or from already-dead plants.
Replies from: Morendil↑ comment by Morendil · 2010-03-22T15:12:21.859Z · LW(p) · GW(p)
That's a preference of theirs; fine by me, but not obviously evidence-based.
Replies from: khafra↑ comment by khafra · 2010-03-22T15:27:13.822Z · LW(p) · GW(p)
I don't mean to suggest that plants are clearly sentient, just that it's plausible, even for a human, to have a coherent value system which attempts to avoid the suffering of anything which exhibits preferences.
Replies from: Morendil↑ comment by Morendil · 2010-03-22T16:35:45.407Z · LW(p) · GW(p)
I'd agree with that sentence if you replaced the word "suffering", unsuitable because of its complex connotations, with "killing", which seems adequate to capture the Jainists' intuitions as represented in the link above.
Replies from: RobinZ↑ comment by RobinZ · 2010-03-22T16:41:50.503Z · LW(p) · GW(p)
Although it is relevant to note that the motive may be to avoid suffering - I wasn't there when the doctrine was formed, and haven't read the relevant texts, but it is possible that the presence of apparent preferences was interpreted as implying thus.
comment by Morendil · 2010-03-21T09:54:13.095Z · LW(p) · GW(p)
tuning your ears to the words “just” and “merely.”
Indeed! See also this classic essay by Jerry Weinberg on Lullaby Words. "Just" is one of them, can you think of others before reading the essay? ;)
Replies from: Richard_Kennaway, CronoDAS, NancyLebovitz, Document, Rain↑ comment by Richard_Kennaway · 2010-03-21T21:51:05.682Z · LW(p) · GW(p)
"Fundamentally" and all of its near-synonyms: "really", "essentially", "at bottom", "actually", etc.
Usually, these mean "not". ("How was that party you went to last night?" "Oh, it was all right really.") ("Yes, I kidnapped you and chained you in my basement, but fundamentally, underneath it all, I'm essentially a nice guy.")
Replies from: Morendil↑ comment by Morendil · 2010-03-22T06:52:38.282Z · LW(p) · GW(p)
Good one.
On a related note, I often find myself starting a sentence with "The fundamental issue" - and when I catch myself and ask if what I'm talking about is the single issue that in fact underlies all others, and answer myself "no" - then I revise the sentence to something like "One important issue"... Here the lullaby is in two parts, a) everything is less important than this thing and b) there is only this one thing to care about. It's rarely the case that either is true, let alone both.
↑ comment by CronoDAS · 2010-03-21T19:06:27.243Z · LW(p) · GW(p)
In mathematics, "obvious" is one of those words. It tends to mean "something I don't know how to justify."
Replies from: PeteSchult, nhamann, CronoDAS↑ comment by PeteSchult · 2010-03-22T03:03:08.599Z · LW(p) · GW(p)
A joke along these lines has the math professor claiming that the proof of some statement is trivial. They pause for a moment, think, then leave the classroom. Half an hour later, they come back and say, "Yes, it was trivial."
Replies from: RobinZ↑ comment by RobinZ · 2010-03-22T03:07:46.359Z · LW(p) · GW(p)
I heard about a professor (I think physics) who was always telling his students that various propositions were "simple", despite the fact that the students always struggled to show them. Eventually, the students went to the TA (the one I heard the story from), who told the professor.
So, the next class the professor said, "I have heard that the students do not want me to say 'simple'. I will no longer do so. Now, this proposition is straightforward..."
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2010-03-23T03:04:51.312Z · LW(p) · GW(p)
At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o'clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.
I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him, saying, "And therefore such-and-such is true."
"Why is that?" the guy on the couch asks.
"It's trivial! It's trivial!" the standing guy says, and he rapidly reels off a series of logical steps: "First you assume thus-and-so, then we have Kerchoff's this-and-that; then there's Waffenstoffer's Theorem, and we substitute this and construct that. Now you put the vector which goes around here and then thus-and-so..." The guy on the couch is struggling to understand all this stuff, which goes on at high speed for about fifteen minutes!
Finally the standing guy comes out the other end, and the guy on the couch says, "Yeah, yeah. It's trivial."
We physicists were laughing, trying to figure them out. We decided that "trivial" means "proved." So we joked with the mathematicians: "We have a new theorem -- that mathematicians can prove only trivial theorems, because every theorem that's proved is trivial."
The mathematicians didn't like that theorem, and I teased them about it. I said there are never any surprises -- that the mathematicians only prove things that are obvious.
↑ comment by nhamann · 2010-03-21T19:20:54.085Z · LW(p) · GW(p)
Most of the time I've run into the word "obviously" is in the middle of a proof in some textbook, and my understanding of the word in that context is that it means "the justification of this claim is trivial to see, and spelling it out would be too tedious/would disrupt the flow of the proof."
Replies from: SoullessAutomaton, CronoDAS↑ comment by SoullessAutomaton · 2010-03-21T21:19:55.095Z · LW(p) · GW(p)
I thought the mathematical terms went something like this:
- Trivial: Any statement that has been proven
- Obviously correct: A trivial statement whose proof is too lengthy to include in context
- Obviously incorrect: A trivial statement whose proof relies on an axiom the writer dislikes
- Left as an exercise for the reader: A trivial statement whose proof is both lengthy and very difficult
- Interesting: Unproven, despite many attempts
↑ comment by CronoDAS · 2010-03-21T19:35:19.762Z · LW(p) · GW(p)
Well, that's what it's supposed to mean. One of my professors (who often waxed sarcastic during lectures) described it as a very dangerous word...
Replies from: kpreid↑ comment by NancyLebovitz · 2010-03-21T12:32:45.856Z · LW(p) · GW(p)
Voted up because that's an excellent link.
comment by Pablo (Pablo_Stafforini) · 2010-03-22T11:30:09.261Z · LW(p) · GW(p)
Although I found that scene hilarious, I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit. By all means, be depressed about your chronic indigestion or the Liberal Media or teenagers on your lawn, but not about an event that will happen in 10^14 years, involving a dramatis personae of burnt-out star remnants. Puh-lease. There is infinitely more tragedy happening every second in a cup of buttermilk.
So, what's your argument here? That we shouldn't care about the far future because it is temporally very removed from us? I personally deeply dislike this implication of modern cosmology, because it imposes an upper limit on sentience. I would much prefer that happiness continues to exist indefinitely than that it ceases to exist simply because the universe can no longer support it.
Replies from: khafra↑ comment by khafra · 2010-03-22T14:48:11.910Z · LW(p) · GW(p)
Your personally being inconvenienced by the heat death of the universe is even less likely than winning the powerball lottery; if you wouldn't spend $1 on a lottery ticket, why spend $1 worth of time worrying about the limits of entropy? Sure, it's the most unavoidable of existential risks, but it's vanishingly unlikely to be the one that gets you.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-03-22T15:13:07.378Z · LW(p) · GW(p)
Why should I only emotionally care about things that will affect me?
I don't see any good reason to be seriously depressed about any Far fact; but if any degree of sadness is ever an appropriate response to anything Far, the inevitability of death seems like one of the best candidates.
comment by Academian · 2010-03-21T16:11:46.992Z · LW(p) · GW(p)
What I liked in a nutshell:
What would you prefer to be made of, if not matter?
On behalf of chemicals everywhere, I say: Screw you!
If there is a fact question at stake, take no prisoners; but you don’t get extra points for unnecessary angst.
comment by [deleted] · 2012-05-13T16:47:55.852Z · LW(p) · GW(p)
"Love is Wonderful biochemistry."
"Rainbows are a Wonderful refraction phenomena"
"Morality is a Wonderful expression of preference"
And so on. Let's go out and replace 'just' and 'merely' with 'wonderful' and assorted terms. Let's sneak Awesomeness into reductionism.
Replies from: EphemeralNight, army1987, MarkusRamikin↑ comment by EphemeralNight · 2012-06-24T09:52:18.681Z · LW(p) · GW(p)
This may be the wrong tack. As I pointed out above, I think it likely that the problem lies not in the nature of the phenomenon but in the way a person relates to the phenomenon emotionally. Particularly, that for natural accidents like rainbows, most people simply can't relate emotionally to the physics of light refraction, even if they sort of understand it.
So, I think a more effective tack would be to focus on the experience of seeing the rainbow, rather than the rainbow itself, because if a person is focusing on the rainbow itself, then they inevitably will be disappointed by the reductionist explanation supplanting their instinctive sense of there being something ontologically mental behind the rainbow.
Because, however you word it, the rainbow is just a refraction phenomenon, but when you look at the rainbow and experience the sight of the rainbow there are lots of really awesome things happening in your own brain that are way more interesting than the rainbow by itself is.
I think trying to assign words like "just" or "wonderful" to physical processes that cause rainbows is an example of the Mind Projection Fallacy. So, let's not try to get people excited about what makes the rainbow. Let's try to get people excited about what makes the enjoyment of seeing one.
Replies from: VKS, None↑ comment by VKS · 2012-06-24T16:44:44.676Z · LW(p) · GW(p)
It may be true that saying these things may not get everybody to see the beauty we see in the mechanics of those various phenomena. But perhaps saying "Rainbows are a wonderful refraction phenomenon" can help get across that even if you know that rainbows are refraction phenomena, you can still feel wonder at them in the same way as before. The wonder at their true nature can come later.
I guess what I'm getting at is the difference between "Love is wonderful biochemistry" and "Love is a wonderful consequence of biochemistry". The second, everybody can perceive. The first, less so.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2012-06-24T23:45:43.499Z · LW(p) · GW(p)
that even if you know that rainbows are refraction phenomena, you can still feel wonder at them
This kind of touches on my point. You're talking about two separate physical processes here, and I hold that the latter is the only one worth getting excited about. Or, at least, the only one worth trying to get laypeople excited about.
Replies from: VKS↑ comment by VKS · 2012-06-25T08:14:29.198Z · LW(p) · GW(p)
Eh, both phenomena are things we can reasonably get excited about. I don't see that there's much point in trying to declare one inherently cooler than the other. Different people get excited by different things.
I do see, though, that so long as they think that learning about either the cause of their wonder or the cause of the rainbows will steal the beauty from them, no progress will be made on any front. What I'm trying to say is that once that barrier is down, once they stop seeing science as the death of all magic (so to speak), then progress is much easier. Arguably, only then should you be asking yourself whether to explain to them how rainbows work or why one feels wonder when one looks at them.
Replies from: EphemeralNight↑ comment by EphemeralNight · 2012-07-03T11:10:09.851Z · LW(p) · GW(p)
Okay, maybe we need to taboo "excited".
I do see, though, that so long as they think that learning about either the cause of their wonder or the cause of the rainbows will steal the beauty from them, no progress will be made on any front.
This right here is at the crux of my point. I am predicting that, for your average neurotypical, explaining their wonder produces significantly less feeling of stolen beauty than explaining the rainbow. Because, in the former case, you're explaining something mental, whereas in the latter case, you're explaining something mental away.
The rainbow may still be there, but its status as a Mentally-Caused Thing is not.
Replies from: VKS↑ comment by VKS · 2012-07-03T13:10:18.517Z · LW(p) · GW(p)
If people react badly to having somebody explain how their love works, what makes you think that things will go better with wonder?
And, in a different mental thread, I'm going to posit that really, what you talk about matters much less than how you talk about it, in this context. You can (hopefully) get the point across by demonstrating by example that wonder can survive (and even thrive) after some science. At least if, as I suspect, people can perceive wonder through empathy. So, if you feel wonder, feel it obviously and try to get them to do so also. And just select whatever you feel the most wonder at.
Less dubiously, presentation is fairly important to making things engaging. Now, I would guess that the more familiar you are with a subject, the easier it becomes to make it engaging. So select whether you explain the rainbow or the wonder of rainbows based on that.
Maybe.
I'm speculating.
↑ comment by [deleted] · 2012-06-24T12:28:20.654Z · LW(p) · GW(p)
That is an interesting analysis. I think I might view "just" and "wonderful" more as physically null words, that is, words that do not have any meaning beyond interpretation.
I guess I am just getting too rational to interact with normal people's psychology purely by typical-mindedness.
↑ comment by A1987dM (army1987) · 2012-05-13T19:38:42.413Z · LW(p) · GW(p)
This reminds me of something, though I can't remember for sure which something it was.
↑ comment by MarkusRamikin · 2012-05-13T17:30:35.372Z · LW(p) · GW(p)
Didn't know we were into affirmations around here. I'm gonna need me some pepto...
Replies from: None
comment by Rain · 2010-03-21T12:50:54.830Z · LW(p) · GW(p)
From what I can tell, my framing depends upon my emotions more than the reverse, though there's a bit of a feedback cycle as well.
That is to say, if I am feeling happy on a sunny day, I will say that the amazing universe is carrying me along a bright path of sunshine and joy, providing light to dark places, and friendly faces to accompany me, and holy crap that sunlight's passing millions of miles to warm our lives, how awesome is that?
But if I am feeling depressed on that very same day, I will say that the sun's radiation is slowly breaking down the atoms of my weak flesh on the path toward decay and death while all energy slips into entropy and... well, who really cares, anyway?
The art of not being perverse consists in seeing the same reality as others and agreeing about facts, but perceiving more in an aesthetic sense.
If emotions drive the words, as I feel they do, then this statement, while true, comes from the bright side: "Say happy things, look at the world in a happy way, and you, too, will be happy!"
My dark side disagrees: "There's yet another happy person telling me I shouldn't be depressed, because they're not, and it's not so hard, is it? Great. Thanks for all your help. "
Replies from: simplicio↑ comment by simplicio · 2010-03-21T15:46:34.344Z · LW(p) · GW(p)
If emotions drive the words, as I feel they do, then this statement, while true, comes from the bright side: "Say happy things, look at the world in a happy way, and you, too, will be happy!"
My dark side disagrees: "There's yet another happy person telling me I shouldn't be depressed, because they're not, and it's not so hard, is it? Great. Thanks for all your help. "
I understand how it might sound like that. Of course a sunny disposish is not always possible or even desirable - cheeriness can be equally self-indulgent, and in many ways nature really is trying to kill us.
But there are some fact questions that people feel bad about quite gratuitously. That's what I would like to change. These are the obstacles to human contentedness that people only encounter if they actually go out looking for obstacles, looking for something to feel bad about.
There's lots to legitimately be upset about in this world, lots of suffering endured by people not unlike us. We don't need extra suffering contrived ex nihilo by our minds.
comment by EphemeralNight · 2012-05-13T16:33:33.923Z · LW(p) · GW(p)
I can only conclude that if I took them to see Seurat’s painting “A Sunday Afternoon on the Island of La Grande Jatte," they would earnestly ask me what on earth the purpose of all the little dots was.
... which we might call the disappointment of explicability. “A rainbow is just light refracting.” “The aurora is only a bunch of protons hitting the earth’s magnetic field.” Rationalists are, sadly, not immune to this nasty little meme.
It occurred to me upon reading this that perhaps your analogy about the painting is overlooking something important.
In the case of a beautiful painting, if you examine the chain of causality that led to its existence, you will find within that chain a material system that is the mind and being of the painter. In the case of a rainbow, or an aurora, which, like the painting, is aesthetically pleasing for a human to look upon, the chain of causality that led to its existence does not contain anything resembling our definition of a mind.
In both cases, there exists a real thing, a thing with a reductionist explanation. In both cases a human is likely to be aesthetically pleased by looking at that thing. And, I suspect, in both cases a human's social instincts create a positive emotional response to not just the perceived beauty but to the mind responsible for the existence of said beauty. A human's Map would be marked by that emotional connection, but of course, only in the former case is there actually a mind anywhere in the Territory to correspond to that marking.
It seems possible, even likely, that most of the disappointment you describe is not in the existence of an explanation, but in the fact that the explanation requires the severing of that emotional connection, the erasing from our Map of that which is most important to us--other minds. We want to find/meet/see/understand/etc. the mind that caused our feeling of aesthetic pleasure, and we hurt when we first understand that there is no mind to find.
That is what I suspect, at least.
Replies from: Friendly-HI↑ comment by Friendly-HI · 2012-05-19T21:39:23.806Z · LW(p) · GW(p)
Brilliant train of thought; there may very well be something to this idea.
I used the painting analogy myself in debating anti-materialists but could always see how that analogy didn't really satisfy them the way it satisfied me, and you've possibly given a valuable clue why.
comment by haig · 2010-03-24T21:18:58.328Z · LW(p) · GW(p)
In my experience, the inability to be satisfied with a materialistic world-view comes down to simple ego preservation, meaning, fear of death and the annihilation of our selves. The idea that everything we are and have ever known will be wiped out without a trace is literally inconceivable to many. The one common factor in all religions or spiritual ideologies is some sort of preservation of 'soul', whether it be a fully platonic heaven like the Christian belief, a more material resurrection like the Jewish idea, or more abstract ideas found in Eastern and New Age ideologies. The root of 'spiritual', 'spirit', is a non-corporeal substance/entity whose main purpose is to contrast itself with the material body. Spirit is that which is not material and so can survive the decay of the material pattern.
In my opinion, THIS IS the hard pill to swallow.
comment by AlanCrowe · 2010-03-21T22:15:52.265Z · LW(p) · GW(p)
If we are nothing but matter in motion, mere chemicals, then there are only molecules drunkenly bumping into each other and physicists are superstitious fools for believing in the macroscopic variables of thermodynamics such as temperature and pressure.
I find the philosophical position of "nothing buttery" silly because, in the name of materialist reductionism, it asks us to give up thermodynamics. It is indeed an example of perverse-mindedness.
Replies from: simplicio↑ comment by simplicio · 2010-03-21T23:22:47.115Z · LW(p) · GW(p)
Not sure what you're arguing against here. Temperature and pressure are explicable in terms of "molecules drunkenly bumping into each other." Or have I misunderstood?
Replies from: AlanCrowe↑ comment by AlanCrowe · 2010-03-22T01:25:28.715Z · LW(p) · GW(p)
The "Nothing But" argument claims that the things explained by materialistic reduction are explained away. In particular, the "Nothing But" argument claims that materialistic reduction, by explaining love and morality and meaning thereby explains them away, destroying them.
The flaw I see in the "Nothing But" argument is that materialistic reduction also explains temperature and pressure. If to explain is necessarily to explain away, then the "Nothing But" argument is not merely claiming that materialistic reduction is trashing love and beauty; the "Nothing But" argument is also claiming that materialistic reduction is trashing temperature and pressure. That is a silly claim and shows that there must be something wrong with the "Nothing But" argument.
I think that there is a socially constructed blind spot around this point. People see that the "Nothing But" argument is claiming that materialistic reduction destroys love, beauty, temperature, and pressure. However claiming that materialistic reduction destroys temperature and pressure is silly. If you acknowledge the point then the "Nothing But" argument is obviously silly, which leaves nothing to discuss, and this is blunt to the point of rudeness. So, for social reasons, we drop the last two and let the "Nothing But" argument make the more modest claim that materialistic reduction destroys love and beauty. Then we can get on with our Arts versus Science bun fight.
In brief, I'm agreeing with you. I just wanted to add a striking example of a meaning above the base level of atoms and molecules. You do not have to look at a pointillist painting to experience the reality of something above the base level. It is enough to breathe on your hand and feel the pressure exerted by the warm air.
Replies from: simplicio↑ comment by simplicio · 2010-03-22T01:31:07.783Z · LW(p) · GW(p)
Oh, I see, sorry for the misunderstanding.
I think that there is a socially constructed blind spot around this point. People see that the "Nothing But" argument is claiming that materialistic reduction destroys love, beauty, temperature, and pressure. However claiming that materialistic reduction destroys temperature and pressure is silly.
Yes! Excellent point. I'm not even sure what "explaining away" means, for that matter. It seems to be another one of these notions that comes with a value judgment dangling from it.
Replies from: JGWeissman↑ comment by JGWeissman · 2010-03-22T02:06:18.898Z · LW(p) · GW(p)
I'm not even sure what "explaining away" means
comment by SoullessAutomaton · 2010-03-21T18:47:04.239Z · LW(p) · GW(p)
It's said that "ignorance is bliss", but that doesn't mean knowledge is misery!
I recall studies showing that major positive/negative events in people's lives don't really change their overall happiness much in the long run. Likewise, I suspect that seeing things in terms of grim, bitter truths that must be stoically endured has very little to do with what those truths are.
Replies from: ktismael, CronoDAS↑ comment by ktismael · 2010-03-23T14:35:00.109Z · LW(p) · GW(p)
I recall reading (One of Tyler Cowen's books, I think) that happiness is highly correlated with capacity for self-deception. In this case, positive / negative events would have little impact, but not necessarily because people accepted them, but more because the human brain is a highly efficient self-deception machine.
Similarly, a tendency toward depression correlated with an ability to make more realistic predictions about one's life. So I think it may in fact be a particular aspect of human psychology that encourages self-deception and responds negatively to reality.
None of this is to say that these effects can't be reduced or eliminated through various mental techniques, but I don't think it's sufficient to just assert it as cultural.
comment by Mitchell_Porter · 2010-03-21T10:50:24.822Z · LW(p) · GW(p)
Let's talk about worldviews and the sensibilities appropriate to them. A worldview is some thesis about the nature of reality: materialism, solipsism, monotheism, pantheism, transhumanism, etc. A sensibility is an emotion or a complex of emotions about life.
Your thesis is: rationalist materialism is the correct worldview; its critics say negative things about its implications for sensibility; and some of us are accepting those implications, but incorrectly. Instead we can (should?) feel this other way about reality.
My response to all this is mostly at the level of worldview. I don't have your confidence that I have the basics of reality sorted out. I have confidence that I have had a certain sequence of experiences. I expect the world and the people in it to go on behaving, and responding to me, in a known range of ways, but I do not discount the possibility of fundamental changes or novelties in the future. I can picture a world that is matter in motion, and map it onto certain aspects of experience and the presumed history of the world, but I'm also aware of many difficulties, and also of the rather hypothetical nature of this mapping from the perspective of my individual knowledge. I could be dreaming; these consistencies might show themselves to be superficial or nonsensical if I awoke to a higher stage of lucidity. Even without invoking the skeptical option, I would actually expect an account of the world which fully encompassed what I am, and embedded it into a causal metaphysics, to have a character rather different, and rather richer, than the physics we actually have. I'm also aware that there are limits to my own understanding of basic concepts like existence, cause, time and so forth, and that further progress here might not only change the way I feel about reality, but might reveal vast new tracts of existence I had not hitherto suspected. On a personal level, the possible future transmutations of my own being remain unknown, though the experience of others suggests that it ends in the grave.
So much for criticism at the level of worldview. At the level of sensibility... it seems to me that Dawkins grasps the implications of his worldview better than Watts (that is, if one reads Watts as an expression of the same facts under a different sensibility). There is agony as well as wonder in the materialist universe. Most of it consists of empty cosmic tedium and lifeless realms occasionally swept by vast violences (but of course, this is already a strong supposition about the nature of the rest of the universe, namely that it's a big desert), but life in our little bubble of air and water can surely be viewed as vicious and terrible without much difficulty. That we come from the world does not mean we will inevitably manage to make our peace with it.
Mostly you talk about various forms of nihilism and self-alienation as emotional errors. I think that both the nihilism and the "joy in the merely real" come from a sort of subjective imagining and have very little connection to knowledge. The people for whom materialism threatens nihilism at first imagine themselves to be living in one sort of world; then, they imagine another sort of world, and they have those responses. Meanwhile, the self-identified materialists have been having their experiences while already imagining themselves to be living in a materialist world, so they don't see a problem.
Now in general I am unimpressed (to say the least) with the specific materialistic accounts of subjectivity that materialists have to offer. So I think that the reflections of a typical materialist on how their feelings are really molecules, or whatever, are really groundless daydreams not much removed from a medieval astronomer thrilling to the thought of the celestial spheres. It's just you imagining how it works, and you're probably very wrong about the details.
However, I don't think these details actually play much role in the everyday well-being of materialists anyway. Insofar as they are mentally healthy, it is because things are functioning well at the level of subjectivity, psychological self-knowledge, and so forth. Belief that everything is made of atoms isn't playing a role here. So the real question is, what's going on in the non-materialist or the reluctant materialist, for their mental health to be disturbed by the adoption of such a belief? That is an interesting topic of psychology that might be explored. I think you get a few aspects of it right, but that it is far more subtle and diverse than you allow for. There may be psychological makeups where the nihilist response really is the appropriate emotional reaction to the possibility or the subjective certainty of materialism.
But for me the bottom line is this: discussing rationalist materialism as a total worldview simply reminds me of just how tentative, incomplete, and even problematic such a worldview is, and it impels me to make further efforts towards actually knowing the truth, rather than just lingering in the aesthetics made available by acceptance of one particular possibility as reality.
Replies from: simplicio, torekp↑ comment by simplicio · 2010-03-22T00:02:06.863Z · LW(p) · GW(p)
My response to all this is mostly at the level of worldview. I don't have your confidence that I have the basics of reality sorted out... I could be dreaming; these consistencies might show themselves to be superficial or nonsensical if I awoke to a higher stage of lucidity... I'm also aware that there are limits to my own understanding of basic concepts like existence, cause, time and so forth, and that further progress here might not only change the way I feel about reality, but might reveal vast new tracts of existence I had not hitherto suspected.
I think I see what you're saying. However, I feel that hoping for ultimate realities undreamt-of hitherto is giving too much weight to one's own wishes for how the universe ought to be. There is no reason I can think of why the grand nature of reality has to be "richer" than physics (whatever that means). This reality, whether it inspires us or not, is where we find ourselves.
Now in general I am unimpressed (to say the least) with the specific materialistic accounts of subjectivity that materialists have to offer. So I think that the reflections of a typical materialist on how their feelings are really molecules, or whatever, are really groundless daydreams not much removed from a medieval astronomer thrilling to the thought of the celestial spheres. It's just you imagining how it works, and you're probably very wrong about the details.
Well now, I hope you were being facetious when you implied materialists believe that feelings are molecules. You are allowed to be unimpressed by materialist accounts of subjectivity, of course. However, you should seriously consider what kind of account would impress you. An account of subjectivity or consciousness or whatever is kind of like an explanation of a magic trick. It often leaves you with a feeling of "that can't be the real thing!"
↑ comment by torekp · 2010-03-24T00:54:24.680Z · LW(p) · GW(p)
I think that both the nihilism and the "joy in the merely real" come from a sort of subjective imagining and have very little connection to knowledge. The people for whom materialism threatens nihilism at first imagine themselves to be living in one sort of world; then, they imagine another sort of world, and they have those responses. Meanwhile, the self-identified materialists have been having their experiences while already imagining themselves to be living in a materialist world, so they don't see a problem.
Doesn't this support simplicio's thesis? If there's little connection to knowledge - which I take to mean that neither emotional response follows logically from the knowledge - then epistemic rationality is consistent with joy. And where epistemic rationality is not at stake, instrumental rationality favors a joyful response, if it is possible.
comment by Psychohistorian · 2010-03-21T20:19:35.627Z · LW(p) · GW(p)
We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples.
Your interpretation of this is overly charitable. The analogy to the apple tree makes it basically teleological; as apples define an apple tree, people define the earth. This phrasing implies a sort of purpose, importance (how important are apples to an apple tree?) and moral approval. Also, "We are not born into this world" is a false statement. And the process by which the earth generates people is pretty much nothing like the way in which an apple tree produces apples.
Replies from: CronoDAS↑ comment by CronoDAS · 2010-03-22T00:26:20.078Z · LW(p) · GW(p)
And the process by which the earth generates people is pretty much nothing like the way in which the earth generates people.
I think you misspoke there...
Replies from: Psychohistorian↑ comment by Psychohistorian · 2010-03-22T00:43:50.446Z · LW(p) · GW(p)
Touche. Fixed.
comment by Rain · 2010-03-21T13:09:22.727Z · LW(p) · GW(p)
I take exception to this passage, and feel that it is an unnecessary attack:
I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit.
Replies from: SoullessAutomaton, simplicio
↑ comment by SoullessAutomaton · 2010-03-21T17:34:58.466Z · LW(p) · GW(p)
It's a reasonable point, if one considers "eventual cessation of thought due to thermodynamic equilibrium" to have an immeasurably small likelihood compared to other possible outcomes. If someone points a gun at your head, would you be worrying about dying of old age?
Replies from: orthonormal, Rain↑ comment by orthonormal · 2010-03-21T17:55:44.685Z · LW(p) · GW(p)
There are plenty of transhumanists here who believe that (with some nonnegligible probability) the heat death of the universe will be the relevant upper bound on their experience of life.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2010-03-21T18:20:11.839Z · LW(p) · GW(p)
Which is fair enough I suppose, but it sounds bizarrely optimistic to me. We're talking about a time span a thousand times longer than the current age of the universe. I have a hard time giving weight to any nontrivial proposition expected to be true over that kind of range.
↑ comment by Rain · 2010-03-21T19:09:29.326Z · LW(p) · GW(p)
I believe we have a duty to attempt to predict the future as far as we possibly can. I don't see how we can take moral or ethical stances without predicting what will happen as a result of our actions.
Replies from: billswift↑ comment by billswift · 2010-03-21T23:56:26.034Z · LW(p) · GW(p)
We need to predict as far as we can; ethical decision-making requires that we take into account all foreseeable consequences of our actions. But with the unavoidable complexity of society, there are serious limits on how far it is reasonable even to attempt to look ahead; the impossibility of anyone (or even a group) seeing very far is one reason centralized economies don't work. And the complexity of all social interactions is at least an order of magnitude greater than that of strictly economic interactions.
Replies from: Rain↑ comment by Rain · 2010-03-22T19:44:44.790Z · LW(p) · GW(p)
I've been trying to think of a good way to explain my problem with evaluation of [utility | goodness | rightness] given that we're very bad at predicting the future. I haven't had much luck at coming up with something I was willing to post, though I consider the topic extremely important.
For example, how much effort should Clippy put into predicting and simplifying the future (basic research, modeling, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips?
The answer "however much it predicts will be useful" seems like a circular problem.
Replies from: billswift↑ comment by billswift · 2010-03-23T00:34:53.918Z · LW(p) · GW(p)
They are circular problems; they share a general structure with adaptation problems, though, and I have found reading serious books on evolution (some of Dawkins's are particularly good) and on economics (try Sowell's Knowledge and Decisions) to be helpful. These types of problems cannot be solved; at best you can only get incrementally improved answers, depending on the costs of acquiring and analyzing further information versus the expected value of that information.
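To make that stopping rule a bit more concrete, here is a minimal value-of-information sketch in Python. Everything in it is made up for illustration (the two world states, the utilities, the study cost); it is not anyone's actual model from this thread. The idea is simply: compare the expected utility of acting now with the expected utility of acting after learning the true state, and pay for the extra information only if the difference exceeds its cost.

```python
# Minimal sketch of the "gather more information only if it pays" rule.
# All numbers are hypothetical, chosen purely for illustration.

def best_expected_utility(state_probs, action_utilities):
    """Expected utility of the best action under current beliefs."""
    return max(
        sum(p * u for p, u in zip(state_probs, utilities_per_state))
        for utilities_per_state in action_utilities
    )

# Beliefs over two possible world states.
state_probs = [0.5, 0.5]

# Utility of each action (rows) in each state (columns).
action_utilities = [
    [10, -5],  # action A: great in state 0, bad in state 1
    [0, 2],    # action B: safe either way
]

# Option 1: act now on current beliefs.
u_act_now = best_expected_utility(state_probs, action_utilities)

# Option 2: first learn the true state (perfect information), then pick
# the best action for that state; weight by how likely each state is.
u_with_info = sum(
    p * max(utilities[i] for utilities in action_utilities)
    for i, p in enumerate(state_probs)
)

value_of_information = u_with_info - u_act_now  # 3.5 with these numbers
cost_of_information = 1.0

print("Act now:", u_act_now)
print("Expected value after learning the state:", u_with_info)
print("Gather more information first?", value_of_information > cost_of_information)
```

The circularity Rain points at shows up in the fact that the probabilities and utilities feeding this comparison are themselves only estimates, so in practice the comparison is redone incrementally rather than solved once.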
↑ comment by simplicio · 2010-03-21T17:57:08.006Z · LW(p) · GW(p)
I'm sorry you feel that way but, to be honest, I don't repent of my statement. I simply can't imagine why the ultimate fate of an (at that point uninhabited) cosmos should matter to a puny hoo-man (except intellectually). It's like a mayfly worrying about the Andromeda galaxy colliding with the Milky Way.
I think the confusion here is similar to the fear of being dead (not fear of dying). You sort of imagine how horrible it'll be to be a corpse, just sitting around in a grave. But there will be no one there to experience how bad being dead is, and when the universe peters out in the end, no one will be there to be disappointed. If you care emotionally about entropic heat death, you should logically also feel bad every time an ice cube melts.
Replies from: Rain↑ comment by Rain · 2010-03-21T19:07:06.157Z · LW(p) · GW(p)
I care about what to measure (utility function) as much as I care about when to measure it (time function). For any measure, there's a way to maximize it, and I'd like to see whatever measure humans decide is appropriate to be maximized across as much time as possible. So worrying about far future events is important insofar as I'd like my values to be maximized even then.
As for worrying about ice cubes, you're right, it would be inconsistent of me to say otherwise, so I will say that I do. However, I apply a weighted scale of care, and our future galactic empire tends to weigh pretty heavily when compared with something like that.
ETA: Care about ice cube loss is so small I can't feel it. Dealing with entropy / resource consumption, my caring gets large enough I can start feeling it around the point of owning and operating large home appliances, automobiles, etc., and ramps up drastically for things like inefficient power plants, creating new humans, and war.
comment by PeteSchult · 2010-03-22T02:44:46.401Z · LW(p) · GW(p)
On behalf of chemicals everywhere, I say: Screw you! Where would you be without us?
As Monsanto (and some of my user friends :-) ) tells us, "Without chemicals, life itself would be impossible."
More seriously, this post voiced some of the things I've been thinking about lately. It's not that it doesn't all reduce to physics in the end, but the reduction is complicated and probably non-linear, so you have to look at things in a given domain according to the empirically based rules for that domain. Even in chemistry (at least beyond the hydrogen atom, if things are the same as when I was in high school back in the Pleistocene), the reduction to physics is not entirely practical, so chemists develop higher level theories about chemicals rather than lower level "machine language" theories.
Replies from: orthonormal↑ comment by orthonormal · 2010-03-22T02:54:46.804Z · LW(p) · GW(p)
Let me be the first to say, Welcome to Less Wrong!
You're quite right, and your comment touches on some of the topics of the reductionism sequence here, in particular the eponymous post.
comment by CytokineStorm · 2010-03-21T23:40:35.312Z · LW(p) · GW(p)
So facts can fester because you only allow yourself to judge them by their truthfulness, even though your actual relation with them is of a nonfactual nature.
One I had problems with: Humans are animals. It's true, isn't it?! But it's only bothering people for its stereotypical subtext. "Humans are like animals: mindless, violent and dirty."
Festering facts?
Replies from: orthonormal, wedrifid↑ comment by orthonormal · 2010-03-22T00:15:11.488Z · LW(p) · GW(p)
Ah yes, it's time to dust off YSITTBIDWTCIYSTEIWEWITTAW again. Er, make that ADBOC.
Replies from: CytokineStorm↑ comment by CytokineStorm · 2010-03-22T00:19:17.738Z · LW(p) · GW(p)
Oops, sorry about that.
comment by Alex Flint (alexflint) · 2010-03-22T07:43:27.487Z · LW(p) · GW(p)
Bravo for an excellent post!
The one point I want to make is that gloominess is our natural emotional response to many reductionist truths. It is difficult not to see a baseless morality in evolution, hard not to feel worthless before the cosmos, challenging not to perceive meaninglessness in chemical neurology. Perhaps realising the fallacies of these emotional conclusions must necessarily come after the reductionist realisations themselves.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-22T07:48:52.665Z · LW(p) · GW(p)
I'd still deny this. You need the right (wrong) fallacies to jump to those conclusions. Maybe the fallacies are easy to invent, or maybe our civilization ubiquitously primes people with them, but it still takes an extra and mistaken step.
Replies from: byrnema, BenAlbahari, alexflint↑ comment by BenAlbahari · 2010-03-22T08:45:51.383Z · LW(p) · GW(p)
What if the priming is developmental? I wonder if there are any parents out there who have tried to bring up their kids with rational beliefs. E.g. no lies about "bunny heaven"; instead take the kid on a field trip to a slaughterhouse. And if so, how did it affect how well adjusted the kids were?
Replies from: NancyLebovitz, Strange7↑ comment by NancyLebovitz · 2010-03-22T10:46:51.299Z · LW(p) · GW(p)
Insulating children from death is a relatively modern behavior.
For a long time, most people grew up around killing animals for food, and there was still religion.
↑ comment by Alex Flint (alexflint) · 2010-03-22T08:25:38.433Z · LW(p) · GW(p)
I agree. I think it is the particularities of human psychology that lead people to such conclusions. The gloomy conclusions are in no way inherent in the premises.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-03-22T15:10:33.357Z · LW(p) · GW(p)
I think Eliezer is claiming that human psychology does not lead to those conclusions; culturally transmitted errors are required.
comment by orthonormal · 2010-03-21T18:07:59.501Z · LW(p) · GW(p)
We're evolutionarily optimized for the savannah, not for the stars. It doesn't seem to me that our present selves are really as capable of being effortlessly content with our worldview as some of our forebears were, because we have some lingering Wrong Questions and wrong expectations written into our minds. Some part of us really wants to see agency in the basic causal framework of our lives, as much as we know this isn't so.
Now that's not a final prescription for hopelessness, because we can hope not to be running on the same bug-riddled brainware for our entire existence, and because there do exist ways to make the universe much more interesting than it presently is.
But it does mean that it's not a moral failing to be disillusioned with the world, now and then, in a way that our religious next-door neighbor isn't. Taking it to an extreme can signify a lack of understanding and imagination, but some amount of it may well be proper for now.
Replies from: Roko, Sniffnoy↑ comment by Sniffnoy · 2010-03-21T21:20:14.884Z · LW(p) · GW(p)
I think you have an extra negation in the first sentence of your last paragraph?
Replies from: orthonormal↑ comment by orthonormal · 2010-03-21T21:26:29.306Z · LW(p) · GW(p)
No, I think it's right as written. Our religious next-door neighbor may not feel disillusioned, and we might, and this is not necessarily a moral failing in us.
Replies from: Sniffnoy
comment by byrnema · 2010-03-22T12:23:32.282Z · LW(p) · GW(p)
I completely disagree with your post, but I really appreciate it. Perhaps as an artful and accurate node of what people who are satisfied or not satisfied with materialism disagree about.
The materialist in me figures from first principles, that it would seem that life has no meaning, morality has no basis, love is an illusion, everything is futile, etc. This is an intellectual and emotional response dove-tailed together. I would say that the intellectual response is first, and the emotional response comes second, because the melancholy is only there if I dwell on it.
As far as I can tell, the only argument that materialism-satisfied materialists have against the intellectual response that generates my negative emotional response is that they lack a negative emotional response. So I see it quite the other way: satisfied materialists lack the emotional response -- in a nod to the normative tone of your post -- that they should have.
Materialism is very compelling, but it has this flaw in its current (hopefully incomplete) formulation. That's the pill to swallow. I would like to see this problem tackled head on and resolved. (I'll add that admitting that some subset of people are not designed to be happy with materialism would be one resolution.)
Replies from: mattnewport↑ comment by mattnewport · 2010-03-22T16:16:41.837Z · LW(p) · GW(p)
The materialist in me figures from first principles, that it would seem that life has no meaning, morality has no basis, love is an illusion, everything is futile, etc.
Perhaps part of the difference between those who are satisfied/not satisfied with materialism is in what role something other than materialism could play here. I just don't get how any of the non-materialist 'answers' are more satisfying than the materialist ones. If it bothers you that morality is 'arbitrary', why is it more satisfying if it is the arbitrary preferences of god rather than the arbitrary preferences of humans? Just as I don't get how the answer 'because of god' to the question 'why is there something rather than nothing' is more satisfying for some people than the alternative materialist answer of 'it just is'.
As Eliezer says in Joy in the Merely Real:
You might say that scientists - at least some scientists - are those folk who are in principle capable of enjoying life in the real universe.
Replies from: LauraABJ, PhilGoetz, byrnema
↑ comment by LauraABJ · 2010-03-22T17:27:05.857Z · LW(p) · GW(p)
Ok, so I am not a student of literature or religion, but I believe there are fundamental human aesthetic principles that non-materialist religious and wholistic ideas satisfy in our psychology. They try to explain things in large concepts that humans have evolved to easily grasp rather than the minutiae and logical puzzles of reality. If materialists want these memes to be given up, they will need to create equally compelling human metaphor, which is a tall order if we want everything to convey reality correctly. Compelling metaphor is frequently incorrect. My atheist-jewish husband loves to talk about the beauty of scripture and parables in the Christian bible and stands firm against my insistence that any number of novels are both better written and provide better moral guidance. I personally have a disgust reaction whenever he points out a flowery passage about morality and humanity that doesn't make any actual sense. HOW CAN YOU BE TAKEN IN BY THAT? But unlike practicing religious people, he doesn't 'believe' any of it, he's just attracted to it aesthetically, as an idea, as a beautiful outgrowth of the human spirit. Basically, it presses all the right psychological buttons. This is not to say that materialists cannot produce equally compelling metaphors, but it may be a very difficult task, and the spiritualists have a good, I don't know, 10,000 years on us in honing in on what appeals to our primitive psychology.
Replies from: Jack, mattnewport↑ comment by Jack · 2010-03-22T19:27:46.596Z · LW(p) · GW(p)
Why produce new metaphors when we can subvert ones we already know are compelling?
For it is written: The Word of God is not a voice from on High but the whispers of our hopes and desires. God's existence is but His soul, which does not have material substance but resides in our hearts and the Human spirit. Yet this is not God's eternal condition. We are commanded: for the God without a home, make the universe His home. For the God without a body, make Him a body with your own hands. For the God without a mind, make Him a mind like your mind, but worthy of a god. And instill in this mind, in this body, in this universe the soul of God copied from your own heart and the hearts of your brothers and sisters. The Ancients dreamed that God had created the world only because they could not conceive that the world would create God. For God is not the cause of our humility but the unfulfilled aim of our ambition. So learn about the universe so that you may build God a home, learn about your mind so you may build a better one for God, learn about your hopes and desires so that you may give birth to your own savior. With God incarnate will come the Kingdom of God and eternal life.
Replies from: soreff↑ comment by mattnewport · 2010-03-22T17:50:05.223Z · LW(p) · GW(p)
Ok, so I am not a student of literature or religion, but I believe there are fundamental human aesthetic principles that non-materialist religious and wholistic ideas satisfy in our psychology.
I'm wondering whether your statement is true only when you substitute 'some people's' for 'our' in 'our psychology'. I don't feel a god-shaped emotional hole in my psyche. I'm inclined to believe byrnema's self report that she does. I've talked about this with my lapsed-catholic mother and she feels similarly but I just don't experience the 'loss' she appears to.
Whether this is because I never really experienced much of a religious upbringing (I was reading The Selfish Gene at 8, I've still never read the Bible) or whether it is something about our personality types or our knowledge of science, I don't know, but there appears to be an experience of 'something missing' in a materialist world view amongst some people that others just don't seem to have.
Replies from: LauraABJ, Academian↑ comment by LauraABJ · 2010-03-23T00:58:29.174Z · LW(p) · GW(p)
While not everyone experiences the 'god-shaped hole,' it would be dense of us not to acknowledge the ubiquity of spirituality across cultures just because we feel no need for it ourselves (feel free to replace 'us' and 'we' with 'many of the readers of this blog'). Spirituality seems to be an aesthetic imperative for much of humanity, and it will probably take a lot teasing apart to determine what aspects of it are essential to human happiness, and what parts are culturally inculcated.
Replies from: mattnewport, NancyLebovitz↑ comment by mattnewport · 2010-03-23T01:25:35.490Z · LW(p) · GW(p)
Well, coming back to the original comment I was responding to:
The materialist in me figures from first principles, that it would seem that life has no meaning, morality has no basis, love is an illusion, everything is futile, etc.
I don't feel that way, despite being a thoroughgoing materialist for as long as I can remember being aware of the concept. I also don't really see how believing in the 'spiritual' or non-material could change how I feel about these concepts. It does seem to be somewhat common for people to feel that only spirituality can 'save' us from feeling this way but I don't really get why.
I acknowledge that some people do see 'spirituality' (a word that I admittedly have a tenuous grasp on the supposed meaning of) as important to these things which is why I'm postulating that there is some difference in the way of thinking or perhaps personality type of people who don't see a dilemma here and those for whom it is a source of tremendous existential angst.
↑ comment by NancyLebovitz · 2010-03-23T01:40:27.633Z · LW(p) · GW(p)
I think Core transformation offers a plausible theory.
People are capable of feeling oneness, being loved (without a material source) and various other strong positive emotions, but are apt to lose track of how to access them.
Dysfunctional behavior frequently is the result of people jumping to the conclusion that if only some external condition can be met, they'll feel one of those strong positive emotions.
Since the external condition (money, respect, obeying rules) isn't actually a pre-condition for the emotion, but the belief about the purpose of the dysfunctional behavior isn't conscious, the person keeps seeking joy or peace or whatever in the wrong place.
Core transformation is based on the premise that it's possible to track the motives for dysfunctional behavior back to the desired emotion and give the person access to that emotion -- the dysfunctional behavior evaporates, and the person may find other parts of their life getting better.
I've done a little with this system-- enough to think there's at least something to it.
↑ comment by Academian · 2010-03-22T20:35:09.257Z · LW(p) · GW(p)
Do you take awe in the whole of humanity, Earth, or the universe as something greater than yourself? Does it please you to think that even if you die, the universe, life, or maybe even the human race will go on existing long afterward?
Maybe you don't feel the hole because you've already filled it :)
Replies from: mattnewport, RobinZ↑ comment by mattnewport · 2010-03-22T21:46:03.703Z · LW(p) · GW(p)
I've experienced an emotion I think is awe but generally only in response to the physical presence of something in the natural world rather than to sitting and thinking. Being on top of a mountain at sunrise, staring at the sky on a clear night, being up close to a large and potentially dangerous animal and other such experiences have produced the emotion but it is only evoked weakly if at all by sitting and contemplating the universe.
I don't think I have a very firm grip on the varieties of 'religious' experience. I am not really clear on the distinction between awe and wonder for example though I believe they are considered separate emotions.
↑ comment by RobinZ · 2010-03-22T20:59:04.055Z · LW(p) · GW(p)
I can't speak for mattnewport, but I don't take awe, as a rule - I just haven't developed a taste for it. I am occasionally awed, I admit - by acts of cleverness, bravery, or superlative skill, most frequently - but I am rarely rocked back on my heels by "goodness, isn't this universe huge!" and other such observations.
↑ comment by PhilGoetz · 2010-03-23T03:27:05.332Z · LW(p) · GW(p)
Perhaps part of the difference between those who are satisfied/not satisfied with materialism is in what role something other than materialism could play here. I just don't get how any of the non-materialist 'answers' are more satisfying than the materialist ones.
The answers are satisfying because they're not really answers. They're part of a completely different value and belief system - a large, complex structure that has evolved because it is good at generating certain feelings in those who hold it; feelings which hijack those people's emotional systems to motivate them to spread it. Very much like the fly bacteria (or was it a virus?) that reprograms its victims' brains to climb upwards before they die so that their bodies will spread its spores more effectively.
Replies from: tut↑ comment by tut · 2010-03-23T08:52:50.108Z · LW(p) · GW(p)
I think that the standard example of that is a fungus that infects ants. And the bad pun is "Is it just a fluke?" that the ant climbs to the top of a straw, and that its behind gets red and swollen like a berry, so that the birds are sure to eat it.
Replies from: PhilGoetz↑ comment by byrnema · 2010-03-22T21:48:22.283Z · LW(p) · GW(p)
If it bothers you that morality is 'arbitrary', why is it more satisfying if it is the arbitrary preferences of god rather than the arbitrary preferences of humans?
I believe I can answer this question. The question is a misunderstanding of what "God" was supposed to be. (I think theists often have this misunderstanding as well.)
We live in a certain world, and it natural for some people (perhaps only certain personality types) to feel nihilistic about that world. There are many, many paths to this feeling -- the problem of evil, the problem of free will, the problem of objective value, the problem of death, etc. There doesn't seem to be any resolution within the material world so when we turn away from nihilism, as we must, we hope that there's some kind of solution outside the material. This trust, an innate hope, calls on something transcendental to provide meaning.
However you articulate that hope, if you have it, I think that is theism. Humans try and describe what this solution would be explicitly, but then our solution is always limited by our current worldview of what the solution could be (God is the spirit in all living things; God is love and redemption from sin; God is an angry father teaching and exacting justice). In my opinion, religion hasn't kept up with changes in our worldview and is ready for a complete remodeling.
Perhaps we are ready for a non-transcendent solution, as that would seem most appropriate given our worldview in non-religious areas, but I just don't see any solutions yet.
I've been listening carefully, and people who are satisfied with materialism seem to still possess this innate hope and trust; but they are either unable to examine the source of it or they attribute it to something inadequate. For example, someone once told me that for them, meaning came from the freedom to choose their own values instead of having them handed down by God.
But materialism tells us we don't get to choose. We need to learn to be satisfied with being a river, always choosing the path determined by our landscape. The ability to choose would indeed be transcendental. So I think some number of people realized that without something exceptional, we don't have freedom. In religions, this is codified as God is necessary for the possibility of free will.
So if I say 'there is no God', I'm not denying the existence of a supreme being that could possibly take offense. I'm giving up on freedom, value and purpose. I would like to see, in my lifetime, that those things are already embedded in the material world. Then I would still believe in God -- even more so -- but my belief would be intellectually justified and consistent within my current (scientific) world view.
But if the truth is that they're not there, anywhere, I do wonder what it would take to make me stop believing in them.
Replies from: Furcas, Jack↑ comment by Furcas · 2010-03-23T01:00:54.378Z · LW(p) · GW(p)
Without relaunching the whole discussion, there's one thing I'd like to know: Do you acknowledge that the concepts you're "giving up on" ('transcendental' freedom, value, and purpose, as you define them) are not merely things that don't exist, but things that can't exist, like square circles?
Replies from: byrnema↑ comment by Jack · 2010-03-22T22:01:36.271Z · LW(p) · GW(p)
How'd I do here?
Replies from: byrnema↑ comment by byrnema · 2010-03-23T00:47:01.231Z · LW(p) · GW(p)
If God doesn't exist, creating him as the purpose of my existence is something I could get behind.
And then I would want the God of the future to be omnipotent enough to modify the universe so that he existed retroactively, so that the little animals dying in the forest hadn't been alone, after all. (On the day I intensely tried to stop valuing objective purpose, I realized that this image was one of my strongest and earliest attachments to a framework of objective value.)
God wouldn't have to modify the universe in any causal way, he would just need to send information back in time (objective-value-information). Curiosity about the possibility of a retroactive God motivated this thread. If it is possible for a God created in the future to propagate backwards in time, then I would rate the probability of God existing currently as quite nearly 1.
comment by Vladimir_Nesov · 2010-03-21T10:28:36.903Z · LW(p) · GW(p)
Whether something is good is also a factual question.
Replies from: bogdanb↑ comment by bogdanb · 2010-03-21T13:47:09.976Z · LW(p) · GW(p)
Care to elaborate?
Replies from: orthonormal↑ comment by orthonormal · 2010-03-21T17:51:28.071Z · LW(p) · GW(p)
The parent is assuming the naturalistic reduction of morality that EY argued for in the Metaethics Sequence, in which "good" is determined by a currently opaque but nonetheless finite computation (at least for a particular agent, but then there's the additional claim that humanity has enough in common that this answer shouldn't vary between people any significant amount).
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-03-21T20:48:14.093Z · LW(p) · GW(p)
"good" is determined by a currently opaque but nonetheless finite computation
With a finite definition, but not at all finite or even knowable consequences (they are knowably good, but what they are exactly, one can't know).
there's the additional claim that humanity has enough in common that this answer shouldn't vary between people any significant amount
It's going to vary a very significant amount, just a lot less than the distance from any other preference we might happen to construct, and as such, for example, creating a FAI modeled on any single person is hugely preferable for other people to letting an arbitrary AGI develop, even if this AGI was extensively debugged and trained, and looks to possess all the right qualities.
Replies from: bogdanb↑ comment by bogdanb · 2010-04-01T17:33:05.256Z · LW(p) · GW(p)
Well, OK, let’s suppose* I agree with that. Could you elaborate on what that means in the context of the post? (Or link to somewhere where you did, if so.)
(*: Even after re-reading the AID post linked by orthonormal, I’m not sure what you mean by “knowably good” above, but I think that answering to the paragraph above would be more helpful than an abstract discussion.)
comment by ktismael · 2010-03-23T14:55:18.438Z · LW(p) · GW(p)
It has been a while since I've read Watts, but I suspect you're misreading his attitude here. In essence the buddhist (particularly the Zen Buddhist) attitude toward reality is very similar to the materialist view which you endorse. That is, that reality exists, and our opinions about it should be recognized as illusory. This can be confused for nihilism or despair, but really is distinct. Take the universe as it is, and experience it directly, without allowing your expectations of how it should be to affect that experience.
Perhaps he doesn't share this view (though given his background it's hard to believe he wouldn't) although without further context it is difficult to judge from just that quote.
Certainly you can argue about reincarnation and divinity and other aspects of Watts's philosophy that you find irrational or dogmatic. But on this individual case you bring up, I suspect he shares your view, and I think you (OP) are projecting these views based on the assumption that someone who recognizes human life as natural in the same way vegetable life is must consider that a bad thing. But to quote the inscrutable philosophy behind this, that is "perfect in its suchness".
Replies from: simplicio↑ comment by simplicio · 2010-03-23T15:53:16.027Z · LW(p) · GW(p)
I am rather fond of Watts, having read many of his books & listened to his lectures as a youngster. He seems to vacillate between accepting the scientific worldview and inserting metaphysical claims about consciousness as a fundamental phenomenon (as well as other weird claims). For instance, you can find in "The Book on the Taboo..." a wonderful passage about life as "tubes" with an input and an output, playing a huge game of one-upmanship; "this all seems wonderfully pointless," he says, "but after a while it seems more wonderful than pointless."
But in the same book he basically dismisses scientists as trying so hard to be rigorous that they make life not worth living. And you can find him ranting about how Euclid must have been kind of stupid because he started with straight lines (as opposed to organic shapes).
The guy frustrates the hell out of me, because with a couple years of undergrad science under his belt he could've been a correct philosopher as well as an original one.
Replies from: ktismael↑ comment by ktismael · 2010-03-23T17:09:40.299Z · LW(p) · GW(p)
Yeah, I suppose his understanding is not consistent; like most of us, he had blind spots in which emotion takes over. I, too, found him interesting and frustrating as a writer.
Mostly, I wanted to bring up the distinction between nihilism and what I guess I'll refer to as the Buddhist doctrine of "acceptance". I'm not sure how that distinction is to be drawn, since they look quite similar.
Perhaps I could compare it to the difference between agnosticism (or skepticism) and "hard" atheism. The first, here from Dawkins, says "There's probably no god, so quit worrying and enjoy your life." The second, a la Penn Jillette, says "There is no God". Nihilism seems to make a claim to knowledge closer to the second: "Nothing matters". Acceptance seems closer to the first: "It probably doesn't matter whether or not it matters." But I could be full of crap with this whole line of argument.
Anyway, your paraphrase here makes it pretty clear that at least part of the time he suffered from the "mechanism = despair" fallacy, so I suppose it doesn't especially matter here.
Replies from: simplicio↑ comment by simplicio · 2010-03-23T17:26:11.040Z · LW(p) · GW(p)
I think I get the distinction. I suspect Watts would say something like "all of these things - materialism, spiritualism, etc. - are just concepts. Reality is reality." Which sounds nice until you realize he means subjectively experienced reality. Elevating the latter to some sort of superior status is a big mistake, IMO, although the distinction between reality and our conceptions of it is well founded.
Replies from: ktismael↑ comment by ktismael · 2010-03-23T18:00:07.333Z · LW(p) · GW(p)
Well, I hesitate to challenge your reading of Watts, as you've definitely retained more than I have, but I would say that subjectively experienced reality isn't the goal of understanding, but rather an attempt to bring one's perception closer to actual reality. So I suspect that the doctrine of acceptance would say that if your eyes and ears contradict what appears to be actually happening, then you should let your eyes and ears go.
But of course there is always perception bias, and I'm sure the subject is well covered elsewhere on LW. And in Buddhism all of this is weighed down with a lot of mysticism, and even then this is a highly idealized version anyway. For FSM's sake, the majority of Buddhists are sending their prayers up to heaven with incense. So perhaps I should just let it go, eh? :) Anyway, thanks for your comments; they may be helping me settle some of my thoughts on all this.
comment by Nanani · 2010-03-23T01:15:07.998Z · LW(p) · GW(p)
"We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples.”
This statement is patently false in many ways and there is no way to justify saying that "the basic idea is indisputably correct". The basic idea that the OP imputed was not derivable from this statement in any way that I can see. Am I missing some crucial bit of context?
Some non-trivial holes: We ARE born into this world; we do not grow out of it in any sense, even metaphorical (though I think many here hope to accomplish the feat in the future); the Earth is not an agent and does not verb-people.
The more interesting materialism discussion is already vigorous; I chose to focus on a minor point so as not to detract from it.
Replies from: Johnicholas↑ comment by Johnicholas · 2010-03-23T13:08:38.990Z · LW(p) · GW(p)
The claim "we do not grow out of it in any sense, even metaphorical" is overly strong.
Consider: The process of evolution is just as natural as (on the one hand) the process of birth and (on the other hand) the process of hydrogen fusing into helium. Considering "the earth" as an agent in the process of evolution is no more peculiar than considering the earth as an agent in the statement "The earth moves around the sun."
The claim "we are not born into this world" is literally false, but if we read it (from context) against the philosophical notion that "we are born, tabula rasa, into this world, and philosophy is us wondering what to make of it", it is rejecting the notion that humans (or viewpoints, or consciousnesses) are somehow special and atomic, made out of a substance fundamentally incompatible with, say, mud.
comment by andrew sauer (andrew-sauer) · 2021-10-12T19:45:22.156Z · LW(p) · GW(p)
I dunno, man, my angst at the state of the universe isn't that it is meaningless, but that it is all too meaningful and horrible and there is no reason for the horror to ever stop.
comment by [deleted] · 2012-05-16T17:19:09.435Z · LW(p) · GW(p)
Alex Rosenberg argues for the gloomier take on materialism.
From Amazon:
"His bracing and ultimately upbeat book takes physics seriously as the complete description of reality and accepts all its consequences. He shows how physics makes Darwinian natural selection the only way life can emerge, and how that deprives nature of purpose, and human action of meaning, while it exposes conscious illusions such as free will and the self."
comment by stainlesssteelneuron · 2010-03-22T18:33:10.766Z · LW(p) · GW(p)
Brilliant.
comment by [deleted] · 2012-05-16T19:20:59.479Z · LW(p) · GW(p)
I like to do some plain ol' dissolving of my un-updated concept of the world, asking "What did I value about X (in the un-updated version)?" and comparing the result to see whether those features survive in the updated version. Oftentimes I only care about that which is left unchanged, since my starting point is often how normality comes about rather than what normality is. Come to think of it, this sounds somewhat like a rephrasing of EY's stance on reductionism(?).
comment by xamdam · 2010-03-21T19:22:34.938Z · LW(p) · GW(p)
Strangely relevant: "Hard pill in a chewable form": http://www.youtube.com/watch?v=UmjmFNrgt5k
comment by Roko · 2010-03-21T19:16:48.231Z · LW(p) · GW(p)
I worry that some rationalists, while rejecting wooly dualist ideas about ghosts in the machine, have tacitly accepted the dualists’ baseless assumptions about the gloomy consequences of materialism.
The real problem for me has not been that materialism implies in principle that things are going to be gloomy, for example because of lack of free will, souls, consciousness, etc. It is not the rules of physics that I find problematic.
It is the particular arrangement of atoms, the particular initial conditions, that are the issue. Things could be good under materialism, but in fact they are turning out rather more mixed.
Replies from: PhilGoetz, CronoDAS↑ comment by CronoDAS · 2010-03-21T20:07:51.559Z · LW(p) · GW(p)
Personally, I really, really hate the laws of thermodynamics; among other things, they make survival more difficult because I have to eat and maintain my body temperature. It would be nice to be powered by a perpetual motion machine, wouldn't it?
Replies from: Roko, PhilGoetz↑ comment by Roko · 2010-03-21T21:05:58.008Z · LW(p) · GW(p)
You have to critique the rules in aggregate, rather than in isolation.
The laws of thermodynamics are not actually basic laws, by the way - the basic laws are the Standard Model plus gravity. Thermodynamics may be (and probably is) an emergent property of these laws.
↑ comment by PhilGoetz · 2010-03-21T20:18:26.511Z · LW(p) · GW(p)
The laws of physics are the rules without which we couldn't play the game. They make it hard for any one player to win. If you took any of the laws away, you'd probably be a paperclip-equivalent by now. And even if you weren't, living without physics would be like playing tennis without a net. You'd have no goals or desires as we understand them.
Replies from: SoullessAutomaton, Jack↑ comment by SoullessAutomaton · 2010-03-23T03:12:18.386Z · LW(p) · GW(p)
The laws of physics are the rules without which we couldn't play the game. They make it hard for any one player to win.
Except that, as far as thermodynamics goes, the game is rigged and the house always wins. Thermodynamics in a nutshell, paraphrased from C. P. Snow:
- You can't win the game.
- You can't break even.
- You can't stop playing.
↑ comment by [deleted] · 2012-06-24T16:24:09.394Z · LW(p) · GW(p)
Maybe, that is the problem. Can't you look at a coastline and see the beauty of it without thinking about fractals? Can you not enjoy a flower w/o thinking of Phi?
No, why should I? It adds to the awesomeness of coastlines that they are paradoxically unmeasurable, and that flower leaves grow according to repulsion, which results in Fibonacci spiral patterns.
I can already do the simple trick of "that's a pretty thing", but when I think about the maths it gets better.
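(An editorial aside, not part of the original exchange: the Fibonacci-spiral claim can be made concrete with a minimal Python sketch of Vogel's golden-angle model of phyllotaxis. This is a standard simplification; explicit repulsion-between-primordia models are understood to settle into roughly the same divergence angle.)

```python
# A sketch under simplifying assumptions: Vogel's golden-angle model of
# phyllotaxis. Placing each new floret at a fixed divergence angle of
# ~137.5 degrees (the angle that repulsion-based growth models settle into)
# produces the interlocking Fibonacci spirals seen in sunflowers and pine cones.
import math

PHI = (1 + math.sqrt(5)) / 2                 # golden ratio
GOLDEN_ANGLE = 2 * math.pi * (1 - 1 / PHI)   # ~2.4 rad, ~137.5 degrees

def florets(n):
    """Return (x, y) positions of n florets under Vogel's model."""
    points = []
    for k in range(n):
        r = math.sqrt(k)                     # radius grows so packing density stays even
        theta = k * GOLDEN_ANGLE             # constant divergence angle between florets
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

if __name__ == "__main__":
    pts = florets(500)
    print(pts[:3])  # scatter-plot all 500 points to see the spiral families
```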
Also, if by reductionism you are talking about reducing objects down to their interactions, this is where things get unnecessarily complex for the 'normal' folks.
By reductionism I mean the reductionist thesis: The brain has a multi-level model of reality but reality is single-level. Reducing a rainbow means finding out that raindrops cause rainbows by refraction.
The reason why reductionism is important is that, by virtue of the Mind Projection Fallacy, we might be tempted to think that rainbows are fundamental, solely because we haven't reduced them to constituent parts yet, so that our world-model contains an opaque black box called "rainbows."
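(Another editorial aside, not part of the original comment: to make "raindrops cause rainbows by refraction" concrete, here is a minimal sketch that recovers the roughly 42-degree radius of the primary bow from Snell's law and geometry alone, assuming a refractive index of about 1.333 for water.)

```python
# A minimal sketch, assuming n ~= 1.333 for water (the index varies slightly
# with wavelength, which is what spreads the colours). A ray refracts entering
# a spherical droplet, reflects once inside, and refracts on the way out; rays
# pile up near the angle of minimum deviation, which is where the bow appears.
import math

N_WATER = 1.333  # refractive index of water (assumed)

def deviation(theta_i):
    """Total angular deviation (radians) for incidence angle theta_i."""
    theta_r = math.asin(math.sin(theta_i) / N_WATER)  # Snell's law
    return math.pi + 2 * theta_i - 4 * theta_r        # one internal reflection

# Scan incidence angles from just above 0 to just below 90 degrees.
incidence = [math.radians(a / 10) for a in range(1, 900)]
min_dev = min(deviation(t) for t in incidence)

print(round(math.degrees(math.pi - min_dev), 1))  # ~42.0: radius of the primary bow
```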
Reduce down to a distinction between objects and concepts FIRST. Once that is straight, talk about how 'things' interface.
What do you mean?
Replies from: Monkeymind↑ comment by Monkeymind · 2012-06-24T23:55:50.422Z · LW(p) · GW(p)
"What do you mean?"
I may have wrongly assumed (because of your name) that you held the same view as the other plasma cosmologists (the Electric Universe folks) I have been talking with for the last couple of weeks. Their view is that reality is single-level, but 'observable reality' (the multi-level model) is the interface between the brain and reality. Consequently, all their discussions are about the interface (phenomena).
If so, then understanding the difference between an object and a concept might help one come up with ways to make reductionism kewl for the 'normal' folk. Math is an abstract and dynamic language that may be good for describing (predicting) phenomena like rainbows (concepts) but raindrops are static objects and better understood by illustration.
While the math concepts make the rainbow all the more beautiful and wonderful for you, this may not be the case for normal folks. I, for one, have a better "attitude" about so-called knowledge when it makes sense. When I understand the objects involved, the phenomenon is naturally more fascinating.
But as you suggested, I may be totally misunderstanding the Scourge of Perverse-mindedness.
BTW: The negative thumbs are not mine, but most likely your peers trying to tell you not to talk to me. If you doubt this, check my history.... Take care!
↑ comment by [deleted] · 2012-06-24T14:01:23.498Z · LW(p) · GW(p)
You are misunderstanding the purposes of this discussion.
I don't have any problem; I can hardly see anything as not beautiful, even without the maths.
But normal folk are not so fortunate. How do we trick them into thinking that reductionism is cool?