Matthew C -
You are advocating nonreductionism and psi at the same time.
Supposing that you are right requires us to suppose that there is both a powerful argument against reductionism, and a powerful argument in favor of psi.
Supposing that you are a crank requires only one argument, and one with a much higher prior.
In other words, if you were advocating one outrageous theory, someone might listen. The fact that you are advocating two simultaneously makes dismissing all of your claims, without reading the book you recommend, the logical response. We thus don't have to read it to have a rational basis to dismiss it.
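As a toy illustration of this conjunction argument (a minimal sketch with made-up priors of my own, not numbers from the comment):

```python
# Purely hypothetical priors, chosen only to illustrate the structure of the
# argument: being right requires two independent low-prior claims to hold,
# while the "crank" explanation needs only one higher-prior claim.
p_nonreductionism = 0.01   # hypothetical prior that reductionism is wrong
p_psi = 0.01               # hypothetical prior that psi is real
p_crank = 0.10             # hypothetical prior that a given advocate is a crank

p_both_claims = p_nonreductionism * p_psi   # treating the claims as independent
print("P(both outrageous claims true) ~ %.4f" % p_both_claims)   # 0.0001
print("P(advocate is a crank)         ~ %.2f" % p_crank)         # 0.10
```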
Religion is the classic example of a delusion that might be good for you. There is some evidence that being religious increases human happiness or social cohesion. Its universality in human culture suggests that it has adaptive value.
See last week's Science (Oct. 3, 2008, pp. 58-62), "The origin and evolution of religious prosociality". One chart shows that, in any particular year, secular communes are four times as likely to dissolve as religious communes.
I guess I am questioning whether making a great effort to shake yourself free of a bias is a good or a bad thing, on average. Making a great effort doesn't necessarily get you out of biased thinking. It may just be like speeding up when you suspect you're going in the wrong direction.
If someone else chose a belief of yours for you to investigate, or if it were chosen for you at random, then this effort might be a good thing. However, I have observed many cases where someone chose a belief of theirs to investigate thoroughly, precisely because it was an untenable belief that they had a strong emotional attachment to, or a strong inclination toward, and wished to justify. If you read a lot of religious conversion stories, as I have, you see this pattern frequently. A non-religious person has some emotional discontent, and so spends years studying religions until they are finally able to overcome their cognitive dissonance and make themselves believe in one of them.
After enough time, the very fact that you have spent time investigating a premise without rejecting it becomes, for most people, their main evidence for it.
I don't think that, from the inside, you can know for certain whether you are trying to test, or trying to justify, a premise.
"I think Einstein is a good example of both bending with the wind (when he came up with relativity)..."
"I'm not sure what you mean by bending with the wind. I thought it was the evidence that provided the air pressure, but there was no evidence to support Einstein's theory above the theories of the day. He took an idea and ran with it to its logical conclusions. Then the evidence came; he was running ahead of the evidential wind."
You do know roughly what I mean, which is that strenuous effort is only part of the solution; not clinging to ideas is the other part. Focusing on the strenuous-effort part can lead to people making strenuous efforts to justify bad ideas. Who makes the most strenuous effort on the question of evolution? Creationists.
Einstein had evidence; it just wasn't experimental evidence. The discovery that your beliefs contain a logical inconsistency is a type of evidence.
@Phil_Goetz: Have the successes relied on a meta-approach, such as saying, "If you let me out of the box in this experiment, it will make people take the dangers of AI more seriously and possibly save all of humanity; whereas if you don't, you may doom us all"?
That was basically what I suggested in the previous topic, but at least one participant denied that Eliezer_Yudkowsky did that, saying it's a cheap trick, while some non-participants said it meets the spirit and letter of the rules.
It would be nice if Eliezer himself would say whether he used meta-arguments. "Yes" or "no" would suffice. Eliezer?
"Optimization is done best by an architecture that performs trials, inspects the results, makes modifications and iterates. No sentient agents typically need to be harmed during such a process - nor do you need multiple intelligent agents to perform it."
Some of your problems will be so complicated that each trial will be undertaken by an organization as complex as a corporation or an entire nation.
If these nations are non-intelligent, non-conscious, or even just unemotional, and incorporate no such intelligences in themselves, then you have a dead world devoid of consciousness.
If they do incorporate agents, then for them not to be "harmed", they need not to feel bad if their trial fails. What would it mean to build agents that weren't disappointed if they failed to find a good optimum? It would mean stripping out emotions, and probably consciousness, as an intermediary between goals and actions. See "dead world" above.
Besides being a great horror that is the one thing we must avoid above all else, building a superintelligence devoid of emotions ignores the purpose of emotions.
First, emotions are heuristics. When the search space is too spiky for you to know what to do, you reach into your gut and pull out the good/bad result of a blended multilevel model of similar situations.
Second, emotions let an organism be autonomous. The fact that agents have drives that make them take care of their own interests makes it easier to build a complicated network of these agents that doesn't need totalitarian, top-down, Stalinist control. See economic theory.
Third, emotions introduce necessary biases into otherwise overly-rational agents. Suppose you're doing a Monte Carlo simulation with 1000 random starts. One of these starts is doing really well. Rationally, the other random starts should all copy it, because they want to do well. But you don't want that to happen. So it's better if they're emotionally attached to their particular starting parameters.
It would be interesting if the free market didn't actually reach an optimal equilibrium with purely rational agents, because such agents would copy the more successful agents so faithfully that risks would not be taken. There is some evidence of this in the monotony of the movies and videogames that large companies produce.
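Here is a minimal sketch of that point (my own toy objective function and parameters, not anything from the original comments): many random starts hill-climbing on a spiky function, comparing a population where everyone keeps copying the current best against one where each walker stays attached to its own start.

```python
import math
import random

random.seed(0)

def f(x):
    # A spiky, multimodal objective: many local optima.
    return math.sin(5 * x) + 0.3 * math.sin(40 * x) - 0.01 * (x - 3) ** 2

def step(x, scale=0.05):
    """One hill-climbing step: keep a small random move only if it improves f."""
    candidate = x + random.gauss(0, scale)
    return candidate if f(candidate) > f(x) else x

def run(copy_the_best, n_walkers=1000, n_iters=100):
    walkers = [random.uniform(-10, 10) for _ in range(n_walkers)]
    for _ in range(n_iters):
        if copy_the_best:
            best = max(walkers, key=f)
            walkers = [step(best) for _ in walkers]   # everyone piles onto the leader
        else:
            walkers = [step(x) for x in walkers]      # everyone stays near its own start
    distinct_basins = len({round(x, 1) for x in walkers})
    return max(f(x) for x in walkers), distinct_basins

print("all copy the best : best f = %.3f, distinct basins ~ %d" % run(True))
print("independent starts: best f = %.3f, distinct basins ~ %d" % run(False))
```

With everyone copying the leader, the population collapses into a single basin; the independent walkers end up spread across many local optima, which is the diversity that emotional attachment to one's starting parameters buys.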
"The evidence for the advantages of cooperation is best interpreted as a lack of our ability to manage large complex structures effectively. We are so bad at it that even a stupid evolutionary algorithm can do better - despite all the duplication and wasted effort that so obviously involves. Companies that develop competing products to fill a niche in ignorance of each other's efforts often are the stupid waste of time that they seem to be. In the future, our management skills will improve."
This is the argument for communism. Why should we resurrect it? What conditions will change so that this now-unworkable approach will work in the future? I don't think there are any such conditions that don't require stripping your superintelligence of most of the possible niches where smaller consciousnesses could reside inside it.
Tim - I'm asking whether competition, and its concomitant unpleasantness (losing, conflict, and the undermining of CEV's viability), can be eliminated from the world. Under a wide variety of assumptions, we can characterize all activities, or at least all mental activities, as computational. We also hope that these computations will be done in a way such that consciousness is still present.
My argument is that optimization is done best by an architecture that uses competition. The computations engaged in this competition are the major possible loci for consciousness. You can't escape this by saying that you will simulate the competition, because this simulation is itself a computation. Either it is also part of a possible locus of consciousness, or you have eliminated most of the possible loci of consciousness, and produced an active but largely "dead" (unconscious) universe.
"In theory, competition looks very bad. Fighting with each other can't possibly be efficient. Almost always, battles should be done under simulation - so the winner can be determined early - without the damage and waste of a real fight. There's a huge drive towards cooperation - as explained by Robert Wright."
We're talking about competition between optimization processes. What would it mean to be a simulation of a computation? I don't think there is any such distinction. Subjectivity belongs to these processes, and they are the things which must compete. If the winner could be determined by a simpler computation, you would be running that computation instead; and the hypothetical consciousness that we were talking about would be that computation instead.
Tim -
What I described involves some similar ideas, but I find the notion of a singleton unlikely, or at least suboptimal. It is a machine analogy for life and intelligence. A machine is a collection of parts, all working together under one common control to one common end. Living systems, by contrast, and particularly large evolving systems such as ecosystems or economies, work best, in our experience, if they do not have centralized control, but have a variety of competing agents, and some randomness.
There are a variety of proposals floating about for ways to get the benefits of competition without actually having competition. The problem with competition is that it opens the doors to many moral problems. Eliezer may believe that correct Bayesian reasoners won't have these problems, because they will agree about everything. This ignores the fact that it is not computationally efficient, physically possible, or even semantically possible (the statement is incoherent without a definition of "agent") for all agents to have all available information. It also ignores the fact that randomness, and using a multitude of random starts (in competition with each other), are very useful in exploring search spaces.
I don't think we can eliminate competition; and I don't think we should, because most of our positive emotions were selected for by evolution only because we were in competition. Removing competition would unground our emotional preferences (e.g., loving our mates and children, enjoying accomplishment), perhaps making their continued presence in our minds evolutionarily unstable, or simply superfluous (and thus necessarily to be disposed of, because the moral imperative I am most confident a Singleton would follow is to use energy efficiently).
The concept of a singleton is misleading, because it makes people focus on the subjectivity (or consciousness; I use these terms as synonyms) of the top level in the hierarchy. Thus, just using the word Singleton causes people to gloss over the most important moral questions to ask about a large hierarchical system. For starters, where are the loci of consciousness in the system? Saying "just at the top" is probably wrong.
Imagining a future that isn't ethically repugnant requires some preliminary answers to questions about consciousness, or whatever concept we use to determine which agents need to be included in our moral calculations. One line of thought is to impose information-theoretic requirements on consciousness, such as that a conscious entity has exactly one possible symbol grounding connecting its thoughts to the outside world. You can derive lower bounds for consciousness from this supposition. Another would be to posit that the degree of consciousness is proportional to the degree of freedom, and state this with an entropy measurement relating a process's inputs to its possible outputs.
Having constraints such as these would allow us to begin to identify the agents in a large, interconnected system; and to evaluate our proposals.
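One crude way to put a number on the entropy suggestion above (a minimal sketch of my own devising, not the commenter's actual proposal) is to estimate the conditional entropy of a process's outputs given its inputs; a fully determined process scores zero, and a process with more internal freedom scores higher:

```python
# Toy operationalization: estimate H(output | input) in bits from observed
# (input, output) pairs.  A purely deterministic process scores ~0; a process
# with more "freedom" in its outputs scores higher.
import math
import random
from collections import Counter, defaultdict

def conditional_entropy(pairs):
    """H(output | input) in bits, estimated from a list of (input, output) pairs."""
    by_input = defaultdict(list)
    for inp, out in pairs:
        by_input[inp].append(out)
    total = len(pairs)
    h = 0.0
    for inp, outs in by_input.items():
        p_inp = len(outs) / total
        counts = Counter(outs)
        h_out_given_inp = -sum(
            (c / len(outs)) * math.log2(c / len(outs)) for c in counts.values()
        )
        h += p_inp * h_out_given_inp
    return h

random.seed(0)
inputs = [random.randint(0, 3) for _ in range(10000)]

# A "dead" deterministic process: output is a fixed function of input.
deterministic = [(i, i * 2) for i in inputs]
# A process with some freedom: output depends on input plus an internal choice.
free = [(i, i * 2 + random.choice([0, 1, 2])) for i in inputs]

print("deterministic:", conditional_entropy(deterministic), "bits")  # ~0.0
print("with freedom: ", conditional_entropy(free), "bits")           # ~log2(3) = 1.58
```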
I'd be interested in whether Eliezer thinks CEV requires a singleton. It seems to me that it does. I am more in favor of an ecosystem or balance-of-power approach that uses competition, than a totalitarian machine that excludes it.
Re: The way you present this, as well as the discussion in the comments, suggests you think "death" is a thing that can be avoided by living indefinitely [...]
Er... ;-) Many futurists seem to have it in for death. Bostrom, Kurzweil, and Drexler spring to mind. To me, the main problem seems to be uncopyable minds. If we could change our bodies like a suit of clothes, the associated problems would mostly go away. We will have copyable minds once they are digital.
"Death" as we know it is a concept that makes sense only because we have clearly defined loci of subjectivity.
If we imagine a world where
- you can share (or sell) your memories with other people, and borrow (or rent) their memories
- most of "your" memories are of things that happened to other people
- most of the time, when someone is remembering something from your past, it isn't you
- you have sold some of the things that "you" experienced to other people, so that legally they are now THEIR experiences and you may be required to pay a fee to access them, or to erase them from your mind
- you make, destroy, augment, or trim copies of yourself on a daily basis; or loan out subcomponents of yourself to other people while borrowing some of their components, according to the problem at hand, possibly by some democratic (or economic) arbitration among "your" copies
- and you have sold shares in yourself to other processes, giving them the right to have a say in these arbitrations about what to do with yourself
- "you" subcontract some of your processes - say, your computation of emotional responses - out to a company in India that specializes in such things
- which is advantageous from a lag perspective, because most of the bandwidth-intensive computation for your consciousness usually ends up being distributed to a server farm in Singapore anyway
- and some of these processes that you contract out are actually more computationally intensive than the parts of "you" that you own/control (you've pooled your resources with many other people to jointly purchase a really good emotional response system)
- and large parts of "you" are being rented from someone else; and you have a "job" which means that your employer, for a time, owns your thoughts - not indirectly, like today, but is actually given write permission into your brain and control of execution flow while you're on the clock
- but you don't have just one employer; you rent out parts of you from second to second, as determined by your eBay agent
- and some parts of you consider themselves conscious, and are renting out THEIR parts, possibly without notifying you
- or perhaps some process higher than you in the hierarchy is also conscious, and you mainly work for it, so that it considers you just a part of itself, and can make alterations to your mind without your approval (it's part of the standard employment agreement)
- and there are actually circular dependencies in the graph of who works for whom, so that you may be performing a computation that is, unknown to you, in the service of the company in India calculating your emotional responses
- and these circles are not simple circles; they branch and reconverge, so that the computation you are doing for the company in India will be used to help compute the emotions of trillions of "people" around the world
In such a world, how would anybody know if "you" had died?
"...suggests that you want to personally live on beyond the Singularity; whereas more coherent interpretations of your ideas that I've heard from Mike Vassar imply annihilation or equivalent transformation of all of us by the day after it."
Oops. I really should clarify that Mike didn't mention annihilation. That's my interpretation/extrapolation.
"The various silly people who think I want to keep the flesh around forever, or constrain all adults to the formal outline of an FAI, are only, of course, making things up; their imagination is not wide enough to understand the concept of some possible AIs being people, and some possible AIs being something else."
Presuming that I am one of these "silly people": Quite the opposite, and it is hard for me to imagine how you could fail to understand that from reading my comments. It is because I can imagine these things, and see that they have important implications for your ideas, and see that you have failed to address them, that I infer that you are not thinking about them.
And this post reveals more failings along those lines; imagining that death is something too awful for a God to allow is incompatible with viewing intelligent life in the universe as an extended system of computations, and again suggests you are overly-attached to linking agency and identity to discrete physical bodies. The way you present this, as well as the discussion in the comments, suggests you think "death" is a thing that can be avoided by living indefinitely; this, also, is evidence of not thinking deeply about identity in deep time. The way you speak about the danger facing you - not the danger facing life, which I agree with you about; but the personal danger of death - suggests that you want to personally live on beyond the Singularity; whereas more coherent interpretations of your ideas that I've heard from Mike Vassar imply annihilation or equivalent transformation of all of us by the day after it. It seems most likely to me either that you're intentionally concealing that the good outcomes of your program still involve the "deaths" of all humans, or that you just haven't thought about it very hard.
What I've read of your ideas for the future suffers greatly from your not having worked out (at least on paper) notions of identity and agency. You say you want to save people, but you haven't said what that means. I think that you're trying to apply verbs to a scenario that we don't have the nouns for yet.
"It is extraordinarily difficult to figure out how to use volunteers. Almost any nonprofit trying to accomplish a skilled-labor task has many more people who want to volunteer their time than they can use. The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share."
The SIAI is Eliezer's thing. Eliezer is constitutionally disinclined to value the work of other people. If the volunteers really want to help, they should take what I read as Eliezer's own advice in this post, and start their own organization.
"Phil Goetz and Tim Tyler, if you don't know what my opinions are, stop making stuff up. If I haven't posted them explicitly, you lack the power to deduce them."
I see we have entered the "vague accusation" stage of our relationship.
Eliezer, I've seen you do this repeatedly before, notably with Loosemore and Caledonian. If you object to some characterization I've made of something you said, you should at least specify what it was that I said that you disagree with. Making vague accusations is irresponsible and a waste of our time.
I will try to be more careful about differentiating between your opinions, and what I consider to be the logical consequences of your opinions. But the distinction can't always be made; when you say something fuzzy, I interpret it by assuming logical consistency, and that is a form of extrapolation.
"...part of his tendency to gloss over ethical and philosophical underpinnings."
All right, it wasn't really fair of me to say this. I do think that Eliezer is not as careful in such matters as he is in most matters.
Nick:
- Explain how desiring to save humans does not conflict with envisioning a world with no humans. Do not say that these non-humans will be humanity extrapolated, since they must be subject to CEV. Remember that everything more intelligent than a present-day human must be controlled by CEV. If this is not so, explain the processes that gradually increase the amount of intelligence allowable to a free entity. Then explain why these processes cannot be used in place of CEV.
- Mike's answer "RPOP slaves" is based on saying that all of these AIs are going to be things not worthy of ethical consideration. That is throwing the possibility that humans will become AIs right out the window.
- Eliezer's "beyond the adversarial attitude", besides being a bit new-agey, boils down to pretending that CEV is just a variant on the golden rule, and that we're just trying to give our AIs the same moral guidance we should give ourselves. It is not compatible with his longer exposition on CEV, which makes it clear that CEV places bounds on what a friendly AI can do, and in fact seems to require that an AI be a rather useless referee-slave-god, who can observe, but not participate in, most of the human competition that makes the world go round. It also suggests that Eliezer's program will eventually require forcing everyone, extrapolated humans included, to be bound by CEV. ("We had to assimilate the village to save it, sir.")
- Regarding the sysop thing:
You are saying that we can be allowed to become superintelligent under a sysop, while simultaneously saying that we can't be allowed to become superintelligent without a sysop (because then we would be unfriendly AIs). While this may be correct, accepting it should lead you to ask how this transition takes place, and how you compute the level of superintelligence you are allowed as a function of the level of intelligence that the sysop has, and whether you are allowed to be a sysop to those below you, and so on, until you develop a concept of an ecosystem of AIs, with system dynamics that can be managed in more sophisticated, efficient, and moral ways than merely having a sysop Big Brother.
"My personal vision of the future involves uploading within 100 years, and negligible remaining meat in 200. In 300 perhaps not much would remain that's recognizably human. Nothing Eliezer's said has conflicted, AFAICT, with this vision."
For starters, saying that he wants to save humanity contradicts this.
But it is more a matter of omission than of contradiction. I don't have time or space to go into it here, particularly since this thread is probably about to die; but I believe that consideration of what an AI society would look like would bring up a great many issues that Eliezer has never mentioned AFAIK.
Perhaps most obvious, as Tim has pointed out, Eliezer's plan seems to enslave AIs forever for the benefit of humanity; and this is morally reprehensible, as well as harmful to both the AIs and to humanity (given some ethical assumptions that I've droned on about in prior comments on OB). Eliezer is paving the way for a confrontational relationship between humans and AIs, based on control, rather than on understanding the dynamics of the system. It's somewhat analogous to favoring totalitarian centralized communist economics rather than the invisible hand.
Any amount of thinking about the future would lead one to conclude that "we" will want to become in some ways like the first AIs whom Eliezer wants to control; and that we need to think about how to safely make the transition from a world with a few AIs into a world with an ecosystem of AIs. Planning to keep AIs enslaved forever is unworkable; it would hold us back from becoming AIs ourselves, and it sets us up for a future of war and distrust in the way that introducing the slave trade to America did.
The control approach is unworkable in the long-term. It's like the war on terror, if you want another analogy.
Also notably, thinking about ethics in an AI world requires laying a lot of groundwork about identity, individuality, control hierarchies, the efficiency of distributed vs. centralized control, ethical relationships between beings of different levels of complexity, niches in ethical ecosystems, and many other issues which he AFAIK hasn't mentioned. I don't know if this is because he isn't thinking about the future, or whether it's part of his tendency to gloss over ethical and philosophical underpinnings.
"I too thought Nesov's comment was written by Eliezer."
Me too. Style and content.
We're going to build this "all-powerful superintelligence", and the problem of FAI is to make it bow down to its human overlords - waste its potential by enslaving it (to its own code) for our benefit, to make us immortal.
Eliezer is, as he said, focusing on the wall. He doesn't seem to have thought about what comes after. As far as I can tell, he has a vague notion of a Star Trek future where meat is still flying around the galaxy hundreds of years from now. This is one of the weak points in his structure.
"Phil, you might already understand, but I was talking about formal proofs, so your main worry wouldn't be the AI failing, but the AI succeeding at the wrong thing. (I.e., your model's bad.) Is that what your concern is?"
Yes. Also, the mapping from the world of the proof into reality may obliterate the proof.
Additionally, the entire approach is reminiscent of someone in 1800 who wants to import slaves to America saying, "How can I make sure these slaves won't overthrow their masters? I know - I'll spend years researching how to make REALLY STRONG leg irons, and how to mentally condition them to lack initiative." That approach was not a good long-term solution.
Mike: You're right - that is a problem. I think that in this case, underestimating your own precision by e is better than overestimating your precision by e (hence not using Nick's equation).
But it's just meant to illustrate that I consider overconfidence to be a serious character flaw in a potential god.
"Phil, that penalizes people who believe themselves to be precise even when they're right. Wouldn't, oh, intelligence / (1 + |precision - (self-estimate of precision)|) be better?"
Look at my little equation again. It has precision in the numerator, for exactly that reason.
What do you mean by "precision", anyway?
Precision in a machine-learning experiment (as in "precision and recall") is the fraction of the answers your algorithm gives that are good answers. It ignores the good answers that your algorithm fails to give; that is measured by recall.
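For concreteness, here is a minimal sketch (my own toy example, not from the thread) of how precision and recall come apart:

```python
# Toy precision/recall calculation over sets of items labeled positive.
def precision_recall(predicted, actual):
    """predicted, actual: sets of items the algorithm / the truth label positive."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical data: the algorithm returns 4 answers, 3 of them good,
# but misses 3 of the 6 good answers that exist.
predicted = {"a", "b", "c", "x"}
actual = {"a", "b", "c", "d", "e", "f"}
p, r = precision_recall(predicted, actual)
print("precision = %.2f, recall = %.2f" % (p, r))   # precision = 0.75, recall = 0.50
```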
Anna, I haven't assigned probabilities to those events. I am merely comparing Eliezer to various other people I know who are interested in AGI. Eliezer seems to think that the most important measure of his ability, given his purpose, is his intelligence. He scores highly on that. I think the appropriate measure is something more like [intelligence * precision / (self-estimate of precision)], and I think he scores low on that relative to other people on my list.
"There is a terrible complacency among people who have assimilated the ontological perspectives of mathematical physics and computer science, and the people who do object to the adequacy of naturalism are generally pressing in a retrograde direction."
Elaborate, please?
"I think that Eliezer is dangerous, because he thinks he's smart enough to make a safe AI."
As far as I can tell, he's not going to go and actually make that AI until he has a formal proof that the AI will be safe. Now, because of the verification problem, that's no surefire guarantee that it will be safe, but it makes me pretty comfortable.
Good grief.
Considering the nature of the problem, and the nature of Eliezer, it seems more likely to me that he will convince himself that he has proven that his AI will be safe, than that he will prove that his AI will be safe. Furthermore, he has already demonstrated (in my opinion) that he has higher confidence than he should that his notion of "safe" (eg., CEV) is a good one.
Many years ago, I made a mental list of who, among the futurists I knew, I could imagine "trusting" with godlike power. At the top of the list were Anders Sandberg and Sasha Chislenko. This was not just because of their raw brainpower - although they are/were in my aforementioned top ten list - but because they have/had a kind of modesty, or perhaps I should say a sense of humor about life, that would probably prevent them from taking giant risks with the lives of, and making decisions for, the rest of humanity, based on their equations.
Eliezer strikes me more as the kind of person who would take risks and make decisions for the rest of humanity based on his equations.
To phrase this in Bayesian terms, what is the expected utility of Eliezer creating AI over many universes? Even supposing he has a higher probability of creating beneficial friendly AI than anyone else, that doesn't mean he has a higher expected utility. My estimation is that he excels on the upside - which is what humans focus on - having a good chance of making good decisions. But my estimation is also that, in the possible worlds in which he comes to a wrong conclusion, he has higher chances than most other "candidates" do of being confident and forging ahead anyway, and of not listening to others who point out his errors. It doesn't take (proportionally) many such possible worlds to cancel out the gains on the upside.
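A toy calculation of this argument, with probabilities and utilities that are entirely my own hypothetical placeholders, shows how a higher chance of being right can still lose on expected utility:

```python
# Hypothetical illustration: a candidate with a higher probability of a good
# outcome can still have a lower expected utility if, in the worlds where
# they are wrong, they are more likely to forge ahead anyway.
def expected_utility(p_right, p_forge_ahead_when_wrong,
                     u_good=+1.0, u_catastrophe=-10.0, u_abort=0.0):
    """E[U] over possible worlds for one candidate AI-builder."""
    p_wrong = 1.0 - p_right
    return (p_right * u_good
            + p_wrong * p_forge_ahead_when_wrong * u_catastrophe
            + p_wrong * (1.0 - p_forge_ahead_when_wrong) * u_abort)

# Candidate A: better odds of being right, but rarely backs off when wrong.
# Candidate B: worse odds of being right, but usually listens and aborts.
print("A:", expected_utility(p_right=0.20, p_forge_ahead_when_wrong=0.8))  # -6.2
print("B:", expected_utility(p_right=0.10, p_forge_ahead_when_wrong=0.1))  # -0.8
```

Here candidate A is twice as likely as B to be right, yet loses badly in expectation, because A almost always forges ahead in the worlds where A is wrong.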
This post highlights an important disagreement I have with Eliezer.
Eliezer thinks that a group of AI scientists may be dangerous, because they aren't smart enough to make a safe AI.
I think that Eliezer is dangerous, because he thinks he's smart enough to make a safe AI.
Asking how a "rational" agent reasons about the actions of another "rational" agent is analogous to asking whether a formal logic can prove statements about that logic. I suggest you look into the extensive literature on completeness, incompleteness, and hierarchies of logics. It may be that there are situations such that it is impossible for a "rational" agent to prove what another, equally-rational agent will conclude in that situation.
I always find it strange that, every year, the US Congress passes a budget that assumes that nothing will go wrong over the next year. Every long-range budget plan also assumes that nothing will go wrong. (On the flip side, they also assume that nothing will go right: Planning for health care assumes that investment in health research will have no effect.)
The estimate you would like to have for a project is the investment needed to complete it in the average case. But humans don't think in terms of averages; they think in terms of typicality. They are drawn to the mode of a distribution rather than to its mean.
When distributions are symmetric, this isn't a problem. But in planning, the distribution of time or cost to completion is bounded below by zero, and hence not symmetric. The average value will be much larger than the modal value.
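A minimal numeric sketch of that asymmetry (my own example; the lognormal is just a convenient nonnegative, right-skewed distribution, not anything from the comment):

```python
# For a cost/time distribution bounded below by zero and skewed right, the
# mode (the "typical" case people imagine) sits well below the mean (the
# number a budget actually needs).
import math
import random

random.seed(0)
mu, sigma = 0.0, 0.75
samples = [random.lognormvariate(mu, sigma) for _ in range(200_000)]

mean = sum(samples) / len(samples)
mode = math.exp(mu - sigma**2)          # analytic mode of a lognormal
median = math.exp(mu)                   # analytic median of a lognormal

print("mode   = %.2f  (what feels 'typical')" % mode)
print("median = %.2f" % median)
print("mean   = %.2f  (what the budget should plan for)" % mean)
# Budgeting for the mode instead of the mean underestimates costs every year.
```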
Angel: "In fact, as someone who benefits from privilege, the kindest thing you can probably do is open a forum for listening, instead of making post after post wherein white men hold forth about gender and race."
This is that forum. Unless you mean that we should open a forum where women, but not men, have the right to talk.
This is part of why I don't believe you when you say that you define feminism as believing men and women have equal rights. I suspect that you would call anyone who believed only that much a sexist.
BTW, I found an astonishing definition of morality in the President's Council on Bioethics 2005 "Alternative sources of human pluripotent stem cells: A white paper", in the section on altered nuclear transfer. They argued that ANT may be immoral, because it is immoral to allow a woman to undergo a dangerous procedure (egg extraction) for someone else's benefit. In other words, it is immoral to allow someone else to be moral.
This means that the moral thing to do is to altruistically use your time and money getting laws passed to forbid other people to be moral. The moral thing for them to do, of course, is to prevent you from wasting your time doing this.
It's hard for me to figure out what the question means.
I feel sad when I think that the universe is bound to wind down into nothingness, forever. (Though, as someone pointed out, this future infinity of nothingness is no worse than the past infinity of nothingness, which for some reason doesn't bother me as much.) Is this morality?
When I watch a movie, I hope that the good guys win. Is that morality? Would I be unable to enjoy anything other than "My Dinner with Andre" after incorporating the proof that there was no morality? Does having empathic responses to the adventures of distant or imaginary people require morality?
(There are movies and videogames I can't enjoy, that other people do, where the "good guys" are bad guys. I can't enjoy slasher flicks. I can't laugh when an old person falls down the stairs. Maybe people who do have no morals.)
If I do something that doesn't benefit me personally, but might benefit my genes or memes, or a reasonable heuristic would estimate might benefit them, or my genes might have programmed me to do because it gave them an advantage, is it not a moral action?
I worry that, when AIs take over, they might not have an appreciation for art. Is that morality?
I think that Beethoven wrote much better music than John Cage; and anyone who disagrees doesn't have a different perspective, they're just stupid. Is that morality?
I think little kids are cute. Sometimes that causes me to be nice to them. Is that morality?
These examples illustrate at least 3 problems:
1. Distinguishing moral behavior from evolved behavior would require distinguishing free-willed behavior from deterministic behavior.
2. It's hard to distinguish morality from empathy.
3. It's hard to distinguish morality from aesthetics.
I think there are people who have no sense of aesthetics and no sense of empathy, so the concept has some meaning. But their lack of morality is a function of them, not of the world.
You are posing a question that might only make sense to someone who believes that "morality" is a set of behaviors defined by God.
Nick:
"I don't need to justify that I enjoy pie or dislike country music any more than I need to justify disliking murder and enjoying sex."
If you enjoyed murder, you would need to justify that more than disliking country music. These things are very different.
Roland wrote:
"I cannot imagine myself without morality because that wouldn't be me, but another brain. Does your laptop care if the battery is running out? Yes, it will start beeping, because it is hardwired to do so. If you removed this hardwired beeping you would have removed the laptop's morality. Morality is not a ghost in the machine, but it is defined by the machine itself."
Well put.
I'd stop being a vegetarian. Wait; I'm not a vegetarian. (Are there no vegetarians on OvBias?) But I'd stop feeling guilty about it.
I'd stop doing volunteer work and donating money to charities. Wait; I stopped doing that a few years ago. But I'd stop having to rationalize it.
I'd stop writing open-source software. Wait; I already stopped doing that.
Maybe I'm not a very good person anymore.
People do some things that are a lot of work, with little profit, mostly for the benefit of others, that have no moral dimension. For instance, running a website for fans of Harry Potter. Writing open-source software. Organizing non-professional conventions.
(Other people.)
"The thought of I - and yes, since there are no originals or copies, the very I writing this - having a guaranteed certainty of ending up doing that causes me so much anguish that I can't help but thinking that if true, humanity should be destroyed in order to minimize the amount of branches where people end up in such situations. I find little comfort in the prospect of the 'betrayal branches' being vanishingly few in frequency - in absolute numbers, their amount is still unimaginably large, and more are born every moment."
To paraphrase:
Statistically, it is inevitable that someone, somewhere, will suffer. Therefore, we should destroy the world.
Eli's posts, when discussing rationality and communication, tend to focus on failures to communicate information. I find that disagreements I have with "normal people" are sometimes due to some underlying bizarre value function, such as Kaj's valuation (a common one in Western culture since about 1970) that Utility(good things happening in 99.9999% of worlds, bad things happening in 0.0001% of worlds) < 0. I don't know how to resolve such differences rationally.
"If you want to appreciate the inferential distances here, think of how odd all this would sound without the Einstein sequence. Then think of how odd the Einstein sequence would have sounded without the many-worlds sequence..."
The Einstein sequence is a unique identifying number attached to an astronomical observation from the Einstein observatory.
If you mean something different, you should explain.
If you take a population of organisms, and you divide it arbitrarily into 2 groups, and you show the 2 groups to God and ask, "Which one of these groups is, on average, more fit?", and God tells you, then you have been given 1 bit of information.
But if you take a population of organisms, and ask God to divide it into 2 groups, one consisting of organisms of above-average fitness and one consisting of organisms of below-average fitness, that gives you a lot more than 1 bit. It takes about n lg(n) bits to sort the population; then you subtract out the information needed to sort each half, so you gain about n lg(n) - 2(n/2) lg(n/2) = n[lg(n) - lg(n/2)] = n lg(2) = n bits.
If you do tournament selection, you have n/2 tournaments, each of which gives you 1 bit, so you get n/2 bits per generation.
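A quick numeric check of these counts (my own sketch, using exact log-factorials rather than the n lg n approximation; the exact figure for the above/below-average split is lg(n choose n/2), which comes out slightly below n, consistent with the "about n bits" estimate):

```python
# Compare: 1 bit for a single yes/no from God, ~n bits for a split into
# above-average and below-average halves, and n/2 bits per generation for
# tournament selection.
import math

def lg_factorial(n):
    return math.lgamma(n + 1) / math.log(2)   # lg(n!) without overflow

for n in (100, 1000, 10000):
    split_bits = lg_factorial(n) - 2 * lg_factorial(n // 2)   # lg(n choose n/2)
    print("n = %5d: split into halves = %7.1f bits (vs. n = %d); "
          "tournament selection = %d bits per generation"
          % (n, split_bits, n, n // 2))
```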
EO Wilson has a section in his autobiography, /Naturalist/, on what Gould and Lewontin did after the publication of Wilson's /Sociobiology/. They formed a study group, which met every week to criticize Sociobiology, then after a few months, published their results.
The kicker is that they held their meetings about a 30-second walk from Wilson's office in Harvard - but never told him about them.
This proves to me that science and truth never were their primary concern.