Comments

Comment by Richard_Hollerith on Three Worlds Decide (5/8) · 2009-02-03T21:46:00.000Z · LW · GW

Eliezer's novella provides a vivid illustration of the danger of promoting what should have stayed an instrumental value to the status of a terminal value. Eliezer likes to refer to this all-too-common mistake as losing purpose. I like to refer to it as adding a false terminal value.

For example, eating babies was a valid instrumental goal when the Babyeaters were at an early stage of technological development. It is not IMHO evil to eat babies when the only alternative is chronic severe population pressure which will eventually lead either to your extinction or to the disintegration of your agricultural civilization, with a reversion to a more primitive existence in which technological advancement is slow, uncertain and easily reversed by things like natural disasters.

But then babyeating became an end in itself.

By clinging to the false terminal value of babyeating, the Babyeaters caused their own extinction even though at the time of their extinction they had an alternative means of preventing an explosion of their population (namely, editing their own genome so that fewer babies are born: if they did not have the tech to do that, they could have asked the humans or the Superhappies for it).

In the same way, the humans in the novella and the Superhappies are the victims of a false terminal value, which we might call "hedonic altruism": the goal of extinguishing suffering wherever it exists in the universe. Eliezer explains some of the reasons for the great instrumental value of becoming motivated by the suffering of others in Sympathetic Minds in the passage that starts with "Who is the most formidable, among the human kind?" Again, just because something has great instrumental value is no reason to promote it to a terminal value; when circumstances change, it may lose its instrumental value; and a terminal value once created tends to persist indefinitely because by definition there is no criterion by which to judge a system of terminal values.

I hope that human civilization will abandon the false terminal value of hedonic altruism before it spreads to the stars. I.e., I hope that the human dystopian future portrayed in the novella can be averted.

Comment by Richard_Hollerith on The Baby-Eating Aliens (1/8) · 2009-02-01T20:00:00.000Z · LW · GW

Anna, it takes very little effort to rattle off a numerical probability -- and then most readers take away an impression (usually false) of precision of thought.

At the start of Causality, Judea Pearl explains why humans (should and usually do) use "causal" concepts rather than "statistical" ones. Although I do not recall whether he comes right out and says it, I definitely took away from Pearl the heuristic that stating your probability about some question is basically useless unless you also state the calculation that led to the number. I do recall that stating a number is clearly what Pearl defines as a statistical statement rather than a causal statement. What you should usually do instead of stating a probability estimate is to share with your readers the parts of your causal graph that most directly impinge on the question under discussion.

So, unless Eliezer goes on to list one or more factors that he believes would cause a human to convert to or convert away from my system of valuing things (namely, goal system zero or GSZ) or one or more factors that he believes would tend to prevent other factors from causing a conversion to or away from GSZ, I am going to go on believing that Eliezer has probably not reflected enough on the question for his numbers to be worth anything and that he is just blowing me off.

In summary, I tend to think that most uses of numerical probabilities on these pages have been useless. On this particular question I am especially skeptical because Eliezer has exhibited signs (which I am prepared to describe if asked) that he has not reflected enough on goal system zero to understand it well enough to make any numerical probability estimate about it.

I am busy with an urgent matter today, so I might take 24 hours to reply to replies to this.

Comment by Richard_Hollerith on The Baby-Eating Aliens (1/8) · 2009-02-01T18:24:00.000Z · LW · GW

Instead of describing my normative reasoning as guided by the criterion of non-arbitrariness, I prefer to describe it as guided by the criterion of minimizing or pessimizing algorithmic complexity. And that is a reply to steven's question right above: there is nothing unstable or logically inconsistent about my criterion for the same reason that there is nothing unstable about Occam's Razor.

Roko BTW had a conversion experience and now praises CEV and the Fun Theory sequence.

Comment by Richard_Hollerith on The Baby-Eating Aliens (1/8) · 2009-02-01T10:00:00.000Z · LW · GW

Let me clarify that what horrifies me is the loss of potential. Once our space-time continuum becomes a bunch of supermassive black holes, it remains that way till the end of time. It is the condition of maximum physical entropy (according to Penrose). Suffering on the other hand is impermanent. Ever had a really bad cold or flu? One day you wake up and it is gone and the future is just as bright as it would have been if the cold had never been.

And pulling numbers (80%, 95%) out of the air on this question is absurd.

Comment by Richard_Hollerith on The Baby-Eating Aliens (1/8) · 2009-01-31T22:41:00.000Z · LW · GW
Richard, I'd take the black holes of course.

As I expected. Much of what you (Eliezer) have written entails it, but it still gives me a shock because piling as much ordinary matter as possible into supermassive black holes is the most evil end I have been able to imagine. In contrast, suffering is merely subjective experience and consequently, according to my way of assigning value, unimportant.

Transforming ordinary matter into mass inside a black hole is a very potent means to create free energy, and I can imagine applying that free energy to ends that justify the means. But to put ordinary matter and radiation into black holes massive enough that the mass will never come back out as Hawking radiation as an end in itself -- horror!

Comment by Richard_Hollerith on What I Think, If Not Why · 2008-12-13T13:03:00.000Z · LW · GW

Question for Eliezer. If the human race goes extinct without leaving any legacy, then according to you, any nonhuman intelligent agent that might come into existence will be unable to learn about morality?

If your answer is that the nonhuman agent might be able to learn about morality if it is sentient then please define "sentient". What is it about a paperclip maximizer that makes it nonsentient? What is it about a human that makes it sentient?

Comment by Richard_Hollerith on What I Think, If Not Why · 2008-12-13T12:39:00.000Z · LW · GW

Speaking of compressing down nicely, that is a nice and compressed description of humanism. Singularitarians, question humanism.

Comment by Richard_Hollerith on Which Parts Are "Me"? · 2008-10-23T04:57:00.000Z · LW · GW
trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it.

I cannot recall ever trying to delete or even control a large part of myself, so no opinion there, but "distancing ourselves from ourselves" sounds a lot like developing what some have called an observing self, which is probably a very valuable thing for a person wishing to make a large contribution to the world IMHO.

A person worried about not feeling alive enough would probably get more bang for his buck by avoiding exposure to mercury, which binds permanently to serotonin receptors, causing a kind of deadening.

Comment by Richard_Hollerith on Crisis of Faith · 2008-10-15T20:39:00.000Z · LW · GW

s/werewolf/Easter bunny/ IMHO.

Comment by Richard_Hollerith on How Many LHC Failures Is Too Many? · 2008-10-02T15:43:00.000Z · LW · GW
Did that make sense?

Yes, and I can see why you would rather say it that way.

My theory is that most of those who believe quantum suicide is effective assign negative utility to suffering and also assign a negative utility to death, but knowing that they will continue to live in one Everett branch removes the sting of knowing (and consequently the negative utility of the fact) that they will die in a different Everett branch. I am hoping Cameron Taylor or another commentator who thinks quantum suicide might be effective will let me know whether I have described his utility function.
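
To make my guess concrete, here is one way to write down the utility function I am attributing to them (the symbols and the exact structure are mine, offered only as a sketch of what I am guessing, not as anything Cameron Taylor has endorsed):

V = sum over Everett branches i of w_i * U(outcome in branch i), where w_i is the weight of branch i.

Ordinarily U(a branch in which I suffer) = -s and U(a branch in which I die) = -d, with s, d > 0. The rule I am guessing at adds: conditional on at least one branch in which I survive, a branch in which I die is scored 0 rather than -d. Under that rule, a quantum-suicide device that kills me in exactly the suffering branches replaces contributions of w_i * (-s) with w_i * 0, raising V; for someone who keeps U(death) = -d in every branch, the device merely trades -s for -d, which is an improvement only if death is judged less bad than the suffering.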

Comment by Richard_Hollerith on How Many LHC Failures Is Too Many? · 2008-10-02T11:35:00.000Z · LW · GW

OK, my previous comment was too rude. I won't do it again, OK?

Rather than answer your question about fitness, let me take back what I said and start over. I think you and I have different terminal values.

I am going to assume -- and please correct me if I am wrong -- that you assign an Everett branch in which you painlessly wink out of existence a value of zero (neither desirable nor undesirable) and that consequently, under certain circumstances (e.g., at least one alternative Everett branch remains in which you survive) you would prefer painlessly winking out of existence to enduring pain.

My objection to this talk of destroying the universe in response to a terrorism incident, etc, is that the people whose terminal values are served by that outcome (such as, I am assuming, you) share the universe with people whose terminal values assign a negative value to that outcome (such as me). By using this method of increasing your utility you impose severe negative utility on me.

Note that if you engage in ordinary quantum suicide then my circumstances remain materially the same in both Everett branches, and the objection I just described does not apply.

Comment by Richard_Hollerith on How Many LHC Failures Is Too Many? · 2008-10-01T13:15:00.000Z · LW · GW
At some point the most profitable avenue of research in the pursuit of friendly AI would become the logistics of combining a mechanism for quantum suicide with a random number generator.

Usually learning new true information increases a person's fitness, but learning about the many-worlds interpretation seems to decrease the fitness of many who learn it.

Comment by Richard_Hollerith on Friedman's "Prediction vs. Explanation" · 2008-09-30T17:18:00.000Z · LW · GW

Whoever (E or Friedman) chose the title, "Prediction vs. Explanation", was probably thinking along the same lines.

Comment by Richard_Hollerith on Friedman's "Prediction vs. Explanation" · 2008-09-30T17:14:00.000Z · LW · GW

The way science is currently done, experimental data that the formulator of the hypothesis did not know about is much stronger evidence for a hypothesis than experimental data he did know about.

A hypothesis formulated by a perfect Bayesian reasoner would not have that property, but hypotheses from human scientists do, and I know of no cost-effective way to stop human scientists from generating the effect. Part of the reason human scientists do it is because the originator of a hypothesis is too optimistic about the hypothesis (and this optimism stems in part from the fact that being known as the originator of a successful hypothesis is very career-enhancing), and part of the reason is because a scientist tends to stop searching for hypotheses once he has one that fits the data (and I believe this has been called motivated stopping on this blog).

Most of the time, these human biases will swamp the other considerations mentioned so far in these comments (except the one I mention below). Consequently, the hypothesis advanced by Scientist 1 is more probable.
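
A toy Bayesian illustration of why I weight prediction over explanation (the numbers are made up purely for illustration): let H be a hypothesis and let "fit" be the event that H matches a given batch of experimental data. For a scientist who committed to H before seeing that batch, P(fit | H false) is small, say 0.05, so a fit multiplies the odds in favor of H by roughly P(fit | H true) / P(fit | H false) = 1 / 0.05 = 20. For a scientist who kept searching hypothesis space until something matched data he already had, P(fit | H false) is close to 1, so the same fit multiplies the odds by roughly 1; it is hardly evidence at all.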

Someone made a very good comment to the effect that Scientist 1 is probably making better use of prior information. It might be the case that that is another way of describing the effect I have described.

Comment by Richard_Hollerith on How Many LHC Failures Is Too Many? · 2008-09-22T04:49:00.000Z · LW · GW
in a previous [comment] in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.

Simon, I think that the previous comment you refer to was the smartest thing anyone has said in this comment section. Instead of continuing to point out the things you got right, I hope you do not mind if I point out something you got wrong, namely,

Richard: your first criticism has too low an effect on the probability to be significant. I was of course aware that humanity could be wiped out in other ways but incorrectly assumed that commenters here would be smart enough to understand that it was a justifiable simplification.

It is not a justifiable simplification. A satisfactory answer to the question you were trying to answer should remain satisfactory even if other existential risks (e.g., a giant comet) are high. If other existential risks were high, would you just throw up your hands and say that the question you were trying to answer is unanswerable?

Again, I think your contributions to this comment thread were better than anyone else's. I hope you continue to contribute here.

Comment by Richard_Hollerith on Contaminated by Optimism · 2008-08-07T22:48:00.000Z · LW · GW

An unusually moderate and temperate exchange.

Comment by Richard_Hollerith on Contaminated by Optimism · 2008-08-07T15:09:00.000Z · LW · GW

I disagree with the last 2 comments.

Eliezer's priority has gradually shifted over the last 5 years or so from increasing his own knowledge to transmitting what he knows to others, which is exactly the behavior I would expect from someone with his stated goals who knows what he is doing.

Yes, he has suggested or implied many times that he expects to implement the intelligence explosion more or less by himself (and I do not like that) but ever since the Summer of AI his actions (particularly all the effort he has put into blogging and his references to 15-to-18-year-olds, which suggest that he has thought about the most effective audience to target with his blogging) strongly indicate that he understands that the best way for him to assist the singularitarian project at this time is to transmit what he knows to others.

The blog is exactly the choice of means of transmission of scientific knowledge I would expect from someone who knows what he is doing. Surely we can look past the fact that some crusty academics look down on the blog.

I know of no one who has been more effective than Eliezer over the last 8 years or so at transmitting knowledge to people with a high aptitude for math and science.

And the suggestion that Eliezer lacks discipline strikes me as extremely unlikely. Just because a person is extremely intelligent does not mean that it is easy for the person to acquire knowledge at the rate Eliezer has acquired knowledge or to become so effective at transmitting knowledge.

Comment by Richard_Hollerith on [deleted post] 2008-07-12T05:12:00.000Z

I will probably have to stop reading this blog for a while because my life has gotten very tricky and precarious. I am still available for more personal communication with rationalists and scientific generalists especially those living in the Bay Area.

There have been 3 comments on this blog by men to the effect that sex is not that important or that the writer has given up on sex. Those comments suggest what I would consider a lack of sufficient respect for the importance of sex. I tend to believe that for a young man to learn how to have a satisfying and engaging sex life is about as important as obtaining an education or achieving economic security through working. In other words, it is primary.

If someone emails me that they want to read it, I might write more on this topic on my blog.

Comment by Richard_Hollerith on Is Morality Given? · 2008-07-07T19:34:00.000Z · LW · GW
It seems the ultimate confusion here is that we are talking about instrumental values . . . before agreeing on terminal values . . .

If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.

denis bider, under the CEV plan for singularity, no human has to give an unambiguous definition or enumeration of his or her terminal values before the launch of the seed of the superintelligence. Consequently, those who lean toward the CEV plan feel much freer to regard themselves as having hundreds of terminal values. Consequently, refraining from murder might easily be a terminal value for them.

Defn. "Murder" is killing under particular circumstances, e.g., not by uniformed soldiers during a war, not in self-defense, not by accident.

Comment by Richard_Hollerith on Is Morality Given? · 2008-07-07T19:31:00.000Z · LW · GW

My comment is not charitable enough towards the CEVists. I ask the moderator to delete it; I will now submit a replacement.

Comment by Richard_Hollerith on Is Morality Given? · 2008-07-07T18:20:00.000Z · LW · GW
It seems the ultimate confusion here is that we are talking about instrumental values . . . before agreeing on terminal values . . .

If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.

denis bider, I would not be surprised to learn that refraining from murder is a terminal value for Eliezer. Eliezer's writings imply that he has hundreds of terminal values: he cannot even enumerate them all.

Defn. "Murder" is killing under particular circumstances, e.g., not by uniformed soldiers during a war, not in self-defense, not by accident.

Comment by Richard_Hollerith on Possibility and Could-ness · 2008-06-20T08:41:00.000Z · LW · GW
Thesis: regarding some phenomenon as possible is nothing other than . . .

I consider that an accurate summary of Eliezer's original post (OP) to which these are comments.

Will you please navigate to this page and start reading where it says,

Imagine that in an era before recorded history or formal mathematics, I am a shepherd and I have trouble tracking my sheep.

You need read only to where it says, "Markos Sophisticus Maximus".

Those six paragraphs attempt to be a reductive exposition of the concept of whole number, a.k.a., non-negative integer. Please indicate whether you have the same objection to that exposition, namely, that the exposition treats of the number of pebbles in a bucket and therefore circularly depends on the concept of number (or whole number).

Comment by Richard_Hollerith on Possibility and Could-ness · 2008-06-20T05:39:00.000Z · LW · GW

Joseph Knecht says to Eliezer,

you dedicated an entire follow-up post to chiding Brandon, in part for using realizable in his explanation . . . [and] you committed the same mistake in using reachable.

Congratulations to Joseph Knecht for finding a flaw in Eliezer's exposition!

I would like his opinion about Eliezer's explanation of how to fix the exposition. I do not see a flaw in the exposition if it is fixed as Eliezer explains. Does he?

Comment by Richard_Hollerith on Living in Many Worlds · 2008-06-11T20:50:00.000Z · LW · GW

Is Death capitalized because it is being used in a technical sense?

Comment by Richard_Hollerith on Timeless Beauty · 2008-05-29T17:02:00.000Z · LW · GW

The question that interests me, Michael, is whether a human being's coming to believe that the future is already determined will make the human being less likely to write the blog post or to help build the spaceship that deflects the civilization-destroying asteroid.

Comment by Richard_Hollerith on That Alien Message · 2008-05-24T18:25:00.000Z · LW · GW

Richard, if you're seriously proposing that consciousness is a mistaken idea, but morality isn't, I can only say that that has got to be one unique theory of morality.

Yes, Z.M.D., I am seriously proposing. And I know my theory of morality is not unique to me because a man caused thousands of people to declare for a theory of morality that makes no reference to consciousness (or subjective experience, for that matter), and although most of those thousands might have switched by now to some other moral theory and although most of the declarations might have been insincere in the first place, a significant fraction have not and were not, if my correspondence with a couple of dozen of those thousands is any indication.

Maybe [Eliezer is] right, and superintelligence implies consciousness. I don't see why it would, but maybe it does. How would we know? I worry about how productive discussions about AI can be, if most of the participants are relying so heavily upon their intuitions, as we don't have any crushing experimental evidence.

It is not only that we don't have any experimental evidence, crushing or otherwise, but also that I have never seen anything resembling an embryo of a definition of consciousness (or personhood unless personhood is defined "arbitrarily", e.g., by equating it to being a human being) that would commit a user of the concept to any outcome in any experiment. I have never seen anything resembling an embryo of a definition even after reading Chalmers, Churchland, literally most of SL4 before 2004 (which goes on and on about consciousness) and almost everything Eliezer published (e.g., on SL4).

Comment by Richard_Hollerith on That Alien Message · 2008-05-23T20:00:00.000Z · LW · GW

RI asks,

how moral or otherwise desirable would the story have been if half a billion years' of sentient minds had been made to think, act and otherwise be in perfect accordance to what three days of awkward-tentacled, primitive rock fans would wish if they knew more, thought faster, were more the people they wished they were...

Eliezer answers,

A Friendly AI should not be a person. I would like to know at least enough about this "consciousness" business to ensure a Friendly AI doesn't have (think it has) it. An even worse critical failure is if the AI's models of people are people.

Suppose consciousness and personhood are mistaken concepts. Well, since personhood is an important concept in our legal systems, there is something in reality (namely, in the legal environment) that corresponds to the term "person", but suppose there is not any "objective" way to determine whether an intelligent agent is a person where "objective" means without someone creating a legal definition or taking a vote or something like that. And suppose consciousness is a mistaken concept like phlogiston, the aether and the immortal soul are mistaken concepts. Then would not CEV be morally unjustifiable because there is no way to justify the enslavement -- or "entrainment" if you want a less loaded term -- of the FAI to the (extrapolated) desires of the humans?

Comment by Richard_Hollerith on The "Intuitions" Behind "Utilitarianism" · 2008-01-31T19:36:00.000Z · LW · GW

I also see no explanation as to why knowledge of objective reality is of any value, even derivative; objective reality is there, and is what it is, regardless of whether it's known or not.

You and I can influence the future course of objective reality, or at least that is what I want you to assume. Why should you assume it, you ask? For the same reason you should assume that reality has a compact algorithmic description (an assumption we might call Occam's Razor): no one knows how to be rational without assuming it; in other words, it is an inductive bias necessary for effectiveness.

It is an open question which future courses are good and which are evil, but IMO neither the difficulty of the question nor the fact that no one so far has advanced a satisfactory answer for futures involving ultratechnologies and intelligence explosions -- neither of those two facts -- removes from you and me the obligation to search for an answer as best we can -- or to contribute in some way to the search. This contribution can take many forms. For example, many contribute by holding down a job in which they make lunches for other people to eat or a job in which they care for other people's elderly or disabled family members.

That last is the same as saying that you should seek power, but without saying what the power is for.

The power is for searching for a goal greater than ourselves and if the search succeeds, the power is for achieving the goal. The goal should follow from the fundamental principles of rationality and from correct knowledge of reality. I do not know what that goal is. I can only hope that someone will recognize the goal when they see it. I do not know what the goal is, but I can rule out paperclip maximization, and I am almost sure I can rule out saving each and every human life. That last goal is not IMO worthwhile enough for a power as large as the power that comes from an explosion of general intelligence. I believe that Eliezer should be free to apply his intelligence and his resources to a goal of his own choosing and that I have no valid moral claim on his resources, time or attention. My big worry is that even if my plans do not rely on his help or cooperation in any way, the intelligence explosion Eliezer plans to use to achieve his goal will prevent me from achieving my goal.

I like extended back-and-forth. Since extended back-and-forth is not common in blog comment sections, let me repeat my intention to continue to check back here. In fact, I will check back till further notice.

This comment section is now 74 hours old. Once a comment section has reached that age, I suggest that it is read mainly by people who have already read it and are checking back to look for replies to particular conversational threads.

I would ask the moderator to allow longer conversations and even longer individual comments once a comment section reaches a certain age.

Mitchell Porter, please consider the possibility that many if not most of the "preference-relevant human cognitive universals" you refer to are a hindrance rather than a help to agents who find themselves in an environment as different from the EEA as our environment is. It is my considered opinion that my main value to the universe derives from the ways my mind is different -- differences which I believe I acquired by undergoing experiences that would have been extremely rare in the EEA. (Actually, they would have been depressingly common: what would have been extremely rare is for an individual to survive them.) So, it does not exactly ease my fear that the really powerful optimizing process will cancel my efforts to affect the far future for you to reply that CEV will factor out the "contigent idiosyncracies . . . of particular human beings".

Comment by Richard_Hollerith on The "Intuitions" Behind "Utilitarianism" · 2008-01-31T08:18:00.000Z · LW · GW

Yes, mitchell porter, of course there is no method (so far) (that we know of) for moral perception or moral action that does not rely on the human mind. But that does not refute my point, which again is as follows: most of the readers of these words seem to believe that the maximization of happiness or pleasure and the minimization of pain is the ultimate good. Now when you combine that belief with egalitarianism, which can be described as the belief that you yourself have no special moral value relative to any other human, and neither do kings or movie stars or Harvard graduates, you get a value system that is often called utilitarianism. Utilitarianism and egalitarianism have become central features of our moral culture over the last 400 years, and have exerted many beneficial effects. To give one brief example, they have done much to eliminate the waste of human potential that came from having a small group and their descendants own everything. But the scientific and technological environment we now find ourselves in has become challenging enough that if we continue to use utilitarianism and egalitarianism to guide us, we will go badly astray. (I have believed this since 1992 when I read a very good book on the subject.) I consider utilitarianism particularly inadequate in planning for futures in which humans will no longer be the only ethical intelligences. I refer to those futures in which humans will share the planet and the solar system with AGIs.

You mentioned CEV, which is a complex topic, but I will briefly summarize my two main objections. The author of CEV says that one of his intentions is for everyone's opinion to have weight: he does not wish to disenfranchise anyone. Since most humans care mainly or only about happiness, I worry that this will lead to an intelligence explosion that is mostly or entirely about maximizing happiness, and that this will interfere with my plans, which are to exert a beneficial effect on reality that persists indefinitely but has little to do in the long term with whether the humans were happy or sad. Second, there is much ambiguity in CEV that has to be resolved in the process of putting it into a computer program. In other words, everything that goes into a computer program has to be specified very precisely. The person who currently has the most influence on how the ambiguities will be resolved has a complex and not-easily-summarized value system, but utilitarianism and "humanism", which for the sake of this comment will be defined as the idea that humankind is the measure of all things, obviously figure very prominently.

I will keep checking this thread for replies to my comment.

Comment by Richard_Hollerith on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T21:46:00.000Z · LW · GW

Doug, I do not agree because my utility function depends on the identity of the people involved, not simply on N. Specifically, it might be possible for an agent to become confident that Bob is much more useful to whatever is the real meaning of life than Charlie is, in which case a harm to Bob has greater disutility in my system than a harm to Charlie. In other words, I do not consider egalitarianism to be a moral principle that applies to every situation without exception. So, for me, U is not a function of (N,I,T).

Comment by Richard_Hollerith on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T17:58:00.000Z · LW · GW

Please let me interrupt this discussion on utilitarianism/humanism with an alternative perspective.

I do not claim to know what the meaning of life is, but I can rule certain answers out. For example, I am highly certain that it is not to maximize the number of paperclips in my vicinity.

I also believe it has nothing to do with how much pain or pleasure the humans experience -- or in fact anything to do with the humans.

More broadly, I believe that although perhaps intelligent or ethical agents are somehow integral to the meaning of life, they are integral for what they do, not because the success or failure of the universe hinges somehow on what the sentients experience or whether their preferences or desires are realized.

Humans, or rather human civilization, which is an amalgam of humans, knowledge and machines, is of course the most potent means anyone knows about for executing plans and achieving goals. Hobble the humans and you probably hobble whatever it is that really is the meaning of life.

But I firmly reject humanity as repository of ultimate moral value.

It looks to me like Eliezer plans to put humanism at the center of the intelligence explosion. I think that is a bad idea. I am horrified. I am appalled.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-25T21:00:00.000Z · LW · GW

Do you consider the following a fair rephrasing of your last comment? A quantum measurement has probability p of going one way and 1 - p of going the other way, where p depends on a choice made by the measurer. That is an odd property for the next bit in a message to have, and makes me suspicious of the whole idea.

If so, I agree. Another difficulty that must be overcome is, assuming one has obtained the first n bits of the message, to explain how one obtains the next bit.

Nevertheless, I believe my primary point remains: since our model of physics does not predict the evolution of reality exactly, the discovery of a previously overlooked means of receiving data need not violate our model of physics. The discovery that if you do X, you can read out the Old Testament in UTF-8, would constitute the addition of a new conjunct to our current model of physics, but not a falsification of the model. That last sentence is phrased in the language of traditional rationality, but my obligation in this argument is only to establish that looking for a new physical principle for receiving data is not a complete waste of resources, and I think the sentence achieves that much.

Also, I wish to return to a broader view to avoid the possibility of our getting lost in a detail. My purpose is to define a system of valuing things suitable for use as the goal system of a seed AI. This scenario in which physicists find themselves in communication with an ontologically privileged observer is merely one contingency that the AI should handle correctly (and a lot more fruitful to think about than simulation scenarios IMHO). It is also useful to consider special cases like this one to keep the conversation about the system of value from becoming too abstract.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-25T14:31:00.000Z · LW · GW

No blog yet, but I now have a wiki anyone can edit. Click on "Richard Hollerith" to go there.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-25T14:18:00.000Z · LW · GW

In cryptography, you try to hide the message from listeners (except your friends). In anticryptography, you try to write a message that a diligent and motivated listener can decode despite his having none of your biological, psychological and social reference points.

I certainly don't know how you are going to do it at the blackboard. Anything you write on the blackboard comes from you, not something outside space-time.

I meant that most of the difficulty of the project is in understanding our laws of physics well enough to invent a possible novel method for sending and receiving messages.

I don't see how you can have this other observer and at the same time have the scientist with control over the lab.

It is possible for the fundamental laws of physics as we know them to continue to apply without exception and for physicists to discover a novel method of sending or receiving messages because the fundamental laws are not completely deterministic. Specifically, when a measurement is performed on a quantum system, the result of the measurement is "random". If, as E. T. Jaynes taught, saying that something is random is a statement about our ignorance rather than a statement about reality, then it is not a violation of the fundamental laws to discover that the data we used to consider random in actuality has a signal or a message in it.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-24T22:47:00.000Z · LW · GW

Physicists have been proceeding like physicists for some time now and none of them has done anything like receiving the Old Testament from outside of our space-time.

As far as I know, none of them are looking for a message from beyond the space-time continuum. Maybe I will try to interest them in making the effort. My main interest however is a moral system that does not break down when thinking about seed AI and the singularity. Note that the search for a message from outside space-time takes place mainly at the blackboard and only at the very end moves to the laboratory for the actual construction of the experimental apparatus. Moreover, it is irrational to expect the message to arrive in any human tongue or in a human-originated encoding like ASCII or UTF-8. How absurd! The rational approach is an embryonic department of mathematics called anticryptography. Also, the SETI project probably knows an algorithm to detect a signal created by an intelligent agent about which we know nothing specific trying to communicate with another intelligent agent about which it knows nothing specific.

It also seems you are postulating an extra-agent (the Mugger), which limits the amount of control experimenters have and in turn makes the experiment unrepeatable.

I see your point. To explain the concept of the ontologically privileged observer, I borrowed Pascal's Mugger because my audience is already familiar with that scenario. I have another scenario in which a physicist finds himself in a dialog or monologue with an ontologically privileged observer in which physicists retain their accustomed level of control over their laboratories.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-24T21:53:00.000Z · LW · GW

This was Eliezer's point: how could you ever recognize which ones are good and which ones are evil? How could you even recognize a process for recognizing objective good and evil?

I have only one suggestion so far, which is that if you find yourself in a situation which satisfies all five of the conditions I just listed, obeying the Mugger initiates an indefinitely-long causal chain that is good rather than evil. I consider, "You might as well assume it is good," to be equivalent to, "It is good." Now that I have an example I can try to generalize it, which is best done after the scenario has been expressed mathematically. That is my plan of research. So for example I am going to characterize mathematically the notion of a possible world in which an agent can become confident of a "negative fact" about its environment. An example of a negative fact is, I will probably not be able to refine further my model of the Mugger using any evidence except what the Mugger tells me. Then I will try to determine whether our reality is an example of a possible world that allows agents to become confident of negative facts. I will try to devise a way to compute an answer to the question of how to trade off the two goals of obeying the Mugger and refining my model of reality.

A moral system must contain some postulates. I have retracted my claim that one can derive ought from is and apologize for advancing it. Above I give a list of four postulates I consider unobjectionable -- the list whose last item is Occam's razor. I do not claim that you and I will come to agree on the fundamental moral postulates if we knew more, thought faster, were more the people we wished we were, had grown up farther together. I do not claim that we have or can discover a procedure that allows two rational humans always to cooperate. I do not claim that this is the summer of love. I reserve the right to continue to advocate for my fundamental moral postulates even if it causes conflict.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-24T19:00:00.000Z · LW · GW

TGGP pointed out a mistake, which I acknowledged and tried to recover from by saying that what you learn about reality can create a behavioral obligation. g pointed out that you don't need to consider exotic things like godlike beings to discover that. If you're driving along a road, then whether you have an obligation to brake sharply depends on physical facts such as whether there's a person trying to cross the road immediately in front of you. So now I have to retreat again.

There are unstated premises that go into the braking-sharply conclusion. What is noteworthy about my argument is that none of its premises has any psychological or social content, yet the conclusion (obey the Mugger) seems to. The premises of my argument are the 4 normative postulates I just listed plus the conditions on when you should obey the Mugger. It is time to recap those conditions:

  • You find yourself in communication with an ontologically privileged observer.

  • After extensive investigation you have discovered no other way to cause effects that go on indefinitely.

  • You have no concrete hope of ever discovering a way.

  • Once the observer has demonstrated that he exists outside your spacetime, the only information you can obtain about him is what he tells you.

  • You have no concrete hope of ever discovering anything about the observer besides what he tells you.

Notice that there are no psychological or social concepts in those two lists! No mention for example of qualia or subjective mental experience. No appeal to the intrinsic moral value of every sentient observer, which creates the obligation to define sentience, which is distinct from, and I claim fuzzier than, the concept which I have been calling intelligence. Every concept in every premise comes from physics, cosmology, basic probability theory, information technology and well-understood parts of cognitive science and AI. The lack of psychosocial concepts in the premises makes my argument different from every moral argument I know about that contains what at first glance seems to be a psychological or social conclusion.

I think it's no more obvious that increasing the intelligence of whatever part of reality is under your control is good than that (say) preventing suffering is good

When applied to ordinary situations (situations that do not involve e.g. ultratechnology or the fate of the universe) those two imperatives lead to largely the same decisions because if you have only a little time to do an investigation, asking a person, Are you suffering? is the best way to determine if there is any preventable or reversible circumstance in his life impairing his intelligence, which I remind the reader I am defining as the ability to achieve goals. Suffering, though, is a psychological concept, and I recommend that ultratechnologists and others concerned with the ultimate fate of the universe keep their fundamental moral premises free from psychological or social concepts.

All even-remotely-credible claims to have encountered godlike beings with moral advice to offer have been (1) from people who weren't proceeding at all like physicists and (2) very unimpressive evidentially.

Their claims have been very unimpressive because they weren't proceeding like physicists. Impressive evidence would be an experiment repeatable by anyone with a physics lab that receives the Old Testament in Hebrew (encoded as UTF-8) from a compartment of reality beyond our spacetime. For the evidence to have moral authority, there would have to be a very strong reason to believe that the message was not sent from a transmitter in our spacetime. (The special theory of relativity seems to be able to provide the strong reason.)

since we don't even know whether there are any effects that go on for ever it seems rather premature to declare that only such effects matter.

An understandable reaction. You might never discover a way to cause an effect that goes on forever even if you live a billion years and devote most of your resources to the search. I sympathize!

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-24T09:02:00.000Z · LW · GW

The blog "item" to which this is a comment started 5 days ago. I am curious whether any besides TGGP and I are still reading. One thing newsgroups and mailing lists do better than blogs is to enable conversational threads to persist for more than a few days. Dear reader, just this once, as a favor to me, please comment here (if only with a blank comment) to signal your presence. If no one signals, I'm not continuing.

Why is a "civilization" the unit of analysis rather than a single agent?
Since you put the word in quotes, I take it you hold something akin to the views of Margaret Thatcher who famously said that there is no society, just individuals and families. You should have been exposed to the mainstream view often enough to notice that my statement can be translated to an equivalent statement expressed in terms of individuals. If we introduce too many deviations from consensus reality at once, we are going to lose our entire audience. Please continue as if I had not used the word and had said instead that if there exists individuals who have a successful answer to the tricky question then they are not promoting the answer to the singularitarian community broadly understood or I would have become aware of them already.

Yes, I take as postulates

  • the desirability of increasing the intelligence of whatever part of reality is under your control,
  • the desirability of continuously refining your model of reality,
  • that the only important effects are those that go on forever,
  • for that matter, that the probability of a model of reality is proportional to 2^-K where K is the complexity of the model in bits (Occam's razor; the form I have in mind is sketched just below this list).
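
To make the last postulate concrete, the form I have in mind is just the usual minimum-description-length (Solomonoff-style) prior, nothing beyond what "Occam's razor" already names:

P(M) is proportional to 2^-K(M), where K(M) is the length in bits of the shortest description of model M.

So each extra bit of description costs a factor of 2: a model 10 bits more complex than a rival starts out 2^10 = 1024 times less probable, and the evidence must favor it by at least that factor before it should be preferred.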

What I meant by deriving ought from is is that what you learn about reality can create a behavioral obligation, e.g., in certain specific circumstances it creates an obligation to obey an ontologically privileged observer. This is not usually acknowledged in expositions about morality and the intrinsic good -- at least not to the extent I acknowledge it here. But yeah, you have a point that without the three oughts I listed above, I could not derive the ought of obeying the Mugger, so instead of my saying that you can derive ought from is, I should in the future say that it is not commonly understood by moral philosophers how much the moral obligations on an agent depend on the physical structure of the reality in which the agent finds himself. Note that he cannot do anything about that physical structure and consequently about the existence of the moral obligation (assuming the postulates above).

Comment by Richard_Hollerith on Expecting Short Inferential Distances · 2007-10-24T08:44:23.000Z · LW · GW

When I write for a very bright "puzzle-solving-type" audience, I do the mental equivalent of deleting every fourth sentence or at least the tail part of every fourth sentence to prevent the reader from getting bored. I believe that practice helps my writings compete with the writings around them for the critical resource of attention. There are of course many ways of competing for attention, and this is one of the least prejudicial to rational thought. I recommend this practice only in forums in which the reader can easily ask followup questions. Nothing about this practice is incompatible with the practices Eliezer is advocating. This week I am experimenting with adding three dots to the end of a sentence to signal to the reader the need mentally to complete the sentence.

So, what sentence did I delete from the above? A sentence to the effect that I only do this for writing that resembles mathematical proof fairly closely: "Suppose A. Because B, C. Therefore D, from which follows E, which is a contradiction, so our original assumption A must be false."

After writing a first draft, I go back and add a lot more words than I had saved with the "do not bore the reader" practice. E.g. I add sentences explicitly to contradict interpretations that would lead to my being dismissed as hopelessly socially inept, eccentric or evil. Of course because I advocate outlandish positions here, I still get dismissed a lot.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-23T06:55:00.000Z · LW · GW

I suppose to a Pete Singer utilitarian it might be correct that we assign equal weight of importance to everyone in and beyond our [spacetime].

In the scenario with all the properties I list above, I assign most of the intrinsic good to obeying the Mugger. Some intrinsic good is assigned to continuing to refine our civilization's model of reality, but the more investment in that project fails to yield the ability to cause effects that persist indefinitely without the Mugger's help, the more intrinsic good gets heaped on obeying the Mugger. Nothing else gets any intrinsic good, including every human and in fact every intelligent agent in our spacetime. Agents in our spacetime must make do with whatever instrumental good derives from the two intrinsic goods. So for example if Robin is expected to be thrice as useful to those two goods as Eliezer is, then he gets thrice as much instrumental good. Not exactly Pete Singer! No one can accuse me of remaining vague on my goals to avoid offending people! I might revise this paragraph after learning more decision theory, Solomonoff induction, etc.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-23T05:40:00.000Z · LW · GW

The ought is, You ought to do whatever the very credible Mugger tells you to do if you find yourself in a situation with all the properties I list above. Blind obedience does not have a very good reputation; please remember, reader, that the fact that the Nazis enthusiastically advocated and built an interstate highway system does not mean that an interstate highway system is always a bad idea. Every ethical intelligent agent should do his best to increase his intelligence and his knowledge of reality and to help other ethical intelligent agents do the same. That entails consistently resisting tyranny and exploitation. But intelligence can be defined as the ability to predict and control reality or, to put it another way, to achieve goals. So, if your only goal is to increase intelligence, you set up a recursion that has to bottom out somehow. You cannot increase intelligence indefinitely without eventually confronting the question of what other goals the intelligence you have helped to create will be applied to. That is a tricky question that our civilization does not have much success answering, and I am trying to do better.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-23T03:17:00.000Z · LW · GW

For the sake of brevity, I borrow from Pascal's Mugger.

If a Mugger appears in every respect to be an ordinary human, let us call him a "very unconvincing Mugger". In contrast, an example of a very convincing Pascal's Mugger is one who demonstrates an ability to modify fundamental reality: he can violate physical laws that have always been (up to now) stable, global, and exception-free. And he can do so in exactly the way you specify.

For example, you say, "Please Mr Mugger follow me into my physics laboratory." There you repeat the Millikan oil-drop experiment and demand of the mugger that he increase the electrical charge on the electrons in the apparatus by an amount you specify (stressing that he should leave all other electrons alone).

Then you set up an experiment to measure the gravitational constant G and demand that he increase or decrease G by a factor you specify (again stressing that he should leave G alone outside the experimental apparatus).

You ask him to violate the conservation of momentum in a system you specify by a magnitude and direction you specify.

I find it humorous to use the phrase "signs and wonders" for such violations of physical laws. You demand and verify other signs and wonders.

The Mugger's claim that your universe -- your "spacetime" -- is an elaborate simulation and that he exists outside the simulation is now very convincing.

My reason for introducing the very convincing Mugger is that I believe that under certain conditions, unless and until you acquire a means of modelling the part of reality outside the simulation that does not rely on communicating with the Mugger, the Mugger has Real Moral Authority over you: it is not too much of an exaggeration to say you should regard every communication from the Mugger as the Voice of God.

The Mugger's authority does not derive from the fact that he can at any time crush you like a bug. Many ordinary humans have had that kind of power over other humans. His authority stems from the fact that he is in a better position than you or anyone else you know to tell you how your actions might have a permanent effect on reality. But we are getting ahead of ourselves.

Probably the only conditions required on that last proposition are that our spacetime -- which is the only "compartment" of reality we know about so far -- will end after a finite amount of time and that we become confident of that fact. In cosmology these days this is usually modelled as the Big Rip.

I believe the utility of directing one's efforts at a compartment of reality that might go on forever completely trumps the utility of directing efforts at a compartment of reality that will surely end even if the end is 100,000,000,000 years away, and this remains true regardless of the ratio of the probabilities that one's efforts will prove effective in those two compartments.
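
The arithmetic behind that claim, as I see it (a toy comparison, and it does lean on the assumption that comparing unbounded value against finite value is coherent): let effort aimed at the mortal compartment succeed with probability q and yield a finite value V, however huge, and let effort aimed at a compartment that might go on forever succeed with probability p and yield a value U that grows without bound. The expected values are q * V, which is finite, and p * U, which is unbounded for any p > 0, so p * U eventually exceeds q * V no matter how small the ratio p/q is.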

If scientists determine that the universe is going to end in 12 months or 10 years or 100 years and if during the time remaining to us society and the internet continue to operate normally, I tend to suspect that I could convince many people that the only hope we have for our lives and our efforts to have any ultimate or lasting relevance is for us to contribute to the discovery and investigation of a compartment of reality outside our spacetime because it is an intrinsic property of spacetime -- by which I mean the thing modelled by Einstein's equation -- that a spacetime that ends after a finite amount of time cannot support or host a causal chain that goes on indefinitely, and as we shall see, such chains are central to the search for intrinsic value.

Of course we have no evidence for what exists beyond our spacetime, and no concrete reason to believe we ever will find any evidence, but we have no choice but to conduct the search.

And that puts us in the proper frame for us to meet the very convincing Mugger: "Delighted to meet you, Mr Mugger. Please tell me and my civilization how to make our existence and our efforts meaningful."

The very convincing Mugger is "ontologically privileged": he has a causal model of the part of reality outside or beyond our spacetime. More precisely, the signs and wonders he performed on demand lead us to believe that it is much more probable that he can acquire such a model than that we can do so without his help.

Now we come to the heart of how I propose to derive a normative standard from positive facts: I propose that causal chains that go on forever or indefinitely are important; causal chains that peter out are unimportant. In fact, the most important thing about you is your ability to have a permanent effect on reality. Instead of worrying that the enemy will sap your Precious Bodily Fluids, you should worry that he will sap your Precious Ability to Initiate Indefinitely-Long Causal Chains.

The ontologically privileged observer has not proven to us that he has enough knowledge to tell us how to create causal chains that go on indefinitely. But unless we discover new fundamental physics, communicating with the privileged observer is the most likely means of our acquiring such knowledge. For us to communicate to the Mugger is a link in a causal chain that might go on indefinitely if the Mugger can cause effects that go on indefinitely. In the absence of other concrete hopes to permanently affect reality, helping the Mugger strikes me as the most likely way for my life and efforts to have True Lasting Meaning.

Now some readers are asking, But what do we do if we never stumble on a way to communicate with an ontologically privileged observer? My answer is that my purpose here is not to cover all contingencies but rather to exhibit a single contingency in which I believe it is possible to deduce ought from is.

Saying that only indefinitely-long causal chains are important does not tell us which indefinitely-long causal chains are good and which ones are evil. But consider my contingency again: you find yourself in communication with an ontologically privileged observer. After extensive investigation you have discovered no other way to cause effects that go on indefinitely and have no concrete hope of ever discovering a way. Once he has demonstrated that he exists outside your spacetime, the only information you can obtain about him is what he tells you. Sure, the privileged observer might be evil. But if you really have no way to learn about him and no way to cause effects that go on indefinitely except through communication with him, perhaps you should trust him. After contemplating for ~7 years, I think so.

I know I risk sounding arrogant or careless, but I must say I do not consider the possibility that our spacetime is an elaborate simulation important to think about. I use it here only to take advantage of the fact that the audience is already familiar with it and with the Mugger. There is another possibility I do consider important to think about that also features a communications link with an ontologically privileged observer. I would have used that possibility if it would not have made the comment longer.

In summary, I believe we can derive ought from is in the following situation: our reality "contains a horizon" the other side of which we are very unlikely to be able to model. The physical structure of the horizon allows us to become highly confident of this negative fact. But we have stumbled on a means to communicate with a mind beyond the horizon, who I have been calling the ontologically privileged observer. Finally, our spacetime will come to an end, and reality allows us to become highly confident of that fact.

Although a causal chain can cross a communications link, you cannot use the link to construct a causal model of the reality on the other side of the link. Perhaps your interlocutor will describe the other side to you, but you cannot use the link to verify he is telling the truth unless you already have a causal model of the other side (e.g. you know there is a trusted computer on the other side attached to trusted sensory peripherals and you know the "secrets" of the trusted computer and trusted sensors, which is quite a lot to know).

And there is my very compressed reply to "You have been rather vague by saying that just as we discovered many positive facts with science, so we can discover normative ones, even if we have not been able to do so before. You haven't really given any indication as to how anyone could possibly do that."

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-22T00:41:00.000Z · LW · GW

Er, they needn't remain constantly aware. They need only take it into account in all their public statements.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-22T00:23:00.000Z · LW · GW

Certainly ethical naturalism has encouraged many oppressions and cruelties. Ethical naturalists must remain constantly aware of that potential.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-21T21:53:00.000Z · LW · GW

Thanks for the nice questions.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-20T23:59:00.000Z · LW · GW

TGGP, I maintain that the goals that people now advocate as the goal that trumps all other goals are not deserving of our loyalty and a search must be conducted for a goal that is so deserving. (The search should use essentially the same intellectual skills that physicists use.) The identification of that goal can have a very drastic effect on the universe, e.g., by inspiring a group of bright 20-year-olds to implement a seed AI with that goal as its utility function. But that does not answer your question, does it?

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-20T23:44:00.000Z · LW · GW

Yes, TGGP, I've reread my comment and cannot see where I . . .

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-20T05:54:26.000Z · LW · GW

Eliezer clarified earlier that this blog entry is about personal utility rather than global utility. That presents me with another opportunity to represent a distinctly minority (many would say extreme) point of view, namely, that personal utility (mine or anyone else's) is completely trumped by global utility. This admittedly extreme view is what I have sincerely believed for about 15 years, and I know someone who held it for 30 years without his becoming an axe murderer or anything horrid like that. To say it in other words, I regard humans as means to nonhuman ends. Of course this is an extremely dangerous belief, which probably should not be advocated except when it is needed to avoid completely mistaken conclusions, and it is needed when thinking about simulation arguments, ultratechnologies, the eventual fate of the universe and similarly outre scenarios. If the idea took hold in ordinary legal or political deliberations, unnecessary suffering would result, so let us be discreet about to whom we advocate it.

Specifically, I wish to reply to "Take away the individuals and there is no civilization," which is a reply to my "I believe it is an error to regard civilization as the servant of the individual. Ultimately, it is the other way around." Allow me to rephrase more precisely: ultimately, the individual is the servant of the universe. I used civilization as a quick proxy for the universe because the primary way the individual contributes to the universe is by contributing to civilization.

The study of ultimate reality is of course called physics (and cosmology). There is an unexplored second half to physics. The first half of physics, the part we know, asks how reality can be bent towards goals humans already have. The second half of physics begins with the recognition that the goals humans currently have are vanities and asks what the deep investigation of reality can tell us about what goals humans ought to have. This "obligation physics" is the proper way to ground the civilization-individual recursion. Humanism, liberalism, progressivism and transhumanism ground the recursion in the individual, which might be the mistake, among those made by most contemporary educated humans, that could benefit the most from correction. The mistake is certainly very firmly entrenched in world culture. Perhaps the best way to see the mistake is to realize that subjective experience is irrelevant except as a proxy for the relevant things. What matters is objective reality.

Contrary to what almost every thoughtful person believes, it is possible to derive ought from is: the fact that no published author has done so correctly so far does not mean it cannot be done or that it is beyond the intellectual reach of contemporary humans. In summary my thesis is that the physical structure of reality determines the moral structure of reality.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-19T22:38:52.000Z · LW · GW

RI, in this comment section, you can probably safely replace "utility function" with "goal" and drop the word "expected" altogether.

Comment by Richard_Hollerith on Congratulations to Paris Hilton · 2007-10-19T22:13:21.000Z · LW · GW

TEXTAREAs in Firefox 1.5 have a disease in which a person must exercise constant vigilance to prevent stray newlines. Hence the ugly formatting.