Comments

Comment by Tim_Tyler on The Allais Paradox · 2009-05-04T11:18:00.000Z · LW · GW

Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational.

Comment by Tim_Tyler on Less Wrong: Progress Report · 2009-04-25T13:47:09.000Z · LW · GW

Looking at:

http://google.com/search?q=Marshall+site:lesswrong.com

...there were about 500 comments involving "Marshall" - and now they all appear to have been deleted - leaving a trail like this:

http://lesswrong.com/lw/9/the_most_important_thing_you_learned/53

Did you delete your account there?

Comment by Tim_Tyler on Less Wrong: Progress Report · 2009-04-25T07:03:54.000Z · LW · GW

I don't pay much attention to karma - but it is weird what gets voted up and down.

For a rationalist community, people seem to go for conformity and "applause signs" much more than I would have expected - while criticisms and disagreements seem to be punished more than I would have thought.

Anyway, interesting raw material for groupthink studies - some day.

Comment by Tim_Tyler on Newcomb's Problem and Regret of Rationality · 2009-03-02T19:58:00.000Z · LW · GW

Re: First, foremost, fundamentally, above all else: Rational agents should WIN.

When Deep Blue beat Garry Kasparov, did that prove that Garry Kasparov was "irrational"?

It seems as though it would be unreasonable to expect even highly rational agents to win - if pitted against superior competition. Rational agents can lose in other ways as well - e.g. by not having access to useful information.

Since there are plenty of ways in which rational agents can lose, "winning" seems unlikely to be part of a reasonable definition of rationality.

Comment by Tim_Tyler on On Not Having an Advance Abyssal Plan · 2009-02-25T11:19:44.000Z · LW · GW
But what good reason is there not to? How can you be worse off from knowing in advance what you'll do in the worse cases?

The answer seems trivial: you may have wasted a bunch of time and energy performing calculations relating to what to do in a hypothetical situation that you might never face.

If the calculations can be performed later, then that will often be better - since then more information will be available - and possibly the calculations may not have to be performed at all.

Calculating in advance can be good - if you fear that you may not have time to calculate later - or (obviously) if the calculations affect the choices to be taken now. However, the act of performing calculations has associated time and energy costs - so it is best to use your "calculating" time wisely.

Comment by Tim_Tyler on On Not Having an Advance Abyssal Plan · 2009-02-24T22:16:59.000Z · LW · GW
For the same reason that when you're buying a stock you think will go up, you decide how far it has to decline before it means you were wrong

Do any investors actually do that? I don't mean to be rude - but why haven't they got better things to do with their time?

Comment by Tim_Tyler on Good Idealistic Books are Rare · 2009-02-24T19:34:26.000Z · LW · GW

I didn't find "Engines" very positive. I agree with Moravec:

"I found the speculations absurdly anthropocentric. Here we have machines millions of times more intelligent, plentiful, fecund, and industrious than ourselves, evolving and planning circles around us. And every single one exists only to support us in luxury in our ponderous, glacial, antique bodies and dim witted minds. There is no hint in Drexler's discussion of the potential lost by keeping our creations so totally enslaved."

IMO, Drexler's proposed future is an unlikely nightmare world.

Comment by Tim_Tyler on Cynical About Cynicism · 2009-02-18T13:43:48.000Z · LW · GW

Anon, you are arguing for "incorrect", not "cynical". Please consider the difference.

Like it or not, biologists are basically correct in identifying the primary goal of organisms as self-reproduction. That is the nature of the attractor to which all organisms' goal systems are drawn (though see also this essay of mine). Yes, some organisms break, and other organisms find themselves in unfamiliar environments - but if anything can be said to be the goal of organisms, then that is it. The exceptions (like your contraceptives) just prove the rule. Such organisms are acting in a way that is intended to promote their genetic fitness. It is just that some of their assumptions about the environment might be wrong. Alas, contraceptives are not a very good example, because they prevent disease, make sex easier (thus helping to create pair bonds), and have other positive effects.

Organisms tend to act as though their number one motive is self-reproduction. Philosophers may be able to debate whether that motive is "explicitly represented in their brains" - but if it looks like a duck and quacks like a duck, whether philosophers are prepared to call it a duck seems like a side issue.

It is the same as with Deep Blue. Deep Blue acts as though its number one motive is to win games of chess (thus inflating IBM's stock price). That is the single most helpful simple way in which to understand its behaviour. If you actually look at its utility function, it has thousands of elements, not one of which refers to winning games of chess - but so what? It is not "cynical" to treat Deep Blue as trying to win games of chess. That is what it is doing!

Comment by Tim_Tyler on Cynical About Cynicism · 2009-02-17T13:57:28.000Z · LW · GW
Consider the hash that some people make of evolutionary psychology in trying to be cynical - assuming that humans have a subconscious motive to promote their inclusive genetic fitness.

What is "cynical" about that? It is a central organising principle in biology that organisms tend to act in such a way to promote their own inclusive genetic fitness. There are a few caveats - but why would viewing people like that be "cynical"? I do not see anything wrong with promoting your own genetic fitness - rather it seems like a perfectly natural thing to do to me.

Looking at the population explosion, I would say that the world appears to be full of people who are acting in a manner that is highly effective at promoting their own genetic fitness. Are they doing something wrong? What makes you think that?

Comment by Tim_Tyler on An Especially Elegant Evpsych Experiment · 2009-02-15T11:52:18.000Z · LW · GW

Re: The parental grief is not even subconsciously about reproductive value - otherwise it would update for Canadian reproductive value instead of !Kung reproductive value.

I think that a better way to put this would be to say that the Canadian humans miscalculate reproductive value - using subconscious math more appropriate for bushmen.

If you want to look at the importance to humans of the reproductive value represented by children, the most obvious studies to look at are the ones that deal with adopted kids - comparing them with more typical ones. For example, look at the statistics on how often such kids get beaten, suffer from child abuse, die or commit suicide.

Comment by Tim_Tyler on An Especially Elegant Evpsych Experiment · 2009-02-14T11:19:05.000Z · LW · GW

Re: Parents do not care about children for the sake of their reproductive contribution. Parents care about children for their own sake [...]

Except where paternity suits are involved, presumably.

Comment by Tim_Tyler on The Evolutionary-Cognitive Boundary · 2009-02-13T12:06:45.000Z · LW · GW

[Tim, you post this comment every time I talk about evolutionary psychology, and it's the same comment every time, and it doesn't add anything new on each new occasion. If these were standard theories I could forgive it, but not considering that they're your own personal versions. I've already asked you to stop. --EY]

Comment by Tim_Tyler on Cynicism in Ev-Psych (and Econ?) · 2009-02-11T17:18:52.000Z · LW · GW
Evolutionary psychologists are absolutely and uniformly cynical about the real reason why humans are universally wired with a chunk of complex purposeful functional circuitry X (e.g. an emotion) - we have X because it increased inclusive genetic fitness in the ancestral environment, full stop.

One big problem is that they tend to systematically ignore memes.

Human brains are parasitised by replicators that hijack them for their own ends. The behaviour of a Catholic priest has relatively little to do with the inclusive genetic fitness of the priest - and a lot to do with the inclusive genetic fitness of the Catholicism meme. Pinker and many of the other evo-psych guys still show little sign of "getting" this.

Comment by Tim_Tyler on The Thing That I Protect · 2009-02-08T14:50:20.000Z · LW · GW

Wasn't there some material in CFAI about solving the wirehead problem?

Comment by Tim_Tyler on Value is Fragile · 2009-02-03T18:52:00.000Z · LW · GW

The analogy between the theory that humans behave like expected utility maximisers and the theory that atoms behave like billiard balls could be criticised - but it generally seems quite appropriate to me.

Comment by Tim_Tyler on Value is Fragile · 2009-02-01T12:27:28.000Z · LW · GW

In dealing with your example, I didn't "change the space of states or choices". All I did was specify a utility function. The input states and output states were exactly as you specified them to be. The agent could see what choices were available, and then it picked one of them - according to the maximum value of the utility function I specified.

The corresponding real world example is an agent that prefers Boston to Atlanta, Chicago to Boston, and Atlanta to Chicago. I simply showed how a utility maximiser could represent such preferences. Such an agent would drive in circles - but that is not necessarily irrational behaviour.

Of course much of the value of expected utility theory arises when you use short and simple utility functions - however, if you are prepared to use more complex utility functions, there really are very few limits on what behaviours can be represented.

The possibility of using complex utility functions does not in any way negate the value of the theory for providing a model of rational economic behaviour. In economics, the utility function is pretty fixed: maximise profit, with specified risk aversion and future discounting. That specifies an ideal which real economic agents approximate. Plugging in an arbitrary utility function is simply an illegal operation in that context.
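
To illustrate the kind of fixed utility function meant here, the following is a minimal sketch of my own (the function name, parameters and the mean-variance risk penalty are illustrative assumptions, not anything specified in the comment): exponentially discounted expected profit with a variance penalty for risk.

    # A sketch of "maximise profit, with risk aversion and future discounting":
    # discounted expected profit, penalised by variance (illustrative only).
    def profit_utility(profit_streams, discount=0.95, risk_aversion=0.5):
        # profit_streams: one list of equally likely profit outcomes per period.
        total = 0.0
        for t, outcomes in enumerate(profit_streams):
            mean = sum(outcomes) / len(outcomes)
            variance = sum((x - mean) ** 2 for x in outcomes) / len(outcomes)
            total += (discount ** t) * (mean - risk_aversion * variance)
        return total

    # A safe plan beats a risky plan with the same expected profit:
    print(profit_utility([[10, 10], [10, 10]]) > profit_utility([[0, 20], [0, 20]]))  # True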

Comment by Tim_Tyler on Value is Fragile · 2009-02-01T00:28:37.000Z · LW · GW
The core problem is simple. The targeting information disappears, so does the good outcome. Knowing enough to refute every fallacious remanufacturing of the value-information from nowhere, is the hard part.

The utility function of Deep Blue has 8,000 parts - and contains a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the information in the original utility function are not recovered - but the eventual functional outcome would be much the same - a powerful chess computer.

The "targeting information" is actually a bunch of implementation details that can be effectively recreated from the goal - if that should prove to be necessary.

It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue's utility function while improving it would actually have a crippling negative effect on its future development. Similarly with human values: those are a bunch of implementation details - not the real target.

Comment by Tim_Tyler on War and/or Peace (2/8) · 2009-01-31T18:01:29.000Z · LW · GW

I note that filial cannibalism is quite common on this planet.

Gamete selection has quite a few problems. It only operates on half the genome at a time - and selection is performed before many of the genes can be expressed. Of course gamete selection is cheap.

What spiders do - i.e. produce lots of offspring, and have many die as infants - has a huge number of evolutionary benefits. The lost babies do not cost very much, and the value of the selection that acts on them is great.

Human beings can't easily get there - since they currently rely on gestation inside a human female body for nine months, but - make no mistake - if we could produce lots of young, and kill most of them at a young age, then that would be a vastly superior system in terms of the quantity and quality of the resulting selection.

Human females do abort quite a few foetuses after a month or so - ones that fail internal and maternal integrity tests - but the whole system is obviously appallingly inefficient.

Comment by Tim_Tyler on Value is Fragile · 2009-01-31T16:22:31.000Z · LW · GW
I think Eliezer is due for congratulation here. This series is nothing short of a mammoth intellectual achievement [...]

It seems like an odd place for congratulations - since the conclusion here seems to be about 180 degrees out of whack - and hardly anyone seems to agree with it. I asked how one of the ideas here was remotely defensible. So far, there have been no takers.

If there is not even a debate, whoever is incorrect on this topic would seem to be in danger of failing to update. Of course personally, I think it is Eliezer who needs to update. I have quite a bit in common with Eliezer - and I'd like to be on the same page as him - but it is difficult to do when he insists on defending positions that I regard as poorly-conceived.

Comment by Tim_Tyler on Value is Fragile · 2009-01-31T10:34:59.000Z · LW · GW
Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.

That is silly - the associated utility function is the one you have just explicitly given. To rephrase:

if (senses contain (A,B)) selecting A has high utility;
else if (senses contain (B,C)) selecting B has high utility;
else if (senses contain (C,A)) selecting C has high utility;

Here's another example: When given (A,B) a program outputs "indifferent". When given (equal chance of A or B, A, B) it outputs "equal chance of A or B". This is also not allowed by EU maximization.

Again, you have just given the utility function by describing it. As for "indifference" being a problem for a maximisation algorithm - it really isn't in the context of decision theory. An agent either takes some positive action, or it doesn't. Indifference is usually modelled as laziness - i.e. a preference for taking the path of least action.
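
To make the rephrasing concrete, here is a minimal sketch in Python (my own illustration - the names utility and choose are hypothetical, not from the original exchange) of a utility function that conditions on the offered choice set, under which a straightforward maximiser reproduces exactly the behaviour described:

    # Utility depends on the offered choice set, not just the option chosen.
    def utility(choice_set, option):
        preferred = {
            frozenset("AB"): "A",
            frozenset("BC"): "B",
            frozenset("CA"): "C",
        }
        return 1.0 if preferred[frozenset(choice_set)] == option else 0.0

    def choose(choice_set):
        # Standard maximisation: pick the option with the highest utility.
        return max(choice_set, key=lambda option: utility(choice_set, option))

    print(choose("AB"), choose("BC"), choose("CA"))  # -> A B C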

Comment by Tim_Tyler on Value is Fragile · 2009-01-30T22:35:06.000Z · LW · GW
But there is no principled way to derive an utility function from something that is not an expected utility maximizer!

You can model any agent as an expected utility maximizer - with a few caveats about things such as uncomputability and infinitely complex functions.

You really can reverse-engineer their utility functions too - by considering them as Input-Transform-Output black boxes - and asking what expected utility maximizer would produce the observed transformation.

A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.
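
As a toy illustration of that reverse-engineering step (my own sketch - the name reverse_engineer and the example agent are hypothetical, and the caveats above about uncomputability and infinitely complex functions still apply), one can record an agent's observed input-to-output transformation and build a lookup-table utility function under which a maximiser reproduces it:

    # Treat the agent as an input -> output black box, observe it, and
    # construct a utility function that rewards the observed outputs.
    def reverse_engineer(agent, inputs):
        observed = {i: agent(i) for i in inputs}  # record the transformation
        return lambda i, output: 1.0 if observed[i] == output else 0.0

    # Example black-box agent: always picks the alphabetically last option.
    agent = lambda options: max(options)
    u = reverse_engineer(agent, [("A", "B"), ("B", "C")])

    # A maximiser using the recovered utility function makes the same choice.
    print(max(("A", "B"), key=lambda o: u(("A", "B"), o)))  # -> B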

Comment by Tim_Tyler on Value is Fragile · 2009-01-30T22:22:13.000Z · LW · GW
Another way of saying this is that human beings are not expected utility maximizers, not as individuals and certainly not as societies.

They are not perfect expected utility maximizers. However, no expected utility maximizer is perfect. Humans approach the ideal at least as well as other organisms. Fitness maximization is the central explanatory principle in biology - and the underlying idea is the same. The economic framework associated with utilitarianism is general, of broad applicability, and deserves considerable respect.

Comment by Tim_Tyler on Value is Fragile · 2009-01-30T18:50:23.000Z · LW · GW
I agree with Eliezer that an imprecisely chosen value function, if relentlessly optimized, is likely to yield a dull universe.

So: you think a "paperclip maximiser" would be "dull"?

How is that remotely defensible? Do you think a "paperclip maximiser" will master molecular nanotechnology, artificial intelligence, space travel, fusion, the art of dismantling planets and stellar farming?

If so, how could that possibly be "dull"? If not, what reason do you have for thinking that those technologies would not help with the making of paper clips?

Apparently-simple processes can easily produce great complexity. That's one of the lessons of Conway's Game of Life.

Comment by Tim_Tyler on Value is Fragile · 2009-01-29T22:55:56.000Z · LW · GW

Thanks for the probability assessments. What is missing are supporting arguments. What you think is relatively clear - but why you think it is not.

...and what's the deal with mentioning a "sense of humour"? What has that to do with whether a civilization is complex and interesting? Whether our distant descendants value a sense of humour or not seems like an irrelevance to me. I am more concerned with whether they "make it" or not - factors affecting whether our descendants outlast the exploding sun - or whether the seed of human civilisation is obliterated forever.

Comment by Tim_Tyler on Value is Fragile · 2009-01-29T18:40:59.000Z · LW · GW

This post seems almost totally wrong to me. For one thing, its central claim - that without human values the future would, with high probability, be dull - is not even properly defined.

To be a little clearer, one would need to say something like: if you consider a specified enumeration over the space of possible utility functions, a random small sample from that space would be "dull" (it might help to say a bit more about what dullness means too, but that is a side issue for now).

That claim might well be true for typical "shortest-first" enumerations in sensible languages - but it is not a very interesting claim - since the dull utility functions would be those which led to an attainable goal - such as "count up to 10 and then stop".

The "open-ended" utilility functions - the ones that resulted in systems that would spread out - would almost inevitably lead to rich complexity. You can't turn the galaxy into paper-clips (or whatever) without extensively mastering science, technology, intergalactic flight, nanotechnology - and so on. So, you need scientists and engineers - and other complicated and interesting things. This conclusion seems so obvious as to hardly be worth discussing to me.

I've explained all this to Eliezer before. After reading this post I still have very little idea about what it is that he isn't getting. He seems to think that making paper clips is boring. However, it is no more boring than making DNA sequences - and that's the current aim of most living systems.

A prime-seeking civilisation has a competitive disadvantage relative to one that doesn't have silly, arbitrary bits tacked on to its utility function. It is more likely to be wiped out in a battle with an alien race - and it's more likely to suffer a mutiny from within. However, that is about all. Such civilisations are unlikely to lack science, technology, or other interesting stuff.

Comment by Tim_Tyler on Sympathetic Minds · 2009-01-20T21:37:08.000Z · LW · GW

The core of most of my disagreements with this article finds its most concentrated expression in:

"Happiness" is an idiom of policy reinforcement learning, not expected utility maximization.

Under Omohundro's model of intelligent systems, these two approaches converge. As they do so, the reward signal of reinforcement learning and the concept of expected utility also converge. In other words, it is rather inappropriate to emphasize the differences between these two systems as though they were fundamental.

There are differences - but they are rather superficial. For example, there is often a happiness "set point" - whereas that concept is typically more elusive for an expected utility maximizer. However, the analogies between the concepts run deep: an agent maximising its happiness is doing something fundamentally similar to an agent maximising its expected utility. That becomes obvious if you substitute "happiness" for "expected utility".

In the case of real organisms, that substitution is doubly appropriate - because of evolution. The "happiness" function is not an arbitrarily chosen one - it is created in such a way that it converges closely on a function that favours behaviour resulting in increased expected ancestral representation. So, happiness gets an "expectation" of future events built into it automatically by the evolutionary process.
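
A small sketch of the structural similarity being claimed (my own framing - the function choose and its parameters are illustrative, not drawn from Omohundro or the post): the same action-selection rule works whether the scalar being maximised is a learned "happiness"/reward estimate or a declared utility function.

    # The maximisation step is identical; only the origin of `value` differs.
    def choose(actions, outcomes, prob, value):
        # Pick the action with the highest expected value of `value`.
        return max(actions, key=lambda a: sum(prob(o, a) * value(o) for o in outcomes))

    # `value` could be a learned reward/"happiness" estimate or a declared
    # utility function - the selection rule above does not care which.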

Comment by Tim_Tyler on BHTV: de Grey and Yudkowsky · 2009-01-18T11:43:55.000Z · LW · GW

Here is Aubrey on the topic of cryonics.

Comment by Tim_Tyler on Eutopia is Scary · 2009-01-12T20:11:28.000Z · LW · GW

Dictionaries disagree with Ferris - e.g.:

"Happiness [...] antonym: sadness" - Encarta

"Boredom" makes a terrible opposite of "happiness". What is the opposite of boredom? Something interesting, to be sure, but many more things than just happiness fit that description.

Comment by Tim_Tyler on Serious Stories · 2009-01-10T11:06:19.000Z · LW · GW

David Pearce has written extensively on the topic of the elimination of suffering - e.g. see: THE ABOLITIONIST PROJECT and Paradise Engineering.

Comment by Tim_Tyler on Living By Your Own Strength · 2009-01-08T21:07:21.000Z · LW · GW

Eliezer, I think you have somehow gotten very confused about the topic of my now-deleted post.

That post was entirely about cultural inheritance - contained absolutely nothing about sexual selection.

Please don't delete my posts - unless you have a good reason for doing so.

Comment by Tim_Tyler on Living By Your Own Strength · 2009-01-08T20:12:17.000Z · LW · GW

[Deleted. Tim, you've been requested to stop talking about your views on sexual selection here. --EY]

Comment by Tim_Tyler on Emotional Involvement · 2009-01-07T20:13:52.000Z · LW · GW

As I recall, Arnold's character faced pretty much this dilemma in Total Recall.

There's a broadly-similar episode of Buffy the Vampire Slayer.

Both characters wind up going on with their mission of saving the planet.

Comment by Tim_Tyler on Emotional Involvement · 2009-01-07T16:59:14.000Z · LW · GW

General Optimizer, you seem like a prospect for responding to this question: "in the interests of transparency, would anyone else like to share what they think their utility function is?"

Comment by Tim_Tyler on Changing Emotions · 2009-01-05T18:52:10.000Z · LW · GW

Intelligent machines will not really be built "from scratch" because augmentation of human intelligence by machines makes use of all the same technology as is present in straight machine intelligence projects, plus a human brain. Those projects have the advantage of being competitive with humans out of the box - and they interact synergistically with traditional machine intelligence projects. For details see my intelligence augmentation video/essay.

The thing that doesn't make much sense is building directly on the human brain's wetware with more of the same. Such projects are typically banned at the moment - and face all kinds of technical problems anyway.

Comment by Tim_Tyler on Growing Up is Hard · 2009-01-04T19:25:29.000Z · LW · GW
It seems like a general property of an intelligent system that it can't know everything about how it would react to everything. That falls out of the halting theorem (and for that matter Gödel's first incompleteness theorem) fairly directly.

Er, no, it doesn't.

Comment by Tim_Tyler on Growing Up is Hard · 2009-01-04T14:27:16.000Z · LW · GW

Robin, it sounds as though you are thinking about the changes that could be made after brain digitalisation.

That seems like a pretty different topic to me. Once you have things in a digital medium, it is indeed much easier to make changes - even though you are still dealing with a nightmarish mess of hacked-together spaghetti code.

Comment by Tim_Tyler on Growing Up is Hard · 2009-01-04T11:39:27.000Z · LW · GW
This is one reason to be wary of, say, cholinergic memory enhancers: if they have no downsides, why doesn't the brain produce more acetylcholine already?

There's considerable scope for the answer to this question being: "because of resource costs". Resource costs for nutrients today are radically different from those in the environment of our ancestors.

We are not designed for our parts to be upgraded. Our parts are adapted to work exactly as they are, in their current context, every part tested in a regime of the other parts being the way they are.

That's true - but things are not quite as bad as that makes it sound. Evolution is concerned with things like modularity and evolvability. Those contribute to the modularity of our internal organs - and that helps explain why things like kidney transplants work. Evolution didn't plan for organ transplant operations - but it did arrange things in a modular fashion. Modularity has other benefits - and ease of upgrading and replacement is a side effect.

People probably broke in the ancestral environment too. Organisms are simply fragile, and most fail to survive and reproduce.

Another good popular book on the evolution of intelligence is "The Runaway Brain". I liked it, anyway. I also have time for Sue Blackmore's exposition on the topic, in "The Meme Machine".

"Hm... to get from a chimpanzee to a human... you enlarge the frontal cortex... so if we enlarge it even further..." The road to +Human is not that simple.

Well, we could do that. Caesarean sections, nutrients, drugs, brain growth factor gene therapy, synthetic skulls, brains-in-vats - and so on.

It would probably only add a year or so onto the human expiration date, but it might be worth doing anyway - since the longer humans remain competitive for, the better the chances of a smooth transition. The main problem I see is the "yuck" factor - people don't like looking closely at that path.

Comment by Tim_Tyler on The Uses of Fun (Theory) · 2009-01-03T16:55:44.000Z · LW · GW

My guess is that it's a representation of my position on sexual selection and cultural evolution. I may still be banned from discussing this subject - and anyway, it seems off-topic on this thread, so I won't go into details.

If this hypothesis about the comment is correct, the main link that I can see would be: things that Eliezer and Tim disagree about.

Comment by Tim_Tyler on The Uses of Fun (Theory) · 2009-01-02T23:46:57.000Z · LW · GW

Well, that is so vague as to hardly be worth the trouble of responding to - but I will say that I do hope you were not thinking of referring me here.

However, I should perhaps add that I overspoke. I did not literally mean "any sufficiently-powerful optimisation process". Only that such things are natural tendencies - ones that tend to be produced unless you actively wire things into the utility function to prevent their manifestation.

Comment by Tim_Tyler on The Uses of Fun (Theory) · 2009-01-02T21:02:57.000Z · LW · GW
Going into the details of Fun Theory helps you see that eudaimonia is actually complicated - that there are a lot of properties necessary for a mind to lead a worthwhile existence. Which helps you appreciate just how worthless a galaxy would end up looking (with extremely high probability) if it was optimized by something with a utility function rolled up at random.

Something with a utility function "rolled at random" typically does not "optimise the universe". Rather it dies out. Of those agents with utility functions that do actually spread themselves throughout the universe, it is not remotely obvious that most of them are "worthless" or "uninteresting" - unless you choose to define the term "worth" so that this is true, for some reason.

Indeed, rather the opposite - since such agents would construct galactic-scale civilisations, they would probably be highly interesting and valuable instances of living systems in the universal community.

Complex challenges? Novelty? Individualism? Self-awareness? Experienced happiness? A paperclip maximizer cares not about these things.

Sure it would: as proximate goals. Animals are expected gene-fitness maximisers. Expected gene-fitness is not somehow intrinsically more humane than expected paperclip number. Both have about the same chance of leading to the things you mentioned being proximate goals.

Novelty-seeking and self-awareness are things you get out of any sufficiently-powerful optimisation process - just as such processes all develop fusion, space travel, nanotechnology - and so on.

Comment by Tim_Tyler on Dunbar's Function · 2008-12-31T11:01:35.000Z · LW · GW
One of the primary principles of evolutionary psychology is that "Our modern skulls house a stone age mind"

Our minds are made by (essentially) stone-age genes, but they import up-to-date memes - and are a product of influences from both sources.

So: our minds are actually pretty radically different from stone-age minds - because they have downloaded and are running a very different set of brain-software routines. This influence of memes explains why modern society is so different from the societies present in the stone age.

Comment by Tim_Tyler on Nonsentient Optimizers · 2008-12-30T18:39:53.000Z · LW · GW
And that, to this end, we would like to know what is or isn't a person - or at least have a predicate that returns 1 for all people and could return 0 or 1 for anything that isn't a person, so that, if the predicate returns 0, we know we have a definite nonperson on our hands.

So: define such a function - as is done by the world's legal systems. Of course, in a post-human era, it probably won't "carve nature at the joints" much better than the "how many hairs make a beard" function manages to.

Comment by Tim_Tyler on Amputation of Destiny · 2008-12-29T21:07:22.000Z · LW · GW
And I will not, if at all possible, give any other human being the least cause to think that someone else might spark a better Singularity. I can make no promises upon the future, but I will at least not close off desirable avenues through my own actions.

A possible problem here is that your demanding entry requirements may well - with a substantial probability - allow others with lower standards to create a superintelligence before you do.

So: since you seem to think that would be pretty bad, and since you say you are a consequentialist - and believe in the greater good - you should probably act to stop them - e.g. by stepping up your own efforts to get there first, bringing the target nearer to you.

Comment by Tim_Tyler on Nonsentient Optimizers · 2008-12-28T20:04:18.000Z · LW · GW
CEV runs once on a collection of existing humans then overwrites itself [...]

Ah. My objection doesn't apply, then. It's better than I had thought.

Comment by Tim_Tyler on Nonsentient Optimizers · 2008-12-28T11:29:18.000Z · LW · GW
You've already said the friendly AI problem is terribly hard, and there's a large chance we'll fail to solve it in time. Why then do you keep adding these extra minor conditions on what it means to be "friendly", making your design task all that harder?

While we are on the topic, the problem I see in this area is not that friendliness has too many extra conditions appended to it. It's that the concept is so vague and amorphous that only Yudkowsky seems to know what it means.

When I last asked what it meant, I was pointed to the CEV document - which seems like a rambling word salad to me - I have great difficulty in taking it seriously. The most glaring problem with the document - from my point of view - is that it assumes that everyone knows what a "human" is. That might be obvious today, but in the future, things could well get a lot more blurry - especially if it is decreed that only "humans" have a say in the proposed future. Do uploads count? What about cyborgs? - and so on.

If it is proposed that everything in the future revolves around "humans" (until the "humans" say otherwise) then - apart from the whole issue of whether that is a good idea in the first place - we (or at least the proposed AI) would first need to know what a "human" is.

Comment by Tim_Tyler on Harmful Options · 2008-12-28T10:44:11.000Z · LW · GW

FWIW, Phil's point there seems to be perfectly reasonable - and not in need of correction: if a moral system tells you to do what you were going to do anyway, it isn't going to be doing much work.

Moral systems usually tell you not to do things that you would otherwise be inclined to do - on the grounds that they are bad. Common examples include taking things you want - and having sex.

Comment by Tim_Tyler on Nonperson Predicates · 2008-12-27T14:02:30.000Z · LW · GW
If you would balk at killing a million people with a nuclear weapon, you should balk at this.

The main problem with death is that valuable things get lost.

Once people are digital, this problem tends to go away - since you can relatively easily scan their brains - and preserve anything of genuine value.

In summary, I don't see why this issue would be much of a problem.

Comment by Tim_Tyler on Harmful Options · 2008-12-25T21:31:31.000Z · LW · GW

Re: Barry Schwartz's The Paradox of Choice [...] talks about how offering people more choices can make them less happy. A simple intuition says this shouldn't ought to happen to rational agents: If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn't do worse by having more options. The more available actions you have, the more powerful you become - that's how it should ought to work.

This makes no sense to me. A blind choice between lady and tiger is preferable to a blind choice between a lady and two tigers. Problems arise when you don't know that the other choices are worse. So having more choices can be really bad - in a way that has nothing to do with the extra cycles burned in evaluating them.
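
To spell out the arithmetic (assigning utility 1 to the lady and 0 to a tiger, purely for illustration): with one lady and one tiger, a blind choice has expected utility 1/2; add a second tiger and it drops to 1/3. The extra option makes the chooser strictly worse off - and that has nothing to do with evaluation costs.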

Comment by Tim_Tyler on Devil's Offers · 2008-12-25T21:25:37.000Z · LW · GW

Re: Vassar advocates that rationalists should learn to lie, I advocate that rationalists should practice telling the truth more effectively, and we're still having that argument.

Uh huh. What are the goals of these hypothetical rational agents?

Comment by Tim_Tyler on Prolegomena to a Theory of Fun · 2008-12-20T15:47:39.000Z · LW · GW
Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral, and above all a failure of imagination, to talk about human-level minds still running around the day after the Singularity.

For me, the very concept of "the day after the Singularity" is so far out - and off the rails - that I would hardly describe it as a failure of the imagination.

The idea seems more likely to be the result of an overactive, over-stimulated imagination - or perhaps the wild imaginings of some science fiction author.