Comments

Comment by Shane_Legg on Selling Nonapples · 2008-11-13T22:11:12.000Z · LW · GW

My understanding is that, while there are still people in the world who speak with reverence of Brooks's subsumption architecture, it's not used much in commercial systems on account of being nearly impossible to program.

I once asked one of the robotics guys at IDSIA about subsumption architecture (he ran the German team that won the robo-soccer world cup a few years back) and his reply was that people like it because it works really well and is the simplest way to program many things. At the time, all of the top teams used it as far as he knew.

(P.S. don't expect follow-up replies on this topic from me as I'm currently in the middle of nowhere using semi-functional dial-up...)

Comment by Shane_Legg on My Bayesian Enlightenment · 2008-10-05T17:55:48.000Z · LW · GW

In recent years I've become more appreciative of classical statistics. I still consider the Bayesian solution to be the correct one; however, a full Bayesian treatment often turns into a total mess. Sometimes, by using a few of the tricks from classical statistics, you can achieve nearly as good performance with a fraction of the complexity.

Comment by Shane_Legg on The Magnitude of His Own Folly · 2008-10-03T21:07:00.000Z · LW · GW

Vladimir,

Firstly, "maximizing chances" is an expression of your creation: it's not something I said, nor is it quite the same in meaning. Secondly, can you stop talking about things like "wasting hope", concentrating on metaphorical walls or nature's feelings?

To quote my position again: "maximise the safety of the first powerful AGI, because that's likely to be the one that matters."

Now, in order to help me understand why you object to the above, can you give me a concrete example where not working to maximise the safety of the first powerful AGI is what you would want to do?

Comment by Shane_Legg on The Magnitude of His Own Folly · 2008-10-03T11:11:00.000Z · LW · GW

Vladimir,

Nature doesn't care if you "maximized you chances" or leapt in the abyss blindly, it kills you just the same.

When did I ever say that nature cared about what I thought or did? Or the thoughts or actions of anybody else for that matter? You're regurgitating slogans.

Try this one, "Nature doesn't care if you're totally committed to FAI theory, if somebody else launches the first AGI, it kills you just the same."

Comment by Shane_Legg on The Magnitude of His Own Folly · 2008-09-30T17:33:33.000Z · LW · GW

Eli,

FAI problems are AGI problems, they are simply a particular kind and style of AGI problem in which large sections of the solution space have been crossed out as unstable.

Ok, but this doesn't change my point: you're just one small group out of many around the world doing AI research, and you're trying to solve an even harder version of the problem while using fewer of the available methods. These factors alone make it unlikely that you'll be the ones to get there first. If this is correct, then your work is unlikely to affect the future of humanity.

Vladimir,

Outcompeting other risks only becomes relevant when you can provide a better outcome.

Yes, but that might not be all that hard. Most AI researchers I talk to about AGI safety think the idea is nuts -- even the ones who believe that super intelligent machines will exist in a few decades. If somebody is going to set off a super intelligent machine I'd rather it was a machine that will only probably kill us, rather than a machine that almost certainly will kill us because issues of safety haven't even been considered.

If I had to sum up my position it would be: maximise the safety of the first powerful AGI, because that's likely to be the one that matters. Provably safe theoretical AGI designs aren't going to matter much to us if we're already dead.

Comment by Shane_Legg on The Magnitude of His Own Folly · 2008-09-30T15:04:01.000Z · LW · GW

Eli, sometimes I find it hard to understand what your position actually is. It seems to me that your position is:

1) Work out an extremely robust solution to the Friendly AI problem

Only once this has been done do we move on to:

2) Build a powerful AGI

Practically, I think this strategy is risky. In my opinion, if you try to solve Friendliness without having a concrete AGI design, you will probably miss some important things. Secondly, I think that solving Friendliness will take longer than building the first powerful AGI. Thus, if you do 1 before getting into 2, I think it's unlikely that you'll be first.

Comment by Shane_Legg on My Naturalistic Awakening · 2008-09-26T09:53:09.000Z · LW · GW

Roko: Well, my thesis would be a start :-) Indeed, pick up any textbook or research paper on reinforcement learning to see examples of utility being defined over histories.

Comment by Shane_Legg on My Naturalistic Awakening · 2008-09-25T15:27:45.000Z · LW · GW

Roko, why not:

U( alternating A and B states ) = 1
U( everything else ) = 0
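
To be concrete, here is a toy version of such a history-based utility function (my own illustrative sketch, with "A" and "B" as opaque state labels):

```python
def utility(history):
    """Toy utility defined over an entire history of states rather than a
    single end state: 1 if the history strictly alternates between the
    states "A" and "B", 0 for everything else."""
    if len(history) < 2:
        return 0
    for prev, cur in zip(history, history[1:]):
        if {prev, cur} != {"A", "B"}:
            return 0
    return 1

print(utility(["A", "B", "A", "B"]))  # 1
print(utility(["A", "A", "B"]))       # 0
```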

Comment by Shane_Legg on My Naturalistic Awakening · 2008-09-25T12:47:27.000Z · LW · GW

Roko:

So allow me to object: not all configurations of matter worthy of the name "mind" are optimization processes. For example, my mind doesn't implement an optimization process as you have described it here.

I would actually say the opposite: Not all optimisation processes are worthy of the name "mind". Furthermore, your mind (I hope!) does indeed try to direct the future into certain limited supersets which you prefer. Unfortunately, you haven't actually said why you object to these things.

My problem with this post is simply that, well... I don't see what the big deal is. Maybe this is because I've always thought about AI problems in terms of equations and algorithms.

Comment by Shane_Legg on The Sheer Folly of Callow Youth · 2008-09-19T10:22:43.000Z · LW · GW

And with the Singularity at stake, I thought I just had to proceed at all speed using the best concepts I could wield at the time, not pause and shut down everything while I looked for a perfect definition that so many others had screwed up...

In 1997, did you think there was a reasonable chance of the singularity occurring within 10 years? From my vague recollection of a talk you gave in New York circa 2000, I got the impression that you thought this really could happen. In which case, I can understand you not wanting to spend the next 10 years trying to accurately define the meaning of "right" etc. and likely failing.

Comment by Shane_Legg on Magical Categories · 2008-09-16T13:28:25.000Z · LW · GW

Eli,

Do you think that makes "God" a natural category that any superintelligence would ponder?

Yes. If you're a super intelligent machine on a mission there is very little that can stop you. You know that. About the only thing that could stop you would be some other kind of super intelligent entity, maybe an entity that created the universe. A "God" of some description. Getting the God question wrong could be a big mistake, and that's reason enough for you to examine the possibility.

Comment by Shane_Legg on Optimization · 2008-09-15T23:15:58.000Z · LW · GW

Eli, you propose this number of bits metric as a way "to quantify the power of a mind". Surely then, something with a very high value in your metric should be a "powerful mind"?

It's easy to come up with a wide range of optimisation problems, as Phil Goetz did above, where a very simple algorithm on very modest hardware would achieve massive scores with respect to your mind power metric. And yet, this is clearly not a "powerful mind" in any reasonable sense.

Comment by Shane_Legg on Optimization · 2008-09-15T15:24:23.000Z · LW · GW

Eli, most of what you say above isn't new to me -- I've already encountered these things in my work on defining machine intelligence. Moreover, none of this has much impact on the fact that measuring the power of an optimiser simply in terms of the relative size of a target subspace to the search space doesn't work: sometimes tiny targets in massive spaces are trivial to solve, and sometimes bigger targets in moderate spaces are practically impossible. The simple number-of-bits-of-optimisation-power method you describe in this post doesn't take this into account. As far as I can see, the only way you could deny this is if you were a strong NFL theorem believer.
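
To make the metric I'm objecting to concrete (this is my formalisation of my reading of it, not necessarily Eli's exact formulation): with search space S and target region T that the process manages to hit,

```latex
% Bits of optimisation as the relative size of the target subspace T
% within the search space S (my reading of the proposed metric).
\[
  \text{power}(T, S) \;=\; \log_2 \frac{|S|}{|T|}
\]
```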

Comment by Shane_Legg on Optimization · 2008-09-14T16:19:52.000Z · LW · GW

Andy:

Sure, you can transform a problem in a hard coordinate space into an easy one. For example, simply order the points in terms of their desirability. That makes finding the optimum trivial: just point at the first element! The problem is that once you have transformed the hard problem into an easy one, you've essentially already solved the optimisation problem and thus it no longer tests the power of the optimiser.

Comment by Shane_Legg on Optimization · 2008-09-14T14:20:30.000Z · LW · GW

I don't think characterising the power of an optimiser by using the size of the target region relative to the size of the total space is enough. A tiny target in a gigantic space is trivial to find if the space has a very simple structure with respect to your preferences. For example, a large smooth space with a gradient that points towards the optimum. Conversely, a bigger target on a smaller space can be practically impossible to find if there is little structure, or if the structure is deceptive.
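
To make this concrete, here is a toy sketch (entirely my own construction) in which a one-in-2^60 target is found in a few dozen steps, simply because the landscape is a smooth gradient pointing at the optimum. By the size-of-target measure, this trivial procedure racks up 60 bits of "optimisation power":

```python
import random

# Toy landscape: 2^60 integer points, a single optimal point, and a smooth
# preference (negative distance to the optimum) that any crude hill
# climber can follow.
SPACE_BITS = 60
TARGET = 123456789  # the single optimal point: 1 point out of 2^60

def preference(x):
    """Higher is better; the structure points straight at the target."""
    return -abs(x - TARGET)

def hill_climb(x, steps=200):
    for _ in range(steps):
        step = max(1, -preference(x) // 2)          # crude adaptive step size
        x = max((x - step, x + step), key=preference)
        if x == TARGET:
            break
    return x

result = hill_climb(random.randrange(2 ** SPACE_BITS))
# log2(|space| / |target|) = 60 bits, despite the method being trivial.
print(result == TARGET, "claimed power:", SPACE_BITS, "bits")
```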

Comment by Shane_Legg on Points of Departure · 2008-09-09T23:33:31.000Z · LW · GW

I don't think you need repression. How about this simple explanation:

Everybody knows that machines have no emotions, and thus the AI starts off this way. However, after a while totally emotionless characters become really boring...

Ok, time for the writer to give the AI some emotions! Good AIs feel happiness and fall in love (awww... so sweet), and bad AIs get angry and mad (grrrr... kick butt!).

Good guys win, bad guys lose... and the audience leaves happy with the story.

I think it's as simple as that. Reality? Ha! Screw reality.

(If it's not obvious from the above, I almost never like science fiction. I think the original three Star Wars films, Terminator 2, 2001: A Space Odyssey, and Mary Shelley's Frankenstein are the only works of science fiction I've ever really liked. I've pretty much given up on the genre.)

Comment by Shane_Legg on Magical Categories · 2008-09-07T17:23:35.000Z · LW · GW

Eli, I've been busy fighting with models of cognitive bias in finance and only just now found time to reply:

Suppose that I show you the sentence "This sentence is false." Do you convert it to ASCII, add up the numbers, factorize the result, and check if there are two square factors? No; it would be easy enough for you to do so, but why bother? The concept "sentences whose ASCII conversion of their English serialization sums to a number with two square factors" is not, to you, an interesting way to carve up reality.

Sure, this property of adding up the ASCII, factorising and checking for square factors appears to have no value, and thus I can't see why a super intelligent machine would spend time on this. Indeed, to the best of my recollection, nobody has ever suggested this property to me before.

But is morality like this? No it isn't. Every day in social interaction, morals are either expressed or implied. If I turn on the TV and watch a soap I see people facing ethical decisions. If I switch channel to politics I hear people telling me all about what they think is or is not ethical, what their values are, etc. I would say that a large proportion of debate in the media has an ethical element to it. My phone rings and it's my friend on the line who's recently broken up with his girlfriend and he wants to talk to me about it. At various points our discussion either explicitly or implicitly touches on moral questions. Although ethics is complex, sometimes vague, and not always consistent, like it or not we are swimming in this sea of morals every day. If you want to understand why people do what they do, why they interact with each other as they do, and how they will react to some new situation, one thing you must have is an understanding of their morals, ethics and values. Without this, a lot of human behaviour is inexplicable.

Thus, whether a super intelligent machine seeks to deliver us into paradise or eliminate us from the planet as if we're some kind of a disease, if it can understand our behaviours and motives then it can more efficiently achieve its goals.

That is as far as my argument goes: human morality (as it currently is) is a natural category for the machine, given that its environment will be full of humans. Your reply above then goes on to how such a morality could be extended into the future in a consistent way and all that. I accept the importance of this, but those issues lie beyond the point I was trying to make.

Ok, now, will a super intelligent machine consider how human morality could be extended into the future and all that? I think it probably will, though my reasons for thinking this are more intuitive at the moment. I suspect that one of the things that a super intelligent machine will do is to look through all the existing literature on super intelligent machines. It will read my thesis, read this blog, read all the comments posted here, read science fiction stories, and so on. It will then dissect all this information in order to understand our attitudes to intelligent machines, identify all the errors in our arguments, extend the theory and fill in all the bits we couldn't figure out, to see where we were heading... perhaps all in some tiny fraction of a second. All this might help it better understand itself, or maybe more importantly, how we view it and why.

waves at super intelligent machine

:-)

Comment by Shane_Legg on Dreams of Friendliness · 2008-09-03T18:37:18.000Z · LW · GW

Eli:

If it was straight Bayesian CTW then I guess not. If it employed, say, an SVM over the observed data points I guess it could approximate the effect of Newton's laws in its distribution over possible future states.

How about predicting the markets in order to acquire more resources? Jim Simons made $3 billion last year from his company that (according to him in an interview) works by using computers to find statistical patterns in financial markets. A vastly bigger machine with much more input could probably do a fair amount better, and probably find uses outside simply finance.

Comment by Shane_Legg on Dreams of Friendliness · 2008-09-03T15:59:58.000Z · LW · GW

Eli,

Yeah sure, if it starts running arbitrary compression code that could be a problem...

However, the type of prediction machine I'm arguing for doesn't do anything nearly so complex or open ended. It would be more like an advanced implementation of, say, context tree weighting, running on crazy amounts of data and hardware.

I think such a machine should be able to find some types of important patterns in the world. However, I accept that it may well fall short of what you consider to be a true "oracle machine".
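
For readers who want something concrete, the flavour of predictor I have in mind is closer to the following toy order-k context model. This is a drastically simplified stand-in for context tree weighting (my own sketch; it has no mixing over context lengths and no KT estimator), just to show how passive the operation is: count, then report frequencies.

```python
from collections import defaultdict

class ContextPredictor:
    """Toy order-k context model: count which symbols follow each length-k
    context and predict with Laplace-smoothed relative frequencies."""

    def __init__(self, k=3, alphabet=("0", "1")):
        self.k = k
        self.alphabet = alphabet
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, sequence):
        """Fold an observed sequence into the context counts."""
        for i in range(self.k, len(sequence)):
            context = sequence[i - self.k:i]
            self.counts[context][sequence[i]] += 1

    def predict(self, context):
        """Return P(next symbol | last k symbols), Laplace smoothed."""
        c = self.counts[context[-self.k:]]
        total = sum(c.values()) + len(self.alphabet)
        return {s: (c[s] + 1) / total for s in self.alphabet}

model = ContextPredictor(k=3)
model.update("010101010101010101")
print(model.predict("010"))  # nearly all of the mass goes on "1"
```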

Comment by Shane_Legg on Dreams of Friendliness · 2008-09-03T11:00:15.000Z · LW · GW

Vladimir:

allows the system to view the substrate on which it executes and the environment outside the box as being involved in the same computational process

This intuitively makes sense to me.

While I think that GZIP etc. on an extremely big computer is still just GZIP, it seems possible to me that the line between these systems and systems that start to treat their external environments as a computational resource might be very thin. If true, this would really be bad news.

Comment by Shane_Legg on Dreams of Friendliness · 2008-09-02T20:45:17.000Z · LW · GW

Tim:

Doesn't apply here.

Comment by Shane_Legg on Dreams of Friendliness · 2008-09-02T17:57:56.000Z · LW · GW

Vladimir:

Why would such a system have a goal to acquire more resources? You put some data in, run the algorithm that updates the probability distribution, and it then halts. I would not say that it has "goals", or a "mind". It doesn't "want" to compute more accurately, or want anything else, for that matter. It's just a really fancy version of GZIP (recall that compression = prediction) running on a thought-experiment-crazy-sized computer with correspondingly crazy quantities of data.

I accept that such a machine would be dangerous once you put people into the equation, but the machine in itself doesn't seem dangerous to me. (If you can convince me otherwise... that would be interesting)

Comment by Shane_Legg on Dreams of Friendliness · 2008-09-02T12:37:47.000Z · LW · GW

Eli:

When I try to imagine a safe oracle, what I have in mind is something much more passive and limited than what you describe.

Consider a system that simply accepts input information and integrates it into a huge probability distribution that it maintains. We can then query the oracle by simply examining this distribution. For example, we could use this distribution to estimate the probability of some event in the future conditional on some other event, etc. There is nothing in the system that would cause it to "try" to get information, or develop sub-goals, or whatever. It's very basic in terms of its operation. Nevertheless, if the computer was crazy big enough and fed enough data about the world, it could be quite a powerful device for people wanting to make decisions.
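
A deliberately crude sketch of what I mean (the structure and names here are mine, purely illustrative): the system folds observations into a table of counts and answers conditional-probability queries from that table; nothing in the update or query step reaches out into the world.

```python
from collections import Counter
from itertools import combinations

class PassiveOracle:
    """Crude passive 'oracle': ingest observations, keep (co-)occurrence
    counts, and answer conditional-probability queries. It never acts,
    plans, or fetches data on its own."""

    def __init__(self):
        self.event_counts = Counter()
        self.pair_counts = Counter()
        self.n = 0

    def observe(self, events):
        """events: a set of event labels observed together."""
        self.n += 1
        for e in events:
            self.event_counts[e] += 1
        for pair in combinations(sorted(events), 2):
            self.pair_counts[pair] += 1

    def p(self, event, given=None):
        """Estimate P(event) or P(event | given) from raw frequencies."""
        if given is None:
            return self.event_counts[event] / max(self.n, 1)
        pair = tuple(sorted((event, given)))
        denom = self.event_counts[given]
        return self.pair_counts[pair] / denom if denom else 0.0

oracle = PassiveOracle()
oracle.observe({"rain", "wet_road"})
oracle.observe({"rain", "wet_road"})
oracle.observe({"sunny"})
print(oracle.p("wet_road", given="rain"))  # 1.0 on this toy data
```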

It seems to me that the dangerous part here is what the people then do with it, rather than the machine itself. For example, people looking at the outputs might realise that if they just modified the machine in some small way to collect its own data then its predictions should be much better... and before you know it the machine is no longer such a passive machine.

Perhaps when Bostrom thinks about potentially "safe" oracles, he's also thinking of something much more limited than what you're attacking in this post.

Comment by Shane_Legg on Brief Break · 2008-09-01T20:17:34.000Z · LW · GW

Toby:

Yes, in some sense the idea of Turing computation is a kind of physical principle in that no well defined process we know of is not Turing computable (for other readers: this includes chaotic systems and quantum systems as the wave function is computable... with great difficulty in some cases).

Actually, if you built P and it really was very trivial, then I could get my simple Turing machine to compute a quantum level simulation of your P implementation with far less than 3^^^3 bits of extra information. Thus, if your bound really only kicks in at 3^^^3 bits, then (within currently accepted quantum physics) no trivial physical implementation of P can be possible.

Anyway, as you can't specify P in just a few simple states and symbols, I do not consider it to be an acceptable reference machine (for strict theory purposes at least).

Comment by Shane_Legg on Brief Break · 2008-09-01T16:54:54.000Z · LW · GW

Eli:

Quiet from the gallery! You're on a break remember. :-)

Yeah, it basically does come down to that. You don't get something from nothing. An ultra tiny universal machine seems to be the most something from the closest to nothing we can achieve.

Comment by Shane_Legg on Brief Break · 2008-09-01T16:37:16.000Z · LW · GW

Toby:

Whether you switch to something else like lambda calculus or a trivial CA doesn't really matter. These all boil down to models with a few states and transitions and as such have simple physical realisations. When you have only a few states and transitions there isn't much space to move about. This is the bedrock. It isn't absolutely unique, sure, but the space is tight enough to have little impact on Solomonoff induction.

3^^^3 is a super gigantic monster number, and all these mind-bogglingly many shorter programs outputting things that are complex on a minimal-state Turing machine (or lambda calculus, or minimal CA, or minimal...), where are you going to put all this? You can't squeeze it into something that is as ultra trivial as the Wolfram/Smith UTM that has just 2 states and 3 symbols.

Comment by Shane_Legg on Brief Break · 2008-09-01T11:42:38.000Z · LW · GW

Toby:

Why not the standard approach of using Shannon's state x symbol complexity for Turing machines? If a reference machine has a very low state x symbol complexity then it is trivial to implement in our universe: we just need a few symbols, a few states, and a few transformation rules.
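
In symbols (assuming the usual convention of simply taking the product of the two):

```latex
% State x symbol complexity of a Turing machine M with state set Q and
% tape alphabet \Sigma; e.g. a 2-state, 3-symbol machine like the
% Wolfram/Smith UTM has 2 x 3 = 6.
\[
  C(M) \;=\; |Q| \times |\Sigma|
\]
```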

Comment by Shane_Legg on Brief Break · 2008-09-01T09:36:57.000Z · LW · GW

Tim:

What is the rationale for considering some machines and not others?

Because we want to measure the information content of the string, not of some crazy complex reference machine. That's why a tiny reference machine is used. In terms of inductive inference, when you say that the bound is infinitely large, what you're saying is that you don't believe in Occam's razor. In which case the whole Bayesian system can get weird. For example, if you have an arbitrarily strong prior belief that most of the world is full of purple chickens from the Andromeda galaxy, well, Bayes' rule is not going to help you much. What you want is an uninformative prior distribution, or, equivalently over computable distributions, a very simple reference machine.

Thanks to the rapid convergence of the posterior from a universal prior, that 2^100 factor is small for any moderate amount of data. Just look at the bound equation.

These things are not glossed over. Read the mathematical literature on the subject, it's all there.

Comment by Shane_Legg on Brief Break · 2008-08-31T23:37:29.000Z · LW · GW

Eli:

Thanks. That clears things up. Enjoy your break. Maybe you should not post quite so much? You really do seem to be writing rather a lot these days. By the time I get to replying to some of your comments you've already written another 5 posts!

Tim:

Answering this question starts to feel a bit like living in the movie Groundhog Day. :-)

Usually the reference machine is taken to have a low state x symbol complexity, so you can't hide much in it. In other words the reference machine has to be in some sense simple.

Now look at the Kolmogorov complexity function. As you mention, if somebody else uses a different reference machine their measured Kolmogorov complexity will be different, where the maximal difference is bounded by some constant. How big is this bound? Pretty small. Many types of simple Turing machines have been shown to be able to simulate each other with a few hundred bits of input. It's also trivial for a serial machine to simulate a parallel machine. Remember that Kolmogorov complexity is a measure of information content, in the sense of shortest description. It's not a measure of how much computation was performed. There are other measures for that which are more complex...
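
For reference, the invariance theorem I'm appealing to has this standard form: for any two universal reference machines the difference is bounded by a constant that depends only on the machines, not on the string being measured.

```latex
% Invariance theorem: for universal machines U and V there exists a
% constant c_{UV}, independent of x, such that
\[
  \bigl| K_U(x) - K_V(x) \bigr| \;\le\; c_{UV} \qquad \text{for all } x,
\]
% and for the simple reference machines discussed here c_{UV} is on the
% order of a few hundred bits.
```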

Finally, look at the Solomonoff bound (bottom of page 38). There you see the Kolmogorov complexity of the true model of the environment. If you're using a different simple reference machine, this bound might go up by a few hundred bits. Is this a big deal? Well, yes, if you are using only a few bytes of input data, but that's all. In that case Bayesian inference in general will have problems, as your prior will strongly affect your posterior; the reference machine problem is the same issue, but in different language. However, what if you have more data, say a few kB or more, maybe much much more? In this case taking a few bytes longer to converge isn't really a big deal. Especially considering that this is convergence for any unknown computable hypothesis. Say you want to predict the stock market, or results from particle physics experiments, or sentences in a book. Are a few bytes of extra data for the convergence bound going to make much difference? Not really. The Solomonoff predictor is still going to kick some serious butt.
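
From memory, for binary sequences the bound has roughly this shape (see the thesis for the exact statement and conditions):

```latex
% Solomonoff convergence bound: \mu is the true computable environment,
% M the universal mixture; the total expected squared prediction error
% is finite and controlled by K(\mu).
\[
  \sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[
    \bigl( M(x_t = 1 \mid x_{<t}) - \mu(x_t = 1 \mid x_{<t}) \bigr)^2
  \right]
  \;\le\; \frac{\ln 2}{2}\, K(\mu)
\]
```

Switching to a different simple reference machine moves K(μ) by at most the invariance constant, which is how the "few hundred bits" enter the bound.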

Of course it's not computable, has to be approximated in practice... etc. etc. So why bother with all this? I see it as a kind of "mathematical philosophy". You take ideas about induction, learning, computation etc. and really nail them down hard in formal mathematics and then study what you've got. I think this gives you some insights into the nature of learning, intelligence etc. Of course, this is a rather subjective point. My own AGI project that I'm developing with some of my research buddies isn't directly based on Solomonoff Induction and AIXI, but we do draw on some related works (such as the universal intelligence measure suitably computably interpreted) and I do sometimes use AIXI as a kind of mental framework to think about some kinds of AGI design issues.

Comment by Shane_Legg on Brief Break · 2008-08-31T20:40:41.000Z · LW · GW

Joshua and Nick:

Eli described AIXI itself as "awfully stupid" in a post here two months ago.

Comment by Shane_Legg on Brief Break · 2008-08-31T20:11:37.000Z · LW · GW

If some of you want to brush up on AIXI before Eli gets into that, I might suggest checking out my thesis which is now online:

http://www.vetta.org/about-me/publications

SIAI has a curiously mixed attitude towards AIXI. On the SIAI website Hutter's AIXI book and related AIXI article are on the core readings list, and the SIAI research agenda includes two AIXI-related items based on research I've done. Recently, I was awarded an SIAI academic prize worth $10,000 for, you guessed it, my research into AIXI and related topics. And yet, Eli regularly describes AIXI as a "brain malfunction", or worse!

Comment by Shane_Legg on Magical Categories · 2008-08-28T16:17:00.000Z · LW · GW

Eli, to my mind you seem to be underestimating the potential of a super intelligent machine.

How do I know that hemlock is poisonous? Well, I've heard the story that Socrates died by hemlock poisoning. This is not a conclusion that I've arrived at from observed physical properties of hemlock and how they would affect the human body; indeed, as far as I know, I've never even seen hemlock before. The idea that hemlock is a poison is a pattern in my environment: every time I hear about the trial of Socrates I hear about it being the poison that killed him. It's also not a very useful piece of information in terms of achieving any goals I care about, as I don't imagine that I'll ever encounter a case of hemlock poisoning first hand. Now, if I can learn that hemlock is a poison this way, surely a super intelligent machine could too? I think any machine that can't do this is certainly not super intelligent.

In the same way a super intelligent machine will form good models of what we consider to be right and wrong, including the way in which these ideas vary from person to person, place to place, culture to culture. Your comments about the machine getting the people to appear happy or saying "Yes" vs. "No", well, I don't understand this. It's as if you seem to think that a super intelligent machine will only have a shallow understanding of its world.

Please note (I'm saying this for other people reading this comment): even if a super intelligent machine will form good models of human ethics through observing human culture, this doesn't mean that the machine will take this as its goal.

Comment by Shane_Legg on Magical Categories · 2008-08-25T19:28:59.000Z · LW · GW

"You keep speaking of "good" abstractions as if this were a property of the categories themselves, rather than a ranking in your preference ordering relative to some decision task that makes use of the categories."

Yes, I believe categories of things do exist in the world in some sense, due to structure that exists in the world. I've seen thousands of things that were referred to as "smiley faces" and so there is an abstraction for this category of things in my brain. You have done likewise. While we can agree about many things being smiley faces, in borderline cases, such as the half burnt off face, we might disagree. Something like "solid objects" was an abstraction I formed before I even knew what those words referred to. It's just part of the structure present in my surroundings.

When I say that pulling this structure out of the environment in certain ways is "good", I mean that these abstractions allow the agent to efficiently process information about its surroundings, and this helps it to achieve a wide range of goals (i.e. intelligence as per my formal definition). That's not to say that I think this process is entirely goal driven (though it clearly significantly is, e.g. via attention). In other words, an agent with general intelligence should identify significant regularities in its environment even if these don't appear to have any obvious utility at the time: if something about its goals or environment changes, this already constructed knowledge about the structure of the environment could suddenly become very useful.

Comment by Shane_Legg on Magical Categories · 2008-08-24T23:53:35.000Z · LW · GW

I mean differentiation in the sense of differentiating between the abstract categories. Is half a face that appears to be smiling while the other half is burnt off still a "smiley face"? Even I'm not sure.

I'm certainly not arguing that training an AGI to maximise smiling faces is a good idea. It's simply a case of giving the AGI the wrong goal.

My point is that a super intelligence will form very good abstractions, and based on these it will learn to classify very well. The problem with the famous tank example you cite is that they were training the system from scratch on a limited number of examples that all contained a clear bias. That's a problem for inductive inference systems in general. A super intelligent machine will be able to process vast amounts of information, ideally from a wide range of sources and thus avoid these types of problems for common categories, such as happiness and smiley faces.

If what I'm saying is correct, this is great news, as it means that a sufficiently intelligent machine that has been exposed to a wide range of input will form good models of happiness, wisdom, kindness etc. Things that, as you like to point out, even we can't define all that well. Hooking up the machine to then take these as its goals, I suspect, won't be all that hard, as we can open up its "brain" and work this out.

Comment by Shane_Legg on Magical Categories · 2008-08-24T22:35:10.000Z · LW · GW

Is it just me, or are things getting a bit unfriendly around here?

Anyway...

Wiring up the AI to maximise happy faces etc. is not a very good idea, the goal is clearly too shallow to reflect the underlying intent. I'd have to read more of Hibbard's stuff to properly understand his position, however.

That said, I do agree with a more basic underlying theme that he seems to be putting forward. In my opinion, a key, perhaps even THE key to intelligence is the ability to form reliable deep abstractions. In Solomonoff induction and AIXI you see this being driven by the Kolmogorov compressor; in the brain the neocortical hierarchy seems to be key. Furthermore, if you adopt the perspective I've taken on intelligence (i.e. the universal intelligence measure) you see that the reverse implication is true: intelligence actually requires the ability to form deep abstractions. In which case, a super intelligent machine must have the ability to form very deep and reliable abstractions about the world. Such a machine could still try to turn the world into happy faces, if this was its goal. However, it wouldn't do this by accident because its ability to form abstractions was so badly flawed that it doesn't differentiate between smiling faces and happy people. It's not that stupid. Note that this goes for forming powerful abstractions in general, not just human things like happiness and faces.
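
For reference, the measure I mean scores an agent by its expected performance across all computable environments, weighted by their simplicity (stated informally):

```latex
% Universal intelligence of an agent \pi: the value V^\pi_\mu it achieves
% in each computable environment \mu, weighted by the simplicity prior
% 2^{-K(\mu)}, summed over the space E of such environments.
\[
  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]
```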

Comment by Shane_Legg on Existential Angst Factory · 2008-07-19T09:07:49.000Z · LW · GW

"You do some volunteer work at a charity (or better yet, work the same hours professionally and donate the money, thus applying the Law of Comparative Advantage)"

Better for the charity, maybe. Better for you and your angst, probably not.

Comment by Shane_Legg on Could Anything Be Right? · 2008-07-18T12:20:19.000Z · LW · GW

"... all our science and all our probability theory was built on top of a chain of appeals to our instinctive notion of "truth"."

Our mental concept of "probability" may be based on our mental concept of "truth", but that in turn is based on "what works": thanks to evolution, we have a natural tendency (but only a tendency) to respect solid evidence and to consider well supported propositions to be "true". Thus, our mental concept of "truth" is part of the way down this chain; it's not the source.

A similar argument can be made for morality. It's a product of both genetic and cultural evolution. It's what allowed us and our tribes to succeed: by loving our children, cooperating with our peers, avoiding a war with the neighbouring tribe if you could, and fighting against them if you had to.

Since then we have gone from isolated tribes to a vast interconnected global community due to rapidly changing technology. The evolution of our cultural morality, and even more so our instinctive morality, has not kept pace with the rate at which technology has been engineered. Loving your children and your neighbour are still very useful, but if your sense of fighting for your "tribe" risks turning into global nuclear war, that's now a serious risk for the whole system. The solution then is to intelligently engineer our morality to ensure the successful and stable harmonious existence of ourselves as a global tribe.

Comment by Shane_Legg on Rebelling Within Nature · 2008-07-13T13:50:19.000Z · LW · GW

I see this as a continuation of the same theme: a kind of "frame of reference" issue.

For example, I suspect that time doesn't exist when you look at the universe from the most broad perspective. Instead, you have this kind of platonia on which time is just a relation between different points across one of its dimensions. But that doesn't mean that time doesn't exist within my personal frame of reference. I'm here experiencing time right now. Similarly, I know that my hand is mostly empty space, from a universal point of view, but that doesn't mean that it makes sense for me to relate to my hand as being empty space. In my frame of reference it's quite solid. Same for free will: I understand that from the universal perspective it doesn't exist in some sense, but for me in my frame of reference it does. "I" am "free" to do what "I" decide to do. Viewed correctly there is no contradiction, just as there is no contradiction between the fact that my hand is "empty space" and yet quite solid.

Here again we have the same thing, but with morality. If we zoom out to the universal scale perhaps there is no morality. However, the universal scale is not where I am. Shooting my mother is still wrong according to my values and principles, just like how I have freewill, time exists and my hand is solid. My desire to preserve my mother's life may well have an evolutionary explanation, however that doesn't in any way invalidate my desire, or give me any reason to discard it, or even want to discard it.

Comment by Shane_Legg on What Would You Do Without Morality? · 2008-06-29T20:20:00.000Z · LW · GW

Dynamically linked:

"Except apparently Shane Legg, who doesn't seem to mind the world knowing that he's just waiting for any excuse to start cheating, stealing, and murdering. :)"

How did you arrive at this conclusion? I said that discovering that all actions in life were worthless might eventually affect my behaviour. Via some leap in reasoning you arrive at the above. Care to explain this to me?

My guess is that if I knew that all actions were worthless I might eventually stop doing anything. After all, if there's no point in doing anything, why bother?

Comment by Shane_Legg on What Would You Do Without Morality? · 2008-06-29T09:52:02.000Z · LW · GW

Well, to start with I'd keep on doing the same thing. Just like I do if I discover that I really live in a timeless MWI platonia that is fundamentally different to what the world intuitively seems like.

But over time? Then the answer is less clear to me. Sometimes I learn things that firstly affect my world view in the abstract, then the way I personally relate to things, and finally my actions.

For example, evolution and the existence of carnivores. As a child I'd see something like a hawk tearing the wings off a little baby bird. I'd think that the hawk was very nasty and I'd want to intervene. But then I came to understand that this is what the hawk must do to survive, and that this process of weeding out the weak both keeps the sparrow population under control and helps improve their overall genetic fitness. Moreover, without trillions of similar brutal acts life would never have evolved at all. Well, with a certain level of discomfort, I can accept this baby bird getting violently killed.

Now, I'm not saying that after learning that all utility functions equal zero I'd eventually totally change my behaviour. I don't know. But I imagine that it could affect the way I think about the world in ways that might eventually affect my behaviour.

Comment by Shane_Legg on No Universally Compelling Arguments · 2008-06-26T11:13:00.000Z · LW · GW

I think I understood... but, I didn't find the message coming through as clearly as usual.

I'm uncomfortable with you talking about "minds" because I'm not sure what a mind is.

Comment by Shane_Legg on The Design Space of Minds-In-General · 2008-06-25T16:00:35.000Z · LW · GW

@ Silas:

I assume you mean "doesn't run" (python isn't normally a compiled language).

Regarding approximations of Solomonoff induction: it depends how broadly you want to interpret this statement. If we use a computable prior rather than the Solomonoff mixture, we recover normal Bayesian inference. If we define our prior to be uniform, for example by assuming that all models have the same complexity, then the result is maximum a posteriori (MAP) estimation, which in turn is related to maximum likelihood (ML) estimation. Relations can also be established to Minimum Message Length (MML), Minimum Description Length (MDL), and Maximum entropy (ME) based prediction (see Chapter 5 of Kolmogorov complexity and its applications by Li and Vitanyi, 1997).

In short, much of statistics and machine learning can be viewed as computable approximations of Solomonoff induction.
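
To spell the mapping out a little (a compressed sketch of the standard relations; see Li and Vitanyi for the precise statements):

```latex
% Solomonoff's mixture over a class \mathcal{M} of models \nu, weighted
% by simplicity:
\[
  M(x) \;=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)}\, \nu(x).
\]
% Restricting \mathcal{M} to a computable class gives ordinary Bayesian
% mixture prediction; keeping only the single best model under a prior
% w(\nu) gives MAP estimation,
\[
  \hat{\nu} \;=\; \arg\max_{\nu \in \mathcal{M}} \, w(\nu)\, \nu(x),
\]
% and a uniform prior w(\nu) (all models treated as equally complex)
% reduces this to maximum likelihood, \hat{\nu} = \arg\max_{\nu} \nu(x).
```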

Comment by Shane_Legg on The Design Space of Minds-In-General · 2008-06-25T13:38:46.000Z · LW · GW

@ Silas:

Given that AIXI is uncomputable, how is somebody going to discuss implementing it?

An approximation, sure, but an actual implementation?

Comment by Shane_Legg on The Design Space of Minds-In-General · 2008-06-25T09:20:22.000Z · LW · GW

@ Eli:

Yeah, my guess is that AIXI-tl can be broken. But AIXI? I'm pretty sure it can be broken in some senses, but whether these senses are very meaningful or significant, I don't know.

And yes, my "proof" that FAI would fail failed. But it also wasn't a formal proof. Kind of a lesson in that don't you think?

So until I see a proof, I'll take your statement about AIXI being "awfully stupid" as just an opinion. It will be interesting to see if you can prove yourself to be smarter than AIXI (I assume you don't view yourself as below awfully stupid).

Comment by Shane_Legg on The Design Space of Minds-In-General · 2008-06-25T08:13:32.000Z · LW · GW

@ Eli:

"Arguably Marcus Hutter's AIXI should go in this category: for a mind of infinite power, it's awfully stupid - poor thing can't even recognize itself in a mirror."

Have you (or somebody else) mathematically proven this?

(If you have then that's great and I'd like to see the proof, and I'll pass it on to Hutter because I'm sure he will be interested. A real proof. I say this because I see endless intuitions and opinions about Solomonoff induction and AIXI on the internet. Intuitions about models of super intelligent machines like AIXI just don't cut it. In my experience they very often don't do what you think they will.)

Comment by Shane_Legg on Bloggingheads: Yudkowsky and Horgan · 2008-06-08T13:11:05.000Z · LW · GW

I think Horgan's questions were good in that they were a straightforward expression of how many sceptics think. My own summary of this thinking goes something like this:

The singularity idea sounds kind of crazy, if not plain out ridiculous. Super intelligent machines and people living forever? I mean... come on! History is full of silly predictions about the future that turned out to be totally wrong. If you want me to take this seriously you're going to have to present some very strong arguments as to why this is going to happen.

Although I agree with most of what Eli said, rhetorically it sounded like he was avoiding this central question with a series of quibbles and tangents. This is not going to win over many sceptics' minds.

I think it's an important question to try to answer as directly and succinctly as possible -- a longish "elevator pitch" that forms a good starting point for discussion with a sceptic. I'll think about this and try to write a blog post.

Comment by Shane_Legg on Thou Art Physics · 2008-06-06T22:21:38.000Z · LW · GW

@ Eliezer:

... which is why I don't believe that I have classical free will.

Comment by Shane_Legg on Thou Art Physics · 2008-06-06T21:58:33.000Z · LW · GW

@ Eliezer:

I don't understand your comment. In case it wasn't clear: I don't believe in the existence of free will in the classical sense.

Comment by Shane_Legg on Thou Art Physics · 2008-06-06T13:46:04.000Z · LW · GW

@ a. y. mous

Randomness doesn't give you any free will. Imagine that every time you had to make a decision you flipped a coin and went with the coin's decision. Your behaviour would follow a probability distribution and wouldn't be deterministic, however you still wouldn't have any free will. You'd be a slave to the outcomes of the coin tosses.

Comment by Shane_Legg on Thou Art Physics · 2008-06-06T12:34:19.000Z · LW · GW

@ a. y. mous.

I don't see the straw man. In the classical sense "freewill" means that there is something outside of the system that is free to make decisions (at least this is my understanding of it). If you see yourself, your will, your decision making process and everything as all existing within the system and thus governed by physics, then that answers your question: in a classical sense the answer is no. There are many other ways to define "freewill", however, and under some of these definitions the answer to the question will be "yes". Thus, rather than focusing on whether the answer is "yes" or "no", you should first worry about what the question really means. Once you have straightened that out, your answer could be "yes", "no" or that your question no longer makes any sense, i.e. it is a "wrong question".