My hour-long interview with Yudkowsky on "Becoming a Rationalist"

post by lukeprog · 2011-02-06T03:19:25.796Z · LW · GW · Legacy · 22 comments

It makes for good Less Wrong introductory material to point people to, since there are lots of people who won't read long articles online but will listen to a podcast on the way to work: LINK.

Apologies for the self-promotion, but it could hardly be more relevant to Less Wrong...

22 comments


comment by jimrandomh · 2011-02-06T15:33:24.918Z · LW(p) · GW(p)

Mostly stuff that's familiar if you've read enough of Eliezer's articles, but this bit was interesting:

LUKE: Well, Eliezer, one last question. I know you have been talking about writing a book for quite a while, and a lot of people will be curious to know how that's coming along.

ELIEZER: So, I am about finished with the first draft. The book seems to have split into two books. One is called How to Actually Change Your Mind and it is about all the biases that stop us from changing our minds. And all these little mental skills that we invent in ourselves to prevent ourselves from changing our minds, and the counter-skills that you need in order to defeat this self-defeating tendency and manage to actually change your mind.

It may not sound like an important problem, but if you consider that people who win Nobel prizes typically do so for managing to change their minds only once, and many of them go on to be negatively famous for being unable to change their minds again, you can see that the vision of people being able to change their minds on a routine basis, like once a week or something, is actually the terrifying Utopian vision that I am sure this book will not actually bring to pass. But it may nonetheless manage to decrease some of the sand in the gears of thought.

comment by Alicorn · 2011-02-06T03:44:29.207Z · LW(p) · GW(p)

Does a transcript exist?

Replies from: janos
comment by janos · 2011-02-06T03:54:36.571Z · LW(p) · GW(p)

It's provided on the linked page; you need to scroll down to see it.

comment by beriukay · 2011-02-06T13:47:05.982Z · LW(p) · GW(p)

I don't think you need to apologize for the self-promotion. I was happy to hear it, and glad to be told about it.

comment by timtyler · 2011-02-06T18:26:15.036Z · LW(p) · GW(p)

Yudkowsky gives the value of boredom as an example - as he did in Value is Fragile.

The problem here is that boredom is bound to be one of the basic AI drives. It will emerge as an instrumental value in practically any goal-directed intelligent system - since getting stuck doing the same thing repeatedly is a common bug that prevents agents from attaining their goals. If you have to do science, make technology, build spaceships, etc - then it is important to explore, and not do the same thing all the time - and goal-directed agents will realise that.

Of course, some things never get boring - even for humans. Food, sex, etc. But for the other things, you don't need to build boredom in as a basic value; a wide class of agents will get it automatically.

So, for example, a gold-mining agent won't get bored of digging holes, but it will get bored doing unprofitable scientific research. Boredom is just nature's way of telling you that you have exhausted the local opportunities - and really should be doing something else.

Anyway, we don't have to worry too much about missing out on things that will arise naturally through ordinary utility maximisation in a wide range of goal-directed agents. The idea that the universe will go down the tubes unless humans reverse-engineer boredom and manually program it into their machines is not correct. A wide class of machines that simply maximise utility will get boredom automatically - along with a desire for atoms, energy and space - unless we explicitly tell them not to assign instrumental value to those things.
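To make the instrumental-value claim concrete, here is a minimal sketch (a hypothetical toy, not anything proposed in the thread) of a plain greedy value maximiser facing tasks whose payoffs deplete with repetition. The task names, payoffs, and depletion rates are invented for illustration; the point is that no explicit boredom term is needed for the agent to move on once an activity stops paying.

```python
# Illustrative sketch only: a greedy value maximiser on depleting tasks.
# No "boredom" term is coded in; the agent moves on purely because the
# estimated payoff of a repeated task falls. Names and numbers are made up.

tasks = {
    "dig_gold_here": 10.0,      # payoff depletes as the seam is worked out
    "survey_new_site": 6.0,     # steady payoff
    "idle_experiment": 1.0,
}
depletion = {"dig_gold_here": 0.7, "survey_new_site": 1.0, "idle_experiment": 1.0}

history = []
for step in range(12):
    best = max(tasks, key=tasks.get)   # pick the currently most valuable task
    history.append(best)
    tasks[best] *= depletion[best]     # repeating a depleting task lowers its payoff

print(history)
# The agent hammers "dig_gold_here" while it pays, then switches to
# "survey_new_site" once the seam's estimated value drops below the
# alternative - behaviour that looks like "getting bored" with no boredom drive.
```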

Replies from: NancyLebovitz, Sniffnoy, Eliezer_Yudkowsky
comment by NancyLebovitz · 2011-02-06T22:59:26.658Z · LW(p) · GW(p)

The problem here is that boredom is bound to be one of the basic AI drives. It will emerge as an instrumental value in practically any goal-directed intelligent system - since getting stuck doing the same thing repeatedly is a common bug that prevents agents from attaining their goals. If you have to do science, make technology, build spaceships, etc, it is important to explore, and not do the same thing all the time - and goal-directed agents will realise that.

I think curiosity and the desire to do things that work would be better as motivations than avoiding boredom. Sometimes the best strategy is to keep doing the same thing (for example, the professional poker player who needs to keep playing the best known strategy even though they're in a year-long losing streak).

Replies from: timtyler
comment by timtyler · 2011-02-06T23:51:11.514Z · LW(p) · GW(p)

I think curiosity and the desire to do things that work would be better as motivations than avoiding boredom.

Do you have any speculations about why evolution apparently disagrees?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-02-07T02:04:18.878Z · LW(p) · GW(p)

No, but it's quite an interesting question. Evolution does go in for sticks as well as carrots, even though punishment has non-obvious costs among humans.

When I made my comment, I hadn't read the interview. I'm not sure about Eliezer's worst case scenario from lack of boredom-- it requires that there be a best moment which the AI would keep repeating if it weren't prevented by boredom. Is there potentially a best moment to tile the universe with? Could an AI be sure it had found the best moment?

Replies from: timtyler
comment by timtyler · 2011-02-07T08:15:53.118Z · LW(p) · GW(p)

The sticks are for things that are worse than sitting there doing nothing.

I figure boredom is like that - it has to work at the hedonic baseline - so it has to be a stick.

Is there potentially a best moment to tile the universe with? Could an AI be sure it had found the best moment?

Are its goals to find the "best moment" in the first place? It seems impossible to answer such questions without reference to some kind of moral system.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-02-07T11:32:58.722Z · LW(p) · GW(p)

My mistake-- here's the original:

So, if you lost the human notion of boredom and curiosity, but you preserve all the rest of human values, then it would be like… Imagine the AI that has everything but boredom. It goes out to the stars, takes apart the stars for raw materials, and it builds whole civilizations full of minds experiencing the most exciting thing ever, over and over and over and over and over again.

The whole universe is just tiled with that, and that single moment is something that we would find very worthwhile and exciting to happen once. But it lost the single aspect of value that we would name boredom and went instead to the more pure math of exploration-exploitation, where you spend some initial resources finding the best possible moment to live in and you devote the rest of your resources to exploiting that one moment over and over again.

So it's "most exciting moment", not "best moment".

Even that might imply a moral system. Why most exciting rather than happiest or most content or most heroic?

Your "The sticks are for things that are worse than sitting there doing nothing." still might mean that boredom could be problematic for an FAI.

It's at least possible that meeting the true standards of Friendliness (whatever they are) isn't difficult for a sufficiently advanced FAI. The human race and its descendants have the mixture of safety and change that suits them, as well as such a thing can be done.

The FAI is BORED! We can hope that its Friendliness is a strong enough drive that it won't make its life more interesting by messing with humans. Maybe it will withdraw some resources from satisfying people to do something more challenging. How is this feasible if everything within reach is devoted to Friendly goals?

The AI could define self-care as part of Friendliness, even as very generous people acknowledge that they need rest and refreshment.

I'm beginning to see the FAI as potentially a crazy cat-lady, but perhaps creating/maintaining only as many people as it can take care of properly is one of the easier problems.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-02-07T13:33:58.037Z · LW(p) · GW(p)

So it's "most exciting moment", not "best moment".

I suspect that's an overstatement. Presumably it would be "most valuable moment," where the value of a moment is determined by all the other axes of value except novelty.

comment by Sniffnoy · 2011-02-06T21:48:46.529Z · LW(p) · GW(p)

Of course, some things never get boring - even for humans. Food, sex, etc

I find this assertion pretty questionable. And regardless, people can certainly get bored of the same type of food, etc.

Replies from: timtyler
comment by timtyler · 2011-02-07T19:14:27.823Z · LW(p) · GW(p)

The Methuselahites regularly claim that they would never get bored with being alive.

That might be an exaggeration - but the intended point was that rewarding and essential things are largely protected from boredom.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-02-07T17:24:24.465Z · LW(p) · GW(p)

Timtyler is trolling again; please vote down. This exact point was addressed in the interview and in http://lesswrong.com/lw/xr/in_praise_of_boredom/. The math of exploration-exploitation does not give you anything like humanlike boredom.
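For contrast with the sketch above, here is a minimal toy of the standard stationary explore-then-exploit setting that "the math of exploration-exploitation" usually refers to (arm names and payoffs are invented; this is an illustration, not anyone's actual model): once its estimates settle, the maximiser repeats the single best option indefinitely - the "exploit one moment over and over" behaviour described in the interview, with nothing resembling a human preference for novelty.

```python
# Illustrative sketch of a stationary explore-then-exploit bandit agent.
# Arm names and payoffs are made up for the example.
import random

random.seed(0)
true_mean = {"arm_a": 0.3, "arm_b": 0.9, "arm_c": 0.5}

def pull(arm):
    # Noisy reward around the arm's fixed mean.
    return true_mean[arm] + random.gauss(0, 0.1)

estimates = {}
for arm in true_mean:                   # brief exploration phase
    estimates[arm] = sum(pull(arm) for _ in range(20)) / 20

best = max(estimates, key=estimates.get)
plays = [best] * 1000                   # pure exploitation thereafter

print(best, plays.count(best))
# Once the estimates stabilise, the agent repeats the single best arm forever.
# Nothing in the objective pushes it toward variety once the search is done.
```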

Replies from: Alexandros, Dr_Manhattan, timtyler
comment by Alexandros · 2011-02-09T13:08:49.107Z · LW(p) · GW(p)

I don't see how it's fair of you to encourage others to downvote a comment. If people agree with you on the evaluation of this comment, they will vote accordingly. If not, not.

By explicitly requesting downvotes, you are encouraging people who would not otherwise have made that decision to downvote, circumventing the normal thought process. The Karma system allows each user one up/downvote per item, and you're supposed to up/downvote if you personally consider something worthwhile or not. By encouraging others to downvote, you are converting personal influence into a multi-downvote right for yourself.

You might of course believe that you, as founder and admin, represent the true volition of LessWrong, which is fair enough. You already are the judge of what gets promoted to the front page, which I don't have a problem with, as it is clearly stated. But in that case, you might as well directly access the database and bring the downvote level to what you'd like it to be. By inducing a downvote mob and turning the community against certain users, you are corroding community spirit. What you just did is analogous to the popular kid saying to another, "Go away, we don't like you. Right, everybody?" Again, if you don't want certain users here, as you have indicated with timtyler, you are free to ban them outright. But if you don't want to assert such a right, then you shouldn't be doing it indirectly either. This is dealing with the problem at the wrong level of abstraction.

This community has been built around your writings and contains a large number of people who take your opinion as authority. I urge you to refrain from using this power as a community management tool.

Replies from: Dr_Manhattan, Eliezer_Yudkowsky
comment by Dr_Manhattan · 2011-02-09T13:33:45.760Z · LW(p) · GW(p)

Please upvote this comment.

Truly yours,

/Irony

:)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-02-09T15:07:46.563Z · LW(p) · GW(p)

I think that asking the community to downvote timtyler is a good deal less disruptive than an outright ban would be. It makes it clear that I am not speaking only for myself, which may or may not have an effect on certain types of trolls and trolling. And doing nothing is not a viable option.

Replies from: Alexandros, Vladimir_Nesov
comment by Alexandros · 2011-02-10T07:47:19.770Z · LW(p) · GW(p)

I know well-kept gardens die by pacifism. Notice I did not say you shouldn't moderate. They also die by micromanagement and by pitting users against each other. I've been running forums (some well-kept, others not so much) for over 10 years now, so believe me, I've had similar conversations from the opposite side more than I care to remember. My intention is not to cause you grief. However, soliciting up/downvotes would be called gaming or a voting ring if done by anybody else. And by prompting downvotes, it's not clear that "you are not speaking only for yourself". If the community was against timtyler, they would downvote spontaneously, without the prodding. Now the community's signal has been muddied, which is part of the problem.

All I'm saying is: amend the rules or uphold them. Sidestepping them is not a good place to go. In any case, I think I've communicated my point as clearly as I could have, so I'll leave it here.

Replies from: wedrifid
comment by wedrifid · 2011-02-10T10:50:12.909Z · LW(p) · GW(p)

I was persuaded by your comments and actually changed my votes accordingly. I am surprised.

comment by Vladimir_Nesov · 2011-02-09T15:25:47.944Z · LW(p) · GW(p)

It's too inconvenient to downvote everything. At some point it just doesn't feel worth the trouble: you've already done it too many times, and besides, you won't even read what the user writes. A community analogue of banning a user must be a judgment about the user, not a judgment about specific comments - the number of users who disapprove of the user, not of specific comments. This requires a new feature to do well. It could be as simple as vote up/down buttons on the user page.

comment by Dr_Manhattan · 2011-02-09T13:36:05.143Z · LW(p) · GW(p)

Agreed on content, disagreed on policy (same as Alexandros) = 0

comment by timtyler · 2011-02-07T18:52:15.440Z · LW(p) · GW(p)

It gives you pure boredom - of the type that you get when efficiently searching a space. That is why evolution makes humans that get bored in the first place.

Human boredom is an approximation of that - since humans have limited resources, and are imperfect. To the extent that humans are approximations of instrumentally-rational agents, human boredom of course resembles the pure boredom that you would get from efficiently searching a space.

This is not "universalizing anthropomorphic values" - as you claim - but rather the universal fact that agents don't explore a search space very effectively if they stay for too long in the same place.