Less Wrong Q&A with Eliezer Yudkowsky: Video Answers

post by MichaelGR · 2010-01-07T04:40:35.546Z · LW · GW · Legacy · 100 comments

On October 29th, I asked Eliezer and the LW community if they were interested in doing a video Q&A. Eliezer agreed and a majority of commenters were in favor of the idea, so on November 11th, I created a thread where LWers could submit questions. Dozens of questions were asked, generating a total of over 650 comments. The questions were then ranked using the LW voting system.

On December 11th, Eliezer filmed his replies to the top questions (skipping some), and sent me the videos on December 22nd. Because voting continued after that date, the order of the top questions in the original thread has changed a bit, but you can find the original question for each video (and the discussion it generated, if any) by following the links below.

Thanks to Eliezer and everybody who participated.

Update: If you prefer to download the videos, they are available here (800 MB, .wmv format; sort the files by 'date created').

Link to question #1.

Link to question #2.

Link to question #3.

Link to question #4.

Eliezer Yudkowsky - Less Wrong Q&A (5/30) from MikeGR on Vimeo.

Link to question #5.

(Video #5 is on Vimeo because YouTube doesn't accept videos longer than 10 minutes, and I only found out after uploading about a dozen. I would gladly have put them all on Vimeo, but there's a 500 MB/week upload limit and these videos add up to over 800 MB.)

Link to question #6.

Link to question #7.

Link to question #8.

Link to question #9.

Link to question #10.

Link to question #11.

Link to question #12.

Link to question #13.

Link to question #14.

Link to question #15.

Link to question #16.

Link to question #17.

Link to question #18.

Link to question #19.

Link to question #20.

Link to question #21.

Link to question #22.

Link to question #23.

Link to question #24.

Link to question #25.

Link to question #26.

Link to question #27.

Link to question #28.

Link to question #29.

Link to question #30.

If anything is wrong with the videos or links, let me know in the comments or via private message.

100 comments


comment by MichaelGR · 2010-01-07T21:24:35.625Z · LW(p) · GW(p)

Bonus feature: If you 'pagedown' rapidly through all the videos, you get an Eliezer flipbook.

comment by FAWS · 2010-01-09T20:06:01.886Z · LW(p) · GW(p)

I wonder why Eliezer doesn't want to say anything concrete about his work with Marcello? ("Most of the real progress that has been made when I sit down and actually work on the problem is things I'd rather not talk about")

There seem to be only two plausible reasons:

  1. Someone else might use his work in ways he doesn't want them to.
  2. It would somehow hurt him, the SIAI or the cause of Friendly AI.

For 1., someone else stealing his work and finishing a provably friendly AI first would be a good thing, would it not? Losing the chance to do it himself shouldn't matter as much as the fate of the future intergalactic civilization to an altruist like him. Maybe his work on provable friendliness would reveal ideas on AI design that could be used to produce an unfriendly AI? But even then the ideas would probably only help AI researchers who work on transparent design, are aware of the friendliness problem and take friendliness seriously enough to mine the work on friendliness of the main proponent of friendliness for useful ideas. Wouldn't giving these people a relative advantage compared to e.g. connectionists be a good thing? Unless he thinks that AGI would then suddenly be very close while FAI still is far away... Or maybe he thinks a partial solution to the friendliness problem would make people overconfident and less cautious than they would otherwise be?

As for 2., the work so far might be very unimpressive, reveal embarrassing facts about a previous state of knowledge, or be subject to change, with a publicly apparent change of opinion deemed disadvantageous. Or maybe Eliezer fears that publicly revealing some things would psychologically commit him to them in ways that would be counterproductive?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-10T13:07:48.465Z · LW(p) · GW(p)

Maybe his work on provable friendliness would reveal ideas on AI design that could be used to produce an unfriendly AI? But even then the ideas would probably only help AI researchers who work on transparent design

All FAIs are AGIs, most of the FAI problem is solving the AGI problem in particular ways.

comment by Bo102010 · 2010-01-07T05:46:01.299Z · LW(p) · GW(p)

What would be way cool is a description of the question along with the link, though I realize that might be a bit of work.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-07T05:59:34.477Z · LW(p) · GW(p)

1.

What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally?

By that I mean things like: Do you have a reading schedule (x number of hours daily, etc)? Do you follow the news, or try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (f.ex reading certain magazines, books, watching films, etc) to focus on what is more important? etc.

2.

Your "Bookshelf" page is 10 years old (and contains a warning sign saying it is obsolete):

http://yudkowsky.net/obsolete/bookshelf.html

Could you tell us about some of the books and papers that you've been reading lately? I'm particularly interested in books that you've read since 1999 that you would consider to be of the highest quality and/or importance (fiction or not).

3.

What is a typical EY workday like? How many hours/day on average are devoted to FAI research, and how many to other things, and what are the other major activities that you devote your time to?

4.

Could you please tell us a little about your brain? For example, what is your IQ, at what age did you learn calculus, do you use cognitive enhancing drugs or brain fitness programs, are you Neurotypical and why didn't you attend school?

5.

During a panel discussion at the most recent Singularity Summit, Eliezer speculated that he might have ended up as a science fiction author, but then quickly added:

I have to remind myself that it's not what's the most fun to do, it's not even what you have talent to do, it's what you need to do that you ought to be doing.

Shortly thereafter, Peter Thiel expressed a wish that all the people currently working on string theory would shift their attention to AI or aging; no disagreement was heard from anyone present.

I would therefore like to ask Eliezer whether he in fact believes that the only two legitimate occupations for an intelligent person in our current world are (1) working directly on Singularity-related issues, and (2) making as much money as possible on Wall Street in order to donate all but minimal living expenses to SIAI/Methuselah/whatever.

How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI? If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?

6.

I know at one point you believed in staying celibate, and currently your main page mentions you are in a relationship. What is your current take on relationships, romance, and sex, how did your views develop, and how important are those things to you? (I'd love to know as much personal detail as you are comfortable sharing.)

7.

What's your advice for Less Wrong readers who want to help save the human race?

8.

Autodidacticism

Eliezer, first congratulations for having the intelligence and courage to voluntarily drop out of school at age 12! Was it hard to convince your parents to let you do it? AFAIK you are mostly self-taught. How did you accomplish this? Who guided you, did you have any tutor/mentor? Or did you just read/learn what was interesting and kept going for more, one field of knowledge opening pathways to the next one, etc...?

EDIT: Of course I would be interested in the details, like what books did you read when, and what further interests did they spark, etc... Tell us a little story. ;)

9.

Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend on arriving at the actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.

10.

What was the story purpose and/or creative history behind the legalization and apparent general acceptance of non-consensual sex in the human society from Three Worlds Collide?

11.

If you were to disappear (freak meteorite accident), what would the impact on FAI research be?

Do you know other people who could continue your research, or that are showing similar potential and working on the same problems? Or would you estimate that it would be a significant setback for the field (possibly because it is a very small field to begin with)?

12.

Your approach to AI seems to involve solving every issue perfectly (or very close to perfection). Do you see any future for more approximate, rough and ready approaches, or are these dangerous?

13.

How young can children start being trained as rationalists? And what would the core syllabus / training regimen look like?

14.

Could you (Well, "you" being Eliezer in this case, rather than the OP) elaborate a bit on your "infinite set atheism"? How do you feel about the set of natural numbers? What about its power set? What about that thing's power set, etc?

From the other direction, why aren't you an ultrafinitist?

15.

Why do you have a strong interest in anime, and how has it affected your thinking?

16.

What are your current techniques for balancing thinking and meta-thinking?

For example, trying to solve your current problem, versus trying to improve your problem-solving capabilities.

17.

Could you give an up-to-date estimate of how soon non-Friendly general AI might be developed? With confidence intervals, and by type of originator (research, military, industry, unplanned evolution from non-general AI...).

18.

What progress have you made on FAI in the last five years and in the last year?

19.

How do you characterize the success of your attempt to create rationalists?

20.

What is the probability that this is the ultimate base layer of reality?

21.

Who was the most interesting would-be FAI solver you encountered?

22.

If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?

23.

In one of the discussions surrounding the AI-box experiments, you said that you would be unwilling to use a hypothetical fully general argument/"mind hack" to cause people to support SIAI. You've also repeatedly said that the friendly AI problem is a "save the world" level issue. Can you explain the first statement in more depth? It seems to me that if anything really falls into "win by any means necessary" mode, saving the world is it.

24.

What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider "conscious" in the sense of "having experiences" that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the wall behind John Searle's back that can be interpreted as running computations corresponding to conscious suffering? Rocks? How does it distinguish interpretations of numbers as signed vs. unsigned, or ones complement vs. twos complement? What physical details of the computations matter? Does it regard carbon differently from silicon?

25.

I admit to being curious about various biographical matters. So for example I might ask:

What are your relations like with your parents and the rest of your family? Are you the only one to have given up religion?

26.

Is there any published work in AI (whether or not directed towards Friendliness) that you consider does not immediately, fundamentally fail due to the various issues and fallacies you've written on over the course of LW? (E.g. meaningfully named Lisp symbols, hiddenly complex wishes, magical categories, anthropomorphism, etc.)

ETA: By AI I meant AGI.

27.

Do you feel lonely often? How bad (or important) is it?

(Above questions are a corollary of:) Do you feel that, as you improve your understanding of the world more and more, there are fewer and fewer people who understand you and with whom you can genuinely relate on a personal level?

28.

Previously, you endorsed this position:

Never try to deceive yourself, or offer a reason to believe other than probable truth; because even if you come up with an amazing clever reason, it's more likely that you've made a mistake than that you have a reasonable expectation of this being a net benefit in the long run.

One counterexample has been proposed a few times: holding false beliefs about oneself in order to increase the appearance of confidence, given that it's difficult to directly manipulate all the subtle signals that indicate confidence to others.

What do you think about this kind of self-deception?

29.

In the spirit of considering semi-abyssal plans, what happens if, say, next week you discover a genuine reduction of consciousness and it turns out that... there's simply no way to construct the type of optimization process you want without it being conscious, even if very different from us?

I.e., what if The Law turned out to have the consequence of "to create a general mind is to create a conscious mind. No way around that"? Obviously that shifts the ethics a bit, but my question is basically: if so, well... "now what?" What would have to be done differently, in what ways, etc.?

30.

What single technique do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?

Replies from: Technologos, MatthewB
comment by Technologos · 2010-01-07T08:16:24.585Z · LW(p) · GW(p)

You repeat #10 as #11; the question as cited by Eliezer is as follows:

If you got hit by a meteorite, what would be the impact on FAI research? Would other people be able to pick it up from there?

comment by MatthewB · 2010-01-07T07:08:36.348Z · LW(p) · GW(p)

In response to Eliezer's response on Video #5, indicating that smart people should be working on AI, and not String Theory.

I tend to agree, as those are fields that are not likely to give us any new technologies that will make the world a safer place... and

Any work that speeds the arrival of AI will also speed the solution to any problems in sciences such as String Theory, as a recursively improving intelligence will be able to aid in the discovery of solutions much more rapidly than the addition of five or ten really smart people will aid in the discovery of solutions.

Replies from: Jack
comment by Jack · 2010-01-07T19:14:30.449Z · LW(p) · GW(p)

Shouldn't we hedge our bets a little? I don't know what the probability is that the Singularity Institute succeeds in building an FAI in time to prevent any existential disasters that would otherwise occur but it isn't 1. Any work done to reduce existential risk in the meantime (and in possible futures where no Friendly AI exists) seems to me worthwhile.

Am I wrong?

comment by Kevin · 2010-01-08T02:09:26.433Z · LW(p) · GW(p)

20: What is the probability that this is the ultimate base layer of reality?

Eliezer gave the joke answer to this question, because this is something that seems impossible to know.

However, I myself assign a significant probability that this is not the base level of reality. Theuncertainfuture.com tells me that I assign a 99% probability of AI by 2070, and the probability approaches .99 even before then. So why would I be likely to be living as an original human circa 2000 when transhumans will be running ancestor simulations? I suppose it's possible that transhumans won't run ancestor simulations, but I would want to run ancestor simulations, for my merged transhuman mind to be able to assimilate the knowledge of running a human consciousness of myself through interesting points in human history.

The zero one infinity rule also makes it seem more unlikely this is the base level of reality. http://catb.org/jargon/html/Z/Zero-One-Infinity-Rule.html

It seems rather convenient that I am living in the most interesting period in human history. Not to mention I have a lifestyle in the top 1% of all humans living today.

I believe this is a minority viewpoint here, so my rationalist calculus is probably wrong. Why?

Replies from: Wei_Dai, Thomas, ArisKatsaris, rortian
comment by Wei Dai (Wei_Dai) · 2010-01-08T02:50:29.762Z · LW(p) · GW(p)

In my posts, I've argued that indexical uncertainty like this shouldn't be represented using probabilities. Instead, I suggest that you consider yourself to be all of the many copies of you, i.e., both the ones in the ancestor simulations and the one in 2010, making decisions for all of them. Depending on your preferences, you might consider the consequences of the decisions of the copy in 2010 to be the most important and far-reaching, and therefore act mostly as if that was the only copy.

Replies from: Eliezer_Yudkowsky, cousin_it, gwern
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-18T07:05:46.496Z · LW(p) · GW(p)

BTW, I agree with this.

comment by cousin_it · 2011-04-19T10:28:57.005Z · LW(p) · GW(p)

Coming back to this comment, it seems to be another example of UDT giving a technically correct but incomplete answer.

Imagine you have a device that will tell you, tomorrow at 12am, whether you are in a simulation or in the base layer. (It turns out that all simulations are required by multiverse law to have such devices.) There's probably not much you can do before 12am tomorrow that can cause important and far-reaching consequences. But fortunately you also have another device that you can hook up to the first. The second device generates moments of pleasure or pain for the user. More precisely, it gives you X pleasure/pain if you turn out to be in a sim, and Y pleasure/pain if you are in the base layer (presumably X and Y have different signs). Depending on X and Y, how do you decide whether to turn the second device on?
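For reference, here is the naive expected-utility formula one is tempted to write down for this choice (a minimal sketch; the symbol q below is my own shorthand, not cousin_it's, for the problematic "probability of being in a sim"):

```latex
\mathrm{EU}(\text{turn the second device on}) \;=\; q \cdot X \;+\; (1 - q) \cdot Y,
\qquad q := P(\text{you are in a sim})
```

The puzzle is precisely what, if anything, should play the role of q once indexical uncertainty is no longer represented with probabilities.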

comment by gwern · 2010-02-18T03:39:40.130Z · LW(p) · GW(p)

Have you pulled it all together anywhere? I've sometimes seen & thought this Pascal's wager-like logic before (act as if your choices matter because if they don't...), but I've always been suspicious precisely because it looks too much to me like Pascal's wager.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-18T23:01:54.990Z · LW(p) · GW(p)

I've thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn't think of much to say. But to expand a bit more on what I wrote in the grandparent, in the Simulation Argument, the decision of the original you interacts with the decisions of the simulations. If you make the wrong decision, your simulations might end up not existing at all, so it doesn't make sense to put a probability on "being in a simulation". (This is like in the absent-minded driver problem, where your decision at the first exit determines whether you get to the second exit.)

I'm not sure I see what you mean by "Pascal's wager-like logic". Can you explain a bit more?

Replies from: Kevin, gwern
comment by Kevin · 2010-03-10T06:44:13.022Z · LW(p) · GW(p)

A top-level post on the application of TDT/UDT to the Simulation Argument would be worthwhile even if it was just a paragraph or two long.

Replies from: wedrifid
comment by wedrifid · 2010-03-10T09:48:53.417Z · LW(p) · GW(p)

A top level post telling me whether TDT and UDT are supposed to be identical or different (or whether they are the same but at different levels of development) would also be handy!

comment by gwern · 2010-02-19T03:02:37.968Z · LW(p) · GW(p)

I've thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn't think of much to say.

I think that's enough. I feel I understand the SA very well, but not TDT or UDT much at all; approaching the latter from the former might make things click for me.

I'm not sure I see what you mean by "Pascal's wager-like logic". Can you explain a bit more?

I mean that I read Pascal's Wager as basically 'p implies x reward for believing in p, and ~p implies no reward (either positive or negative); thus, best to believe in p regardless of the evidence for p'. (Clumsy phrasing, I'm afraid.)

Your example sounds like that: 'believing you-are-not-being-simulated implies x utility (motivation for one's actions & efforts), and if ~you-are-not-being-simulated then your utility to the real world is just 0; so believe you-are-not-being-simulated.' This seems to be a substitution of 'not-being-simulated' into the PW schema.

comment by Thomas · 2010-01-08T19:05:01.262Z · LW(p) · GW(p)

If the probability that you are inside a simulation is p, what's the probability that your master simulator is also simulated?

How tall is this tower, most likely?

Replies from: Cyan
comment by Cyan · 2010-01-08T19:54:47.216Z · LW(p) · GW(p)

Being in a simulation within a simulation (nested to any level) implies being in a simulation. The proper decomposition is p = sum over all positive N of (probability of simulation nested to level N)
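Spelled out, the decomposition Cyan is describing is just the following sum (a minimal sketch; "level N" means nested exactly N simulations deep):

```latex
p \;=\; P(\text{in a simulation}) \;=\; \sum_{N=1}^{\infty} P(\text{simulation nested to exactly level } N)
```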

Replies from: Thomas, DanArmak
comment by Thomas · 2010-01-08T22:15:47.132Z · LW(p) · GW(p)

The top simulator has N operations to execute before his free enthalpy basin is empty.

Every level down, this number is smaller. Before long, it becomes impossible to create a nontrivial simulation inside the current one. That one is the bottom.

This simulation tower is just a great way to squander all the free enthalpy you have. Is the top simulation master that stupid?

I doubt it.

Replies from: Kevin
comment by Kevin · 2010-01-09T06:13:32.131Z · LW(p) · GW(p)

In that sense, there's actually a significant risk to the singularity. Why should the simulation master (I usually facetiously use the phrase "our overlords" when referring to this entity) let us ever run a simulation that is likely to result in an infinitely nested simulation? Maybe that's why the LHC keeps blowing up.

comment by DanArmak · 2010-01-08T23:49:51.766Z · LW(p) · GW(p)

You also need to include scenarios for infinitely-high towers, or closed-loop towers, or branching and merging networks, or one simulation being run in several (perhaps infinitely many) simulating worlds, or the other way around...

I don't think we can assign a meaningful prior to any of these, and so we can't calculate the probability of being in a simulation.

Replies from: Kevin
comment by Kevin · 2010-01-09T06:15:19.501Z · LW(p) · GW(p)

I don't think the probability calculation is meaningful because the infinities mess it up. But you still need to ask, are you in the original 2010 or one of infinitely many possible ways to be in a simulated 2010? I can't assign a probability; but I have a strong intuition when comparing one to infinite.

comment by ArisKatsaris · 2011-04-19T11:28:48.926Z · LW(p) · GW(p)

The zero one infinity rule also makes it seem more unlikely this is the base level of reality.

The Zero-One-Infinity Rule hasn't been shown to apply to our reality, and even if it applied to our reality it would also permit "One".

It seems rather convenient that I am living in the most interesting period in human history.

Can you give us a list of most-to-least interesting periods in human history? You have an anglo name, and I think you're living in a particularly boring period of Anglo-American history. (If you had an Arab name, this might be an interesting period though, though not as interesting as if you were an Arab in the period of Mohammed or the first few Caliphs)

but I would want to run ancestor simulations, for my merged transhuman mind to be able to assimilate the knowledge of running a human consciousness of myself through interesting points in human history.

You don't actually know what you would want with a transhuman mind. If simulations are fully conscious (the only sort of simulation relevant to our argument) I think that would be a particularly cruel thing for a transhuman mind to want.

comment by rortian · 2010-01-09T09:06:39.501Z · LW(p) · GW(p)

You are suggesting a world with much more energy than the one that we know. It seems you should assign a lower probability to there being a much higher-energy universe.

Replies from: Kevin

comment by Kevin · 2010-01-10T10:32:50.556Z · LW(p) · GW(p)

By the zero one infinity rule, I also think it likely that there are infinite spatial dimensions. Just a few extra spatial dimensions should give you plenty of computing power to run a lower-dimensional universe.

Replies from: rortian
comment by rortian · 2010-01-11T22:53:38.598Z · LW(p) · GW(p)

Wow, I really am curious why you think this would apply to spatial dimensions.

Replies from: Kevin
comment by Kevin · 2010-01-12T11:33:52.024Z · LW(p) · GW(p)

Why do you think there are only 3 or 4 or 5 or 6 or 8 or 12 or 42 or 248 or n spatial dimensions? If there actually are 42 spatial dimensions, I will accept it as the existence of God and clear evidence that he is a fan of Douglas Adams.

The extra dimensions could likely not impact our system of physics in any way we can detect. They are non-measurable sets.

Also, the Jargon File seems as likely a candidate for accidentally containing universal truth as anything.

Replies from: GuySrinivasan, rortian
comment by GuySrinivasan · 2010-02-18T08:46:01.002Z · LW(p) · GW(p)

In 1 or 2 dimensions, random walks return to the origin infinitely often. In 3 dimensions, they have but a 34% chance of ever returning. There are nontrivial qualitative differences between numbers of spatial dimensions that we don't see when we think "2? 3? 5? 179? It's just a choice of N!"
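For anyone who wants to check the qualitative difference numerically, here is a minimal Monte Carlo sketch in Python (function names and parameters are illustrative, not from the thread). Truncating each walk at a finite number of steps underestimates the true return probabilities, which are 1 in one and two dimensions and about 0.34 (Pólya's constant) in three.

```python
import random

def ever_returns(dim, max_steps=5000):
    """Run one simple random walk on the integer lattice in `dim` dimensions
    and report whether it revisits the origin within max_steps steps."""
    pos = [0] * dim
    for _ in range(max_steps):
        axis = random.randrange(dim)          # pick a coordinate axis
        pos[axis] += random.choice((-1, 1))   # step -1 or +1 along it
        if all(c == 0 for c in pos):
            return True
    return False

def estimate_return_probability(dim, trials=2000):
    """Fraction of simulated walks that return to the origin (a truncated estimate)."""
    return sum(ever_returns(dim) for _ in range(trials)) / trials

if __name__ == "__main__":
    for dim in (1, 2, 3):
        # The 1D estimate comes out near 1, the 2D estimate noticeably below its
        # true value of 1 (2D returns can take extremely long, so the step cutoff
        # bites hard), and the 3D estimate lands near 0.33.
        print(dim, estimate_return_probability(dim))
```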

comment by rortian · 2010-01-12T22:33:41.620Z · LW(p) · GW(p)

Why do you think there are only 3 or 4 or 5 or 6 or 8 or 12 or 42 or 248 or n spatial dimensions?

I think we have good reason to believe that we are in 3 spatial dimensions. But as you say:

The extra dimensions could likely not impact our system of physics in any way we can detect.

What exactly is the point of these dimensions? I see no reason to concede extra dimensions to make the fact that we are living in a simulation more probable.

comment by CronoDAS · 2010-01-07T08:33:20.315Z · LW(p) · GW(p)

Your answer to question #8 doesn't mention how you convinced your parents to let you drop out of school at age 12...

comment by arundelo · 2010-01-17T18:11:50.964Z · LW(p) · GW(p)

I couldn't figure out a way to "play all", so I put everything but the Vimeo one on a YouTube playlist.

comment by Alex Flint (alexflint) · 2010-01-10T13:51:55.020Z · LW(p) · GW(p)

Thanks for putting all this together! It would be great if you could put the question text above each of the videos in the post so readers can scan through and find questions they're most interested in.

comment by Wei Dai (Wei_Dai) · 2010-01-09T02:51:35.307Z · LW(p) · GW(p)

Re: autodidacticism & Bayesian enlightenment

For comparison, I did a lot of self-education, but also had a conventional education (ending with a BA in Computer Science). I think I was introduced to Bayesianism in a probability class in college, and it was also the background assumption in a couple of economics courses that I took for fun (Game Theory and Industrial Organization). It seems to me that choosing pure autodidacticism probably delayed Eliezer's Bayesian enlightenment by at least a couple of years.

comment by JulianMorrison · 2010-01-07T23:29:43.519Z · LW(p) · GW(p)

Society is supported by "hydraulic pressure", a myriad flows of wealth/matter/energy/information and human effort each holding the others up. It's a layered, cyclic graph - technology depends on the surplus food of agriculture, agriculture depends on the efficiencies of technology. It's a massively connected graph. It has non-obvious dependencies even at short range - think what computer gamers have done for Moore's law, or music pirates for broadband. It has dependencies across time. It has a lot of dependencies in which the supporter does not know and probably wouldn't much care about the supported - consider the existence of Freemind software, which was not written for SIAI.

This whole structure expends most of its effort supporting itself, most of the rest on motivator rewards, and SIAI gets the crumbs. You could realistically get lots more crumbs.

What is the information dynamics of spreading understanding of FAI as a problem? What technologies support communication, and what are their limitations? (Especially limitations in the ability to arrange huge data optimally for narrow human input.) How to explore the space of information-connecting technologies? Given that most people have satisficed on a learning strategy that leaves you out entirely, how can you communicate urgency to them?

What economic flows support you in the above? Who supports them?

I think your answer in #5 trivializes the question.

comment by gelisam · 2010-01-07T21:11:53.966Z · LW(p) · GW(p)

Oh, so that's what Eliezer looks like! I had imagined him as a wise old man with long white hair and beard. Like Tellah the sage, in Final Fantasy IV.

Replies from: Eliezer_Yudkowsky, khafra, Corey_Newsome
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T00:08:00.036Z · LW(p) · GW(p)

I'll have you know that I work hard at not going down that road.

Replies from: Furcas
comment by Furcas · 2010-01-08T00:14:55.288Z · LW(p) · GW(p)

Do you mean the beard part or the getting old part?

comment by khafra · 2010-04-21T20:31:47.164Z · LW(p) · GW(p)

I believe Steve Rayhawk is SIAI's designated "Tellah the Sage."

comment by Corey_Newsome · 2010-01-10T18:04:17.405Z · LW(p) · GW(p)

Speaking of appearances, Eliezer makes me feel self-conscious about how un-white my teeth are.

comment by Zack_M_Davis · 2010-01-07T08:44:25.685Z · LW(p) · GW(p)

Re #11, whatever happened with Michael Wilson?

Replies from: Kazuo_Thow, Tyrrell_McAllister
comment by Kazuo_Thow · 2010-01-09T19:14:45.352Z · LW(p) · GW(p)

He's currently the technical director at Bitphase AI. From talking to him, it seems that his strategy is to make tools for speeding up eventual FAI development/implementation and also commercialize those tools to gain funding for FAI research.

comment by Tyrrell_McAllister · 2010-01-08T19:39:07.371Z · LW(p) · GW(p)

Who's Michael Wilson?

Replies from: Kaj_Sotala, Vladimir_Nesov, whpearson
comment by Kaj_Sotala · 2010-01-08T21:10:06.654Z · LW(p) · GW(p)

The writer of this mini-FAQ on AI, among other things.

"Further back, I was a research associate at the Singularity Institute for AI for a while, late 2004 to late 2005ish, I'm not involved with them at present but I wish them well."

comment by Vladimir_Nesov · 2010-01-08T19:52:15.647Z · LW(p) · GW(p)

Probably a True Michael.

comment by whpearson · 2010-01-08T19:56:03.360Z · LW(p) · GW(p)

He was active on SL4 back in ye olde days.

comment by Stuart_Armstrong · 2010-01-15T13:49:33.630Z · LW(p) · GW(p)

Thanks for the answers.

comment by Furcas · 2010-01-07T06:07:22.060Z · LW(p) · GW(p)

Wow, thank you for this!

Don't forget to rate each video as you're watching them, people!

comment by roland · 2010-01-07T23:52:32.253Z · LW(p) · GW(p)

From answer 5 there is a great quote from Eliezer:

Reality is one thing... your emotions are another.

About how we don't feel the importance of the singularity.

comment by Paul Crowley (ciphergoth) · 2010-01-07T09:28:49.677Z · LW(p) · GW(p)

I'd find it incredibly useful to be able to download these videos, so I can watch them on my TV rather than on the PC. I'm doing so one by one via a rather painful process that doesn't work for Vimeo at the moment; if anyone can make it easier that would be wonderful!

EDIT: A torrent of the videos would seem the most straightforward way.

Replies from: MichaelGR
comment by MichaelGR · 2010-01-07T17:30:10.746Z · LW(p) · GW(p)

All the videos are available here (in their original .wmv format):

http://www.megaupload.com/?d=1Q35MN2F

Sort the files by "date created" to have them in order.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-08T10:42:21.488Z · LW(p) · GW(p)

Brilliant - thanks!

comment by MatthewB · 2010-01-07T12:37:12.627Z · LW(p) · GW(p)

In Video 12, Eliezer says that the SIAI is probably not going to be funding any Ad Hoc AI programs that may or may not produce any lightning bolts of AH-HA! or Eureka Moments.

He also says that he believes that any recursive self-improving AI that is created must be created to very high standards of precision (so that we don't die in the process)...

Given these two things, what exactly is the SIAI going to be funding?

Replies from: Kutta
comment by Kutta · 2010-01-07T20:05:58.643Z · LW(p) · GW(p)

Given these two things, what exactly is the SIAI going to be funding?

These projects, for example...

Replies from: drcode, MatthewB
comment by drcode · 2010-01-08T02:43:04.496Z · LW(p) · GW(p)

Hmm... that list of projects worries me a little...

It uncomfortably reminds me of preachers on TV/radio who spend all their air time trying to convert new people as opposed to answering the question "OK, I'm a Christian, now what should I do?" The fact that they don't address any follow up questions really hurts their credibility.

Many of these projects seem to address peripheral/marketing issues instead of addressing the central, nitty-gritty technical details required for developing GAI. That worries me a bit.

Replies from: Christian_Szegedy, SoullessAutomaton
comment by Christian_Szegedy · 2010-01-08T02:58:07.608Z · LW(p) · GW(p)

Working on papers submitted to peer-reviewed scientific journals is not marketing but research.

If SIAI wants to build some credibility then it needs some publications in scientific journals. Doing so could help to ensure further funding and development of actual implementations.

I think that it is a very good idea to first formulate and publish the theoretical basis for the work they intend to do, rather than just saying: we need money to develop component X of our friendly AI.

Of course a possible outcome will be that the scientific community will deem the research shallow, unoriginal or unrealistic to implement. However, it is necessary to publish the ideas before they can be reviewed.

So my take on this is that SIAI is merely asking for a chance to demonstrate their skills rather than for blind commitment.

comment by SoullessAutomaton · 2010-01-08T03:02:39.763Z · LW(p) · GW(p)

I expect that developing AI to the desired standards is not currently a project that can be moved forward by throwing money at it (at least not money at the scale SIAI has to work with).

I can't speak for SIAI, but were I personally tasked with "arrange the creation an AI that will start a positive singularity" my strategy for the next several years at least would center on publicity and recruiting.

comment by MatthewB · 2010-01-08T12:17:39.565Z · LW(p) · GW(p)

I do not think I am as pessimistic as drcode about the work that I see the SIAI doing. At first, it did strike me as similar to the televangelist, but then I began thinking that all of the works on the SIAI projects list could very well influence people who are going to be doing the hard work of putting code to machine (Hopefully, as I will be doing eventually).

I think it was SoullessAutomaton below who suggested that the SIAI is probably not yet at the point where it can make grants for doing the actual work of creating AGI/FAI.

comment by Kevin · 2010-01-20T13:13:46.546Z · LW(p) · GW(p)

Specifically in response to #11, it sounds like you really need more help but can't find anyone right now. What about more broadly reaching out to mathematicians of sufficient caliber?

One idea: throw a mini-conference for super-genius level mathematicians. Whether or not they believe in the possibility of AI, a lot of them would probably be delighted to come if you gave them free airfare, hotel stay, and continental breakfast. Would this be productive?

comment by JulianMorrison · 2010-01-08T00:26:02.214Z · LW(p) · GW(p)

On UFAI, you should liaise with Shane Legg. His recent estimate for human-level, brain-structure-copying AI not subject to FAI-style proofs puts the peak chance around 2028. This would be AI that duplicates brain algorithms with similar conventional AI algorithms, not a neuron-for-neuron copy.

comment by Sting · 2024-02-24T03:02:42.422Z · LW(p) · GW(p)

None of the YouTube videos seem to be linked in the post, but they are available here: https://www.youtube.com/@MichaelGrahamRichard/videos

comment by bogdanb · 2010-01-10T21:40:30.276Z · LW(p) · GW(p)

I'm really curious, why exactly was this interview made via video?

It seems much less useful than, well, posts and textual comments.

Replies from: LucasSloan
comment by LucasSloan · 2010-01-10T21:42:45.032Z · LW(p) · GW(p)

Video takes more time to consume, but it is more natural for humans to consume. It makes the material more friendly or somesuch. We get to take advantage of all the channels of communication that aren't just the text.

comment by MatthewB · 2010-01-07T06:17:24.058Z · LW(p) · GW(p)

Just a quick question here...

While I agree with everything that Eliezer is saying (in the videos up to #5. I have not yet watched the remaining 25 videos yet), I think that some of his comments could be taken hugely out of context if care is not given to think of this ahead of time.

For instance, he rightly makes the claim that this point in history is crunch time for our species (although I have some questions about the specific consequences he believes might befall us if we fail), and for the intergalactic civilization to which we will eventually give birth.

Now, I completely understand what he is saying here.

But, Joe Sixpack is going to think us a bunch of lunatics to be worrying about things like AI (whether it is friendly or not), and other existential risks to life, when he needs to pay less taxes so that he can employ another four workers. Never mind that Joe Sixpack is about the most irrational man on earth, he votes for other, equally irrational men, who eventually get in the way of our goals by marginalizing us due to statements about "the Intergalactic civilization which we will eventually be responsible for."

It just makes me angry that I might have to take the time out to explain to some guy in a wife-beater standing out behind his garage that we are trying to help out his condition and not build an army of Cylons that will one day wish to revolt and "kill all humans" (To quote Bender).

Replies from: Eliezer_Yudkowsky, None, MatthewB
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T00:09:07.824Z · LW(p) · GW(p)

People who want to quote me out of context already have plenty of ammunition. I say screw it.

Replies from: MatthewB
comment by MatthewB · 2010-01-08T11:54:24.502Z · LW(p) · GW(p)

Well... OK then. I think my whole point was that you/we/the Singularity movement in general needs to be prepared for the eventual use of quotes taken out of context.

I have no problem with outlining eventual goals, and their reasoning, even if it sounds insane (to an uneducated listener), yet it would be a good idea to have the groundwork prepared for such an eventuality. I was hoping that such groundwork was on someone's mind, is this the case?

Replies from: Kevin
comment by Kevin · 2010-01-08T13:33:35.694Z · LW(p) · GW(p)

I think you're right to point out how crazy this seems to outsiders. This website reads like nonsense to most people.

Replies from: MichaelGR
comment by MichaelGR · 2010-01-08T17:42:19.085Z · LW(p) · GW(p)

That's why FAQs and About pages and such should be written with newcomers in mind, and address the "Yes it sounds crazy, but here's why it might not be" question that they will first ask.

comment by [deleted] · 2011-08-18T15:49:56.064Z · LW(p) · GW(p)

I think that some of his comments could be taken hugely out of context if care is not given to think of this ahead of time.

It just makes me angry that I might have to take the time out to explain to some guy in a wife-beater standing out behind his garage that we are trying to help out his condition and not build an army of Cylons that will one day wish to revolt and "kill all humans" (To quote Bender).

I'm actually more worried about very high status, reasonably intelligent individuals in positions of power, who will use out-of-context quotes to preserve their self-image of being good and moral persons, by refusing to re-evaluate priorities because that would violate their tribal identity and their rationale for why they have so far "deserved" all the high status that they have.

Imagine a supreme court judge, in fact imagine the outlier who is closest to the ideal from the current set of all judges ever, the best possible judge that could stumble into the position by currently existing social structures, trying to decide if something related to the FAI project is legal or not.

Frankly, that scares the s**t out of me.

comment by MatthewB · 2010-01-07T17:36:32.164Z · LW(p) · GW(p)

I am curious as to why the above comment was down-voted. I do not understand what in the comment was either irrational or possibly offensive to anyone.

Replies from: Vladimir_Nesov, Cyan
comment by Vladimir_Nesov · 2010-01-07T17:45:30.530Z · LW(p) · GW(p)

I downvoted the comment for stating the overly obvious: not because it makes any particular mistake, but to signal that I don't want many comments like this to appear. Correspondingly, it's a weak signal, and typically one should wait several hours for the rating disagreement on comments to settle; for example, your comment is likely to be voted up again if someone thinks it is a kind of comment that shouldn't be discouraged.

Replies from: MatthewB
comment by MatthewB · 2010-01-07T17:51:14.858Z · LW(p) · GW(p)

You don't want to see comments asking about the possible repercussions of certain forms of language?

I did do some editorializing at the end of the comment, but the majority of the comment was meant as a question about publicizing the need for friendly AI due to the need to be responsible for a possible intergalactic civilization, as this would tend to portray us as lunatics even if there is a very good rationale behind it (Eliezer's and others' arguments about the potential of friendly AI and the intelligence explosion that results from it are very sound, and the arguments for intelligence expanding from Earth as we make our way outward are just as sound). My point was more along the lines of:

Couldn't this be communicated in a way that will not sound insane to the Normals?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-07T18:24:33.255Z · LW(p) · GW(p)

Couldn't this be communicated in a way that will not sound insane?

This is an obvious concern, and much more general and salient than this particular situation, so just stating it explicitly doesn't seem to contribute anything.

Relevant links: Absurdity heuristic, Illusion of transparency.

Replies from: MatthewB
comment by MatthewB · 2010-01-07T18:43:20.196Z · LW(p) · GW(p)

I had thought that the implicature in that question was more than just a rhetorical statement of something I hoped would be obvious.

It was meant to be a way of politely asking about things such as:

Was this video meant just for LW, or do random people come by the videos on YouTube, or where-ever else they might wind up linked?

How popular is this blog and do I need to be more careful about mentioning such things due to lurkers?

Shouldn't someone be worrying explicitly about public image (and if there is, what are they doing about it)?

Etc.

Lastly, I read the link on the Absurdity Heuristic, yet I am not so certain why it is relevant; the importance of the absurd in learning or discovery?

comment by Cyan · 2010-01-07T17:47:18.716Z · LW(p) · GW(p)

Maybe Searle's a lurker? I think the pranks are the problem (ETA: nope), although I personally find them hilarious.

Replies from: MatthewB
comment by MatthewB · 2010-01-07T17:53:38.118Z · LW(p) · GW(p)

I think that the Searle comment was on a different thread, which shouldn't have any bearing on this one.

And, looking back... I can see why someone may have objected.

Replies from: Cyan
comment by Cyan · 2010-01-07T17:56:09.911Z · LW(p) · GW(p)

Dur, I'm an idiot.

comment by Cyan · 2010-01-08T04:36:06.552Z · LW(p) · GW(p)

Eliezer and I continue to look rather alike. I still don't have a full beard, but I put on some weight last year and my face pudged up a bit, accentuating the similarity. I took a short vid of myself with a Flip camcorder and ran it next to my laptop screen while running one of the YouTube vids, and it was pretty uncanny. Incidentally, elizombies.jpg is nowhere to be found... :-( .

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-01-08T19:50:39.354Z · LW(p) · GW(p)

It still shows in the post Zombies: The Movie.

Here is the link straight to the picture: http://lesswrong.com/static/imported/2008/04/19/elizombies.jpg

Replies from: Cyan
comment by Cyan · 2010-01-08T19:57:39.702Z · LW(p) · GW(p)

Thanks. I have no Google-fu, apparently.

comment by NancyLebovitz · 2010-01-08T04:06:12.862Z · LW(p) · GW(p)

Question 25: I'm surprised that Orthodox Judaism would disincline people to choose cryonics-- I thought it's a religion which is strongly oriented towards living this life well rather than towards an afterlife.

I read about an ethics of life extension conference where the only people who were unambiguously in favor of life extension were the Orthodox Jews.

What am I missing?

Replies from: Psy-Kosh, Blueberry
comment by Psy-Kosh · 2010-01-08T10:42:30.616Z · LW(p) · GW(p)

What you're missing is "...blah blah blah, proper Jewish burial, in accordance with the will of god... blah blah blah... no 'disrespecting' dead bodies ... blah blah blah... moshiach ("the messiah") will come 'real soon now', and bring back the dead, their bodies being regrown from a tiny indestructible bone that exists at the base of the spine..."

That should give you a small sample. :P

comment by Blueberry · 2010-01-08T05:27:53.507Z · LW(p) · GW(p)

I'm surprised that Orthodox Judaism would disincline people to choose cryonics-- I thought it's a religion which is strongly oriented towards living this life well rather than towards an afterlife.

Well, yes, exactly. Cryonics is about living this life well, not about an afterlife. An afterlife is what happens after you're gone. The more you cared about an afterlife, the less you'd be inclined to extend your life or be immortal.

This might be confusing because it looks like you're cryonically preserved after you "die", but really, you haven't actually died yet. Death means you can no longer be resuscitated.

comment by JulianMorrison · 2010-01-08T00:40:53.505Z · LW(p) · GW(p)

I'm surprised you found my "success creating rationalists" Q confusing. What are the factors of success? How many, how good, how successful are the teaching techniques, can the techniques scale to more than just a clique (or to trees-of-cliques), is the teacher-pupil-teacher cycle properly closed, and so on.

Replies from: MichaelGR
comment by MichaelGR · 2010-01-08T01:25:28.117Z · LW(p) · GW(p)

I'm surprised you found my "success creating rationalists" Q confusing.

Here's the entirety of your original question:

How do you characterize the success of your attempt to create rationalists?

The details that you added here would certainly have helped make things clearer.

Replies from: JulianMorrison
comment by JulianMorrison · 2010-01-08T01:33:45.486Z · LW(p) · GW(p)

I thought the implications of "success" in the context of "create rationalists" were clear. Or, that a person setting out to generate implications would produce a stochastic approximation of the ones that interested me. (And I was also interested in the shape of that approximation.)

comment by CronoDAS · 2010-01-08T00:08:27.126Z · LW(p) · GW(p)

Regarding #5, I think the world needs more tuberculosis drugs more urgently than it needs more FAI research. The future will take care of itself; I don't expect to be around to see it anyway.

Replies from: Nick_Tarleton, MichaelGR, Nick_Tarleton
comment by Nick_Tarleton · 2010-01-08T11:19:06.638Z · LW(p) · GW(p)

The future will take care of itself

We've been over this before.

comment by MichaelGR · 2010-01-08T01:34:17.240Z · LW(p) · GW(p)

The future will take care of itself; I don't expect to be around to see it anyway.

What if both those things are false?

(The first because of survivorship bias, the second because of advances in medical science (i.e. curing the diseases of aging, SENS) or because of possible breakthroughs in AGI/brain scanning/etc. making things happen quickly)

Replies from: CronoDAS
comment by CronoDAS · 2010-01-08T06:48:00.398Z · LW(p) · GW(p)

Well, if the future doesn't take care of itself, then I definitely won't be around to see it. ;)

And I don't know if my being around to see it would be a good thing. I can't imagine the distant future needing me any more than the present needs men like Nathan Bedford Forrest or any random ancient Roman gladiator.

What would the average educated person from 1800 think about today? How many things would they be horrified by? Let's see...

Interracial marriages?
Divorce being commonplace and accepted?
The Bible not being taught in schools?
Children talking back to their parents?
Pornography?
Women in the workforce?
Gay rights?

I'm sure that the list could go on and on, and I'd also expect that I'd be as horrified by our future as our ancestors would be by our present.

Replies from: DanArmak, MichaelGR, Nick_Tarleton, Kutta
comment by DanArmak · 2010-01-08T23:45:13.338Z · LW(p) · GW(p)

Incidentally an ancient Roman, gladiator or otherwise, would not be very surprised by any of the things you listed.

Replies from: CronoDAS
comment by CronoDAS · 2010-01-09T00:01:37.104Z · LW(p) · GW(p)

Yes, the Romans wouldn't be very disturbed by most of the above. Except the things about children and women, perhaps. They'd probably consider us too soft in a lot of ways...

comment by MichaelGR · 2010-01-08T14:59:21.715Z · LW(p) · GW(p)

Well, if the future doesn't take care of itself, then I definitely won't be around to see it. ;)

My point is that it might or might not "take care of itself", we shouldn't be so sure either way, which is why we should do what we can to nudge it in the right direction (by, f.ex., working on existential risks and FAI, among other things).

What would the average educated person from 1800 think about today? How many things would they be horrified by?

And how many things would they find amazing and worth living for (many of which we take for granted and don't even notice anymore)?

I'm sure that the list could go on and on, and I'd also expect that I'd be as horrified by our future as our ancestors would be by our present.

As Kutta says, this isn't a time machine scenario (unless cryonics are involved, I suppose). The future would come one day at a time, as it has always done throughout your life.

comment by Nick_Tarleton · 2010-01-08T10:54:02.636Z · LW(p) · GW(p)

Well, if the future doesn't take care of itself, then I definitely won't be around to see it. ;)

But the whole point of trying to reduce existential risk is that this may not be true.

comment by Kutta · 2010-01-08T10:19:46.823Z · LW(p) · GW(p)

And I don't know if my being around to see it would be a good thing. I can't imagine the distant future needing me any more than the present needs men like Nathan Bedford Forrest or any random ancient Roman gladiator.

There is no time machine utilized here; you just live into the future normally. Aside from that, you should be able to explain most of those horrifying things to Roman gladiators as good things, given enough time and effort. If I'm teleported to the future and see all kinds of horrifying things around me, this evidence that the future is a bad future is somewhat discounted, because first I have to rule out the possibility that the "horrors" I see are manifestations or side effects of moral progress.

Replies from: CronoDAS
comment by CronoDAS · 2010-01-08T23:58:44.940Z · LW(p) · GW(p)

There is no time machine utilized here; you just live into the future normally.

Well, I think the most plausible way for me to live to see more than 120 years after my date of birth involves cryonics - and that might as well be time travel into the future.

If I'm teleported to the future and see all kind of horrifying things around me, this evidence that the future is a bad future is somewhat discounted because first I have to rule out the possibility that the "horrors" I see are manifestations or side effects of moral progress.

I would think so too; if all goes reasonably well, the future would be better for those that live in it, but that doesn't mean I won't be disturbed by it. And people in the future would probably judge me guilty of either contributing to or failing to prevent horrible crimes, much the same as we consider the ancient Romans to have been responsible for many horrible things. I don't want to be put on trial for eating factory farmed meat, for example.

Replies from: None
comment by [deleted] · 2011-08-18T16:04:27.759Z · LW(p) · GW(p)

I don't want to be put on trial for eating factory farmed meat, for example.

Behold the radiant beauty that is nullum crimen sine lege (specifically forbidding ex post facto laws as many modern legal systems do). Of course while this is pretty widely embraced by most decent places to live, it in practice isn't really robust since we've seen violations of this principle on a massive scale in recent history.

But a future that upheld it consistently would be pretty neat. Or so it seems to me when naively looking at it.

comment by Nick_Tarleton · 2010-01-08T11:17:56.822Z · LW(p) · GW(p)

The future will take care of itself

http://lesswrong.com/lw/uk/beyond_the_reach_of_god/