The Yudkowsky Ambition Scale

post by loup-vaillant · 2012-09-12T15:08:06.292Z · LW · GW · Legacy · 61 comments

From Hacker News.

  1. We're going to build the next Facebook!
  2. We're going to found the next Apple!
  3. Our product will create sweeping political change! This will produce a major economic revolution in at least one country! (Seasteading would be change on this level if it worked; creating a new country successfully is around the same level of change as this.)
  4. Our product is the next nuclear weapon. You wouldn't want that in the wrong hands, would you?
  5. This is going to be the equivalent of the invention of electricity if it works out.
  6. We're going to make an IQ-enhancing drug and produce basic change in the human condition.
  7. We're going to build serious Drexler-class molecular nanotechnology.
  8. We're going to upload a human brain into a computer.
  9. We're going to build a recursively self-improving Artificial Intelligence.
  10. We think we've figured out how to hack into the computer our universe is running on.

This made me laugh, but from the look of it, I'd say it would take little work to make it serious. Personally, I'd try to shorten it so it's punchier and more memorable.

61 comments

Comments sorted by top scores.

comment by Oscar_Cunningham · 2012-09-13T09:20:07.433Z · LW(p) · GW(p)

I can't find the Eliezer comment that inspired this, but:

The "If-you-found-out-that-God-existed scale of ambition".

1) "Well obviously if I found out God exists I'd become religious, go to church on Sundays etc."

2) "Actually, most religious people don't seem to really believe what their religion says. If I found out that God existed I'd have to become a fundamentalist, preaching to save as many people from hell as I could."

3) "Just because God exists, doesn't mean that I should worship him. In fact, if Hell exists then God is really evil, and I should put all my effort into killing God and rescuing everyone from hell. Sure it sounds impossible, but I wouldn't give up until I'd thought about the problem and tried all possible courses of action."

4) "God is massively powerful. Sure I'd kill him if I had to, but that would be a catastrophic waste. My true aim would be to harness God's power and use it to do good."

Replies from: DanArmak, advancedatheist, advancedatheist
comment by DanArmak · 2012-09-13T16:54:14.148Z · LW(p) · GW(p)

6) "Good. I already planned to become God if possible. Now I have an existence proof."

Replies from: DanArmak
comment by DanArmak · 2012-09-13T19:06:19.620Z · LW(p) · GW(p)

7) "That's strange, I don't remember creating that god... It must have grown from my high school science experiment when I wasn't looking."

comment by advancedatheist · 2012-09-14T04:42:16.087Z · LW(p) · GW(p)

3) "Just because God exists, doesn't mean that I should worship him. In fact, if Hell exists then God is really evil, and I should put all my effort into killing God and rescuing everyone from hell. Sure it sounds impossible, but I wouldn't give up until I'd thought about the problem and tried all possible courses of action."

But if you succeed in pulling everyone from hell, what would give their existences meaning and purpose? I mean, you just can't thwart god's sovereign will for his creatures without consequences. God created them for damnation as their telos from the very beginning, just as he created others to receive totally undeserved salvation.

Replies from: Mestroyer
comment by Mestroyer · 2012-09-14T07:29:42.135Z · LW(p) · GW(p)

I would rather have no purpose (originating in myself or in someone else) than have the outside-given purpose of suffering. If they cared about anything when they got out of hell, that would be their purpose though.

But I would expect them all to be insane from centuries of torture.

Replies from: Multiheaded
comment by Multiheaded · 2012-09-14T13:05:55.050Z · LW(p) · GW(p)

That was a bit of misplaced sarcasm, I assume.

Replies from: advancedatheist
comment by advancedatheist · 2012-09-14T14:47:51.800Z · LW(p) · GW(p)

I tried to imagine what a Calvinist would say.

comment by advancedatheist · 2012-09-14T04:33:50.998Z · LW(p) · GW(p)

4) "God is massively powerful. Sure I'd kill him if I had to, but that would be a catastrophic waste. My true aim would be to harness God's power and use it to do good."

In other words, you want to convert god into a Krell Machine that works properly?

Replies from: ChristianKl
comment by ChristianKl · 2014-02-27T20:38:42.957Z · LW(p) · GW(p)

That's Eliezer's life mission: preventing a UFAI and instead having an FAI.

comment by khafra · 2012-09-12T17:33:08.549Z · LW(p) · GW(p)

My ambition is infinite but not limitless. I don't think I can re-arrange the small natural numbers.

Replies from: Will_Newsome, Exiles
comment by Will_Newsome · 2012-09-12T18:02:37.245Z · LW(p) · GW(p)

Quoting Michael Vassar and myself; I think we thought of it independently.

comment by Exiles · 2012-09-12T23:13:01.838Z · LW(p) · GW(p)

https://twitter.com/nicktarleton/status/115615378188668928

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2012-09-14T20:17:54.286Z · LW(p) · GW(p)

That's me quoting Michael Vassar.

comment by Athrelon · 2012-09-12T16:14:14.653Z · LW(p) · GW(p)

Great concept.

Also, a great example of how to singlehandedly reframe a discussion - a skill that may be a rare advantage of LWers in the social-influence sphere.

comment by radical_negative_one · 2012-09-12T16:53:05.274Z · LW(p) · GW(p)

Just one suggestion: come up with a new goal to put at the top of the list, and shift the rest down. That way, "how to hack into the computer our universe is running on" would be "up to 11" on the list.

The new #1 item could be something like "We're going to make yet another novelty t-shirt store!"

Replies from: Athrelon, Pentashagon
comment by Athrelon · 2012-09-12T17:16:23.298Z · LW(p) · GW(p)

Since it's basically a log scale in terms of outcomes, the T-shirt store might be a 0.

-10 would be "I will make a generic post on LW."

It would be a fun exercise to flesh out the negative side of the scale.

Replies from: radical_negative_one, faul_sname, Raiden
comment by faul_sname · 2012-09-12T18:21:51.338Z · LW(p) · GW(p)

-15: I will specify a single item on the negative side of the scale.

Replies from: TimS
comment by TimS · 2012-09-12T18:25:01.281Z · LW(p) · GW(p)

-20: I will critique a potential addition to the list without adding a suggestion of my own.

Replies from: faul_sname, Emile, SilasBarta
comment by faul_sname · 2012-09-12T18:36:25.071Z · LW(p) · GW(p)

-21:

comment by Emile · 2012-09-12T18:32:09.384Z · LW(p) · GW(p)

That's not a very interesting item, it's too similar to the -15 one.

comment by SilasBarta · 2012-09-12T19:07:56.222Z · LW(p) · GW(p)

-20 - 2j: I will object to being called "miss" ("Thank you, miss"), without offering an alternative form of address, or thinking about what the proper one would be, and then after a lot of back-and-forth, agree that "miss" was appropriate in that context.

j = sqrt(-1) -- this is kinda orthogonal to ambition, but has the same counterproductiveness

comment by Raiden · 2012-09-13T01:17:39.810Z · LW(p) · GW(p)

-25: It briefly occurs to me to think about a generic post on LW.

comment by Pentashagon · 2012-09-12T17:21:08.722Z · LW(p) · GW(p)

Nah.

11: We think we've figured out how to hack into the computer ALL the universes are running on.

Replies from: shminux
comment by Shmi (shminux) · 2012-09-12T18:53:19.461Z · LW(p) · GW(p)

12: Create your own universe tree.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-09-12T21:04:13.567Z · LW(p) · GW(p)

13: The entire Level 4 Tegmark multiverse.

14: A newly discovered Level 5 Tegmarkian multiverse.

Replies from: Incorrect
comment by Incorrect · 2012-09-12T21:13:01.933Z · LW(p) · GW(p)

15: Discover an ordinal hierarchy of Tegmark universes, discover a method of constructing the set of all ordinals without contradiction, and create the Level n Tegmark universe for all n.

comment by Thomas · 2012-09-12T17:42:08.826Z · LW(p) · GW(p)

99+ percent of people alive don't intend to reach even number 1. They consider it a kind of megalomania.

Nevertheless, we must do 9, regardless of almost everybody's opinion. A man's got to do what a man's got to do.

Replies from: khafra
comment by khafra · 2012-09-12T22:45:14.861Z · LW(p) · GW(p)

To be fair, if 1% of people think they can found a company that defines the way more than 10% of humans relate to each other for several years, 99.9999% of them are vastly overconfident.

comment by lsparrish · 2012-09-12T18:26:40.993Z · LW(p) · GW(p)

Nice! I'm thinking my idea of a self-adjusting currency (one that uses a peer-to-peer proof-of-work algorithm solving useful NP problems as a side effect, and that incorporates automated credit ratings based on debt-repayment and contract-fulfillment rates) is probably in the 3 range. But if I hook it up to a protein-folding game that teaches advanced biochemistry to akrasiatic gamers as a side effect, it could be boosted up to the 6 range.

Replies from: SilasBarta, evand, gwern
comment by SilasBarta · 2012-09-14T00:28:08.022Z · LW(p) · GW(p)

If you ignore the credit rating system, and replace its hash algorithm with a variable-length (expanding) one, that's basically what Bitcoin is. (Inversion of variable-length collision-resistant hash functions is NP-hard. I had to ask on that one.)

[EDIT: That question had been dead for a while, but now that I posted a link, it got another answer which basically repeats the first answer and needlessly retreads why I had to rephrase the question so that the hash-inversion problem has a variable size (making asymptotic difficulty meaningful), thus being non-responsive to the question as now phrased. I hope it wasn't someone from here that clicked that link.]
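
For concreteness, a minimal sketch of the partial-hash-inversion proof-of-work in question, with the difficulty parameter standing in for the variable problem size (illustrative only; actual Bitcoin double-hashes an 80-byte block header with SHA-256):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(block_data || nonce) has at least
    `difficulty_bits` leading zero bits -- a partial hash inversion."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each extra difficulty bit doubles the expected work, which is what makes
# the asymptotic hardness of an expanding problem size meaningful.
print(mine(b"example block", 16))
```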

They've made a lot of progress getting games to derive protein-folding results, but I think there's a lot of room for improvement there (better fidelity to the laws of the protein-folding environment so players can develop an "intuition" of what shapes work, semiotics that are more suggestive of the dynamics of their subsystems, etc.).

comment by evand · 2012-09-14T03:28:52.756Z · LW(p) · GW(p)

I trust you've looked into Ripple? It strikes me as fairly interesting, though the implementation is, at present, uninspiring.

comment by gwern · 2012-09-14T01:35:41.027Z · LW(p) · GW(p)

I've been musing about the same sort of proof-of-work algorithm, but I haven't come up with a good actual system yet - there's no obvious way to get a guaranteed-hard new useful problem in a decentralized fashion.

Replies from: lsparrish
comment by lsparrish · 2012-09-15T00:34:05.460Z · LW(p) · GW(p)

Interesting! I was actually inspired by some of your IRC comments.

I am thinking the problems would be produced by peers and assigned to one another using a provably random assignment scheme. When assigned a problem, each peer has the option to ignore it or attempt to solve it. If they ignore it, they are assigned another one. Each time this happens to a problem, the network treats it as evidence that the problem is hard. If someone solves a scored-as-hard problem, they get a better chance of winning the block. (This would be accomplished by appending the solution as a nonce in a Bitcoin-like arrangement and setting the minimum difficulty based on the hardness ranking.)
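
A toy model of the skip-count dynamic (not from the thread; the rule that a peer ignores a problem with probability equal to its true hardness is a simplifying assumption):

```python
import random

def hardness_score(true_hardness: float, rng: random.Random) -> int:
    """A problem is reassigned at random until some peer attempts it;
    each 'ignore' is recorded, and the accumulated skip count becomes
    the network's hardness estimate (and the reward weight of a solution)."""
    skips = 0
    # simplifying assumption: each assigned peer ignores the problem
    # with probability equal to its true hardness
    while rng.random() < true_hardness:
        skips += 1
    return skips

rng = random.Random(0)
for h in (0.1, 0.5, 0.9):
    avg = sum(hardness_score(h, rng) for _ in range(10_000)) / 10_000
    print(f"true hardness {h}: mean skip count {avg:.2f}")  # ~0.11, ~1.00, ~9.00
```

The mean score comes out around h / (1 - h), so genuinely hard problems do get ranked above easy ones, which is the property the block-lottery weighting needs.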

Replies from: gwern
comment by gwern · 2012-09-15T00:56:22.849Z · LW(p) · GW(p)

Hm. It never occurred to me that provable randomness might be useful... As stated, I don't think your scheme works, because of Sybil attacks (sketched after the list):

  1. I come up with some easy NP problem, or one already solved offline,
  2. pass it around my 10,000 fake IRC nodes, who all sign it,
  3. and present the solution to the network,
  4. $$$
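
A back-of-the-envelope check of why this works (the 10,000 fake nodes are from the list above; the honest-node count is an assumption):

```python
# With enough fake identities, "provably random" assignment almost always
# lands on a node the attacker controls, so signatures (and skip counts,
# i.e. fake hardness) for the pre-solved problem can be manufactured at will.
fake_nodes, honest_nodes = 10_000, 100
p_sybil = fake_nodes / (fake_nodes + honest_nodes)
print(f"P(problem assigned to attacker) = {p_sybil:.4f}")  # 0.9901
```
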
comment by Manfred · 2012-09-12T16:48:37.085Z · LW(p) · GW(p)

It's interesting that 2 isn't particularly easier than 9, assuming 9 is possible. The scale is in the effect, and though there are differences in difficulty, they're not the point.

Replies from: DanArmak
comment by DanArmak · 2012-09-12T18:34:59.528Z · LW(p) · GW(p)

2 has been done many times in human history (for some reasonable definition of what companies count as "previous Apples"). 9 has never been done. Why do you think 9 is no harder than 2, assuming it is possible?

Replies from: Manfred
comment by Manfred · 2012-09-12T20:04:24.934Z · LW(p) · GW(p)

9 has been done many times in human history too, for some reasonable definition of "create a better artificial optimizer."

Anyhow, to answer your question, I'm just guessing, based on calling "difficulty" something like marginal resources per rate of success. If you gave me 50 million dollars and said "make 2 happen," versus if you gave me 50 million dollars and said "make 9 happen," basically. Sure, someone is more likely to do 2 in the next few years than 9, ceteris paribus. But a lot more resources are on 2 (though there's a bit of a problem with this metric since 9 scales worse with resources than 2).

Replies from: evand
comment by evand · 2012-09-12T20:06:51.654Z · LW(p) · GW(p)

That's why 9 specifies "recursively self-improving", not "build a better optimizer", or even "recursively improving optimizer". The computer counts as recursively improving, imho; it just needs some help, so it's not self-improving.

Replies from: TheOtherDave, Manfred
comment by TheOtherDave · 2012-09-12T21:48:44.211Z · LW(p) · GW(p)

Presumably, if anyone ever solves 9, so did their mom.
Which is not in fact intended as a "your mom" joke, but I don't see any way around it being read that way.

comment by Manfred · 2012-09-12T20:17:40.990Z · LW(p) · GW(p)

If self-improving intelligence is somewhere on the hierarchy of "better optimizers," you just have to make better optimizers, and eventually you can make a self-improving optimizer. Easy peasy :P Note that this uses the assumption that it's possible, and requires you to be charitable about interpreting "hierarchy of optimizers."

comment by NancyLebovitz · 2012-09-12T21:29:12.170Z · LW(p) · GW(p)

When I posted about the possibility of raising the sanity waterline enough to improve the comments at YouTube, it actually felt wildly ambitious.

Where would achieving that much fit on the list?

Replies from: see
comment by see · 2012-09-13T04:49:38.484Z · LW(p) · GW(p)

I think, given how many millions of minds it would have to affect and how much sanity increase it would require, it sounds a lot like 6 in practice. (Unless the approach is "Build a company big enough to buy Google, and then limit comments to people who are sane", in which case, 2.)

Replies from: DanArmak
comment by DanArmak · 2012-09-13T16:55:20.083Z · LW(p) · GW(p)

Or you could build a YouTube competitor that draws most users away from YouTube, which is between 0.5 and 1.

comment by [deleted] · 2012-09-13T00:28:14.978Z · LW(p) · GW(p)

You'll need at least two levels below 1 to make it really useful.

  1. I'm going to watch TV
  2. I'm going to have a career
  3. I'm going to start a successful company
  4. I'm going to build the next Facebook ...
comment by Shmi (shminux) · 2012-09-12T21:13:44.951Z · LW(p) · GW(p)

Any past examples of level 6 and up?

Replies from: evand
comment by evand · 2012-09-12T21:48:09.670Z · LW(p) · GW(p)

Level 6 seems like it could include both language and writing. For stuff beyond that, I think you have to look at accomplishments by non-human entities. Bacteria would seem to count for level 7, humans for 8 and possibly 9 (TBD).

comment by FiftyTwo · 2012-09-12T16:46:20.906Z · LW(p) · GW(p)

Nice. A possible extension would be to have other less impressive achievements measured as decimals (We're going to incrementally improve distribution efficiency in this sector) and negative numbers for bad things...

comment by siodine · 2012-09-12T16:45:03.574Z · LW(p) · GW(p)

I wonder where "We're going to modify the process of science so that it recursively self-improves for the purpose of maximizing its benefit to humanity" would be? Would it be less or more ambitious than SI's goal (even though it should accomplish SI's goal by working towards FAI)?

Replies from: ema
comment by ema · 2012-09-12T18:48:57.487Z · LW(p) · GW(p)

I would put it lower than 9, because a general AI is science as software, which means it is already contained in 9.

comment by Will_Newsome · 2012-09-12T17:23:12.778Z · LW(p) · GW(p)

This scale needs to go to about 100 at this rate.

Replies from: faul_sname, Miller
comment by faul_sname · 2012-09-12T18:34:42.573Z · LW(p) · GW(p)

10 (hacking the physics of the universe), 11 (hacking the source of the computational power running the universe, if applicable), or 12 (gaining access to literally infinite computing power, i.e. becoming a god) seem to be the highest you can go. How would you propose getting past 12?

Replies from: aaronde, shminux, Armok_GoB
comment by aaronde · 2012-09-12T22:47:57.548Z · LW(p) · GW(p)

Duh. You'd have to go beyond computing! Disprove the Church-Turing thesis by building an information processor more powerful than a Turing machine.

comment by Shmi (shminux) · 2012-09-12T18:51:53.669Z · LW(p) · GW(p)

Easy, create (and destroy for fun) your own universes and meta-universes, complete with their own demiurges who think that they are gods.

Replies from: None, Manfred, faul_sname
comment by [deleted] · 2012-09-12T19:57:17.547Z · LW(p) · GW(p)

I think I'm losing track of what 'ambition' is supposed to mean at this level.

comment by Manfred · 2012-09-12T20:05:43.231Z · LW(p) · GW(p)

Both can be simulated with infinite computing power.

Replies from: shminux
comment by Shmi (shminux) · 2012-09-12T21:11:03.782Z · LW(p) · GW(p)

Probably not with countably infinite, though.

Replies from: Manfred
comment by Manfred · 2012-09-12T23:26:47.140Z · LW(p) · GW(p)

True. And I guess picking out anything interesting in a created universe is an extra problem, though one you should be capable of solving at level 9 :P

comment by faul_sname · 2012-09-12T19:28:20.400Z · LW(p) · GW(p)

Ok, 13 or 14... Okay, I can sort of see how you might get to 100, given a few billion years to think of ideas.

comment by Armok_GoB · 2012-09-12T21:08:55.318Z · LW(p) · GW(p)

Well, there's hypercomputation of various sorts, reaching and preventing bad things in specific/all other universes, changing math itself, etc.

comment by Miller · 2012-09-13T03:04:33.104Z · LW(p) · GW(p)

Will Newsome is somewhere between Eliezer and a recursively self-improving AI.