The Yudkowsky Ambition Scale
post by loup-vaillant · 2012-09-12T15:08:06.292Z · LW · GW · Legacy · 61 comments
From Hacker News.
1. We're going to build the next Facebook!
2. We're going to found the next Apple!
3. Our product will create sweeping political change! This will produce a major economic revolution in at least one country! (Seasteading would be change on this level if it worked; creating a new country successfully is around the same level of change as this.)
4. Our product is the next nuclear weapon. You wouldn't want that in the wrong hands, would you?
5. This is going to be the equivalent of the invention of electricity if it works out.
6. We're going to make an IQ-enhancing drug and produce basic change in the human condition.
7. We're going to build serious Drexler-class molecular nanotechnology.
8. We're going to upload a human brain into a computer.
9. We're going to build a recursively self-improving Artificial Intelligence.
10. We think we've figured out how to hack into the computer our universe is running on.
This made me laugh, but from the look of it, it would take little work to make it serious. Personally, I'd try to shorten it so it's punchier and more memorable.
61 comments
Comments sorted by top scores.
comment by Oscar_Cunningham · 2012-09-13T09:20:07.433Z · LW(p) · GW(p)
I can't find the comment of Eliezer's that inspired this, but:
The "If-you-found-out-that-God-existed scale of ambition".
1) "Well obviously if I found out God exists I'd become religious, go to church on Sundays etc."
2) "Actually, most religious people don't seem to really believe what their religion says. If I found out that God existed I'd have to become a fundamentalist, preaching to save as many people from hell as I could."
3) "Just because God exists, doesn't mean that I should worship him. In fact, if Hell exists then God is really evil, and I should put all my effort into killing God and rescuing everyone from hell. Sure it sounds impossible, but I wouldn't give up until I'd thought about the problem and tried all possible courses of action."
4) "God is massively powerful. Sure I'd kill him if I had to, but that would be a catastrophic waste. My true aim would be to harness God's power and use it to do good."
↑ comment by advancedatheist · 2012-09-14T04:42:16.087Z · LW(p) · GW(p)
3) "Just because God exists, doesn't mean that I should worship him. In fact, if Hell exists then God is really evil, and I should put all my effort into killing God and rescuing everyone from hell. Sure it sounds impossible, but I wouldn't give up until I'd thought about the problem and tried all possible courses of action."
But if you succeed in pulling everyone from hell, what would give their existences meaning and purpose? I mean, you just can't thwart god's sovereign will for his creatures without consequences. God created them for damnation as their telos from the very beginning, just as he created others to receive totally undeserved salvation.
↑ comment by Mestroyer · 2012-09-14T07:29:42.135Z · LW(p) · GW(p)
I would rather have no purpose (originating in myself or in someone else) than have the outside-given purpose of suffering. If they cared about anything when they got out of hell, that would be their purpose though.
But I would expect them all to be insane from centuries of torture.
↑ comment by Multiheaded · 2012-09-14T13:05:55.050Z · LW(p) · GW(p)
That was a bit of misplaced sarcasm, I assume.
↑ comment by advancedatheist · 2012-09-14T14:47:51.800Z · LW(p) · GW(p)
I tried to imagine what a Calvinist would say.
↑ comment by advancedatheist · 2012-09-14T04:33:50.998Z · LW(p) · GW(p)
4) "God is massively powerful. Sure I'd kill him if I had to, but that would be a catastrophic waste. My true aim would be to harness God's power and use it to do good."
In other words, you want to convert god into a Krell Machine that works properly?
↑ comment by ChristianKl · 2014-02-27T20:38:42.957Z · LW(p) · GW(p)
That's Eliezer's life mission: preventing a UFAI and instead having an FAI.
comment by khafra · 2012-09-12T17:33:08.549Z · LW(p) · GW(p)
My ambition is infinite but not limitless. I don't think I can re-arrange the small natural numbers.
↑ comment by Will_Newsome · 2012-09-12T18:02:37.245Z · LW(p) · GW(p)
Quoting Michael Vassar and myself; I think we independently thought of it.
↑ comment by Exiles · 2012-09-12T23:13:01.838Z · LW(p) · GW(p)
https://twitter.com/nicktarleton/status/115615378188668928
↑ comment by Nick_Tarleton · 2012-09-14T20:17:54.286Z · LW(p) · GW(p)
That's me quoting Michael Vassar.
comment by radical_negative_one · 2012-09-12T16:53:05.274Z · LW(p) · GW(p)
Just one suggestion: come up with a new goal to put at the top of the list, and shift the rest down. That way, "how to hack into the computer our universe is running on" would be "up to 11" on the list.
The new #1 item could be something like "We're going to make yet another novelty t-shirt store!"
↑ comment by Athrelon · 2012-09-12T17:16:23.298Z · LW(p) · GW(p)
Since it's basically a log scale in terms of outcomes, the T-shirt store might be a 0.
-10 would be "I will make a generic post on LW."
It would be a fun exercise to flesh out the negative side of the scale.
↑ comment by faul_sname · 2012-09-12T18:21:51.338Z · LW(p) · GW(p)
-15: I will specify a single item on the negative side of the scale.
↑ comment by TimS · 2012-09-12T18:25:01.281Z · LW(p) · GW(p)
-20: I will critique a potential addition to the list without adding a suggestion of my own.
↑ comment by faul_sname · 2012-09-12T18:36:25.071Z · LW(p) · GW(p)
-21:
↑ comment by SilasBarta · 2012-09-12T19:07:56.222Z · LW(p) · GW(p)
-20 - 2j: I will object to being called "miss" ("Thank you, miss"), without offering an alternative form of address, or thinking about what the proper one would be, and then after a lot of back-and-forth, agree that "miss" was appropriate in that context.
j = sqrt(-1) -- this is kinda orthogonal to ambition, but has the same counterproductiveness
↑ comment by Pentashagon · 2012-09-12T17:21:08.722Z · LW(p) · GW(p)
Nah.
11. We think we've figured out how to hack into the computer ALL the universes are running on.
↑ comment by Shmi (shminux) · 2012-09-12T18:53:19.461Z · LW(p) · GW(p)
12. Create your own universe tree.
comment by lsparrish · 2012-09-12T18:26:40.993Z · LW(p) · GW(p)
Nice! I'm thinking my idea of a self-adjusting currency (one that uses a peer-to-peer proof-of-work algorithm which solves useful NP problems as a side effect, and incorporates automated credit ratings based on debt-repayment and contract-fulfillment rates) is probably in the 3 range. But if I hook it up to a protein-folding game that teaches advanced biochemistry to akrasiatic gamers as a side effect, it could be boosted up to the 6 range.
↑ comment by SilasBarta · 2012-09-14T00:28:08.022Z · LW(p) · GW(p)
If you ignore the credit rating system, and replace its hash algorithm with a variable-length (expanding) one, that's basically what Bitcoin is. (Inversion of variable-length collision-resistant hash functions is NP-hard. I had to ask on that one.)
[EDIT: That question has been dead for a while, but now that I posted a link, it got another answer, which basically repeats the first answer and needlessly retreads why I had to rephrase the question so that the hash-inversion problem has a variable size (so that asymptotic difficulty becomes meaningful), making it non-responsive to the question as now phrased. I hope it wasn't someone from here that clicked that link.]
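For concreteness, here's a minimal sketch of that flavor of proof of work: hash data with an incrementing nonce until the digest falls below a difficulty target. (Illustrative only; this is not Bitcoin's actual block format.)

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(block_data + nonce) has at least
    difficulty_bits leading zero bits (a partial-preimage search)."""
    target = 1 << (256 - difficulty_bits)  # digests below this value win
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Finding a nonce takes ~2**difficulty_bits hashes; verifying takes one.
print(mine(b"example block", difficulty_bits=16))
```

That asymmetry (expensive to find, one hash to verify) is the property any useful-work replacement would have to preserve.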
They've made a lot of progress getting games to derive protein-folding results, but I think there's a lot of room for improvement there (better fidelity to the laws of the protein folding environment so players can develop an "intuition" of what shapes work, semiotics that are more suggestive of the dynamics of their subsystems, etc).
↑ comment by gwern · 2012-09-14T01:35:41.027Z · LW(p) · GW(p)
I've been musing about the same sort of proof-of-work algorithm, but I haven't come up with a good actual system yet - there's no obvious way to decentralizedly get a guaranteed-hard new useful problem.
↑ comment by lsparrish · 2012-09-15T00:34:05.460Z · LW(p) · GW(p)
Interesting! I was actually inspired by some of your IRC comments.
I am thinking the problems would be produced by peers and assigned to one another using a provably random assignment scheme. When assigned a problem, each peer has the option to ignore it or attempt a solution; if they choose to ignore it, they are assigned another one. Each time this happens to a problem, the network treats it as evidence that the problem is a hard one. If someone solves a problem scored as hard, they get a better chance of winning the block. (This would be accomplished by appending the solution as a nonce in a bitcoin-like arrangement and setting the minimum difficulty based on the hardness ranking.)
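A toy sketch of how this could fit together, assuming a public per-round seed that every peer agrees on (names and structures here are made up for illustration; a real network would additionally need signatures and consensus on assignments):

```python
import random
from collections import defaultdict

problems = {"p1": "fold protein A", "p2": "route graph B"}  # peer-submitted
ignore_counts = defaultdict(int)  # times each problem was passed over

def assign(peer_id: str, round_seed: str) -> str:
    """Deterministic "provably random" assignment: seeding with public
    data lets any peer recompute and audit who was assigned what."""
    rng = random.Random(round_seed + ":" + peer_id)
    return rng.choice(sorted(problems))

def record_ignore(problem_id: str) -> None:
    ignore_counts[problem_id] += 1  # refusals count as evidence of hardness

def min_block_difficulty(problem_id: str, base: int = 20) -> int:
    """Solving a problem scored as harder buys a lower minimum hash
    difficulty when its solution is appended as the nonce, bitcoin-style."""
    return max(1, base - ignore_counts[problem_id])

record_ignore("p1")
print(assign("peer42", "round7"), min_block_difficulty("p1"))
```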
↑ comment by gwern · 2012-09-15T00:56:22.849Z · LW(p) · GW(p)
Hm. It never occurred to me that provable randomness might be useful... As stated, I don't think your scheme works because of Sybil attacks (see the sketch after this list):
- I come up with some easy NP problem or one already solved offline
- I pass it around my 10,000 fake IRC nodes who all sign it,
- and present the solution to the network
- $$$
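In toy form, the attack needs no computation at all; the hardness score is just a counter that anyone with free identities can inflate (a minimal sketch with made-up names, assuming the ignore-count scoring described above):

```python
from collections import defaultdict

# Toy model of the hardness scoring from the scheme above.
ignore_counts = defaultdict(int)

def record_ignore(problem_id: str) -> None:
    ignore_counts[problem_id] += 1  # the network reads each refusal as hardness

easy_problem = "np-instance-solved-offline"
for _ in range(10_000):             # 10,000 sock-puppet nodes, one machine
    record_ignore(easy_problem)

# Scored as extremely hard, so its precomputed solution now wins blocks cheaply.
print(ignore_counts[easy_problem])  # -> 10000
```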
comment by Manfred · 2012-09-12T16:48:37.085Z · LW(p) · GW(p)
It's interesting that 2 isn't particularly easier than 9, assuming 9 is possible. The scale measures effect, and though there are differences in difficulty, they're not the point.
↑ comment by DanArmak · 2012-09-12T18:34:59.528Z · LW(p) · GW(p)
2 has been done many times in human history (for some reasonable definition of which companies count as "previous Apples"). 9 has never been done. Why do you think 9 is no harder than 2, assuming it is possible?
↑ comment by Manfred · 2012-09-12T20:04:24.934Z · LW(p) · GW(p)
9 has been done many times in human history too, for some reasonable definition of "create a better artificial optimizer."
Anyhow, to answer your question, I'm just guessing, based on calling "difficulty" something like marginal resources per rate of success. If you gave me 50 million dollars and said "make 2 happen," versus if you gave me 50 million dollars and said "make 9 happen," basically. Sure, someone is more likely to do 2 in the next few years than 9, ceteris paribus. But a lot more resources are on 2 (though there's a bit of a problem with this metric since 9 scales worse with resources than 2).
↑ comment by evand · 2012-09-12T20:06:51.654Z · LW(p) · GW(p)
That's why 9 specifies "recursively self-improving", not "build a better optimizer", or even a recursively improving optimizer. The computer counts as recursively improving, imho; it just needs some help, so it's not self-improving.
↑ comment by TheOtherDave · 2012-09-12T21:48:44.211Z · LW(p) · GW(p)
Presumably, if anyone ever solves 9, so did their mom.
Which is not in fact intended as a "your mom" joke, but I don't see any way around it being read that way.
↑ comment by Manfred · 2012-09-12T20:17:40.990Z · LW(p) · GW(p)
If self-improving intelligence is somewhere on the hierarchy of "better optimizers," you just have to make better optimizers, and eventually you can make a self-improving optimizer. Easy peasy :P Note that this uses the assumption that it's possible, and requires you to be charitable about interpreting "hierarchy of optimizers."
comment by NancyLebovitz · 2012-09-12T21:29:12.170Z · LW(p) · GW(p)
When I posted about the possibility of raising the sanity waterline enough to improve the comments at YouTube, it actually felt wildly ambitious.
Where would achieving that much fit on the list?
↑ comment by see · 2012-09-13T04:49:38.484Z · LW(p) · GW(p)
I think, given how many millions of minds it would have to affect and how much sanity increase it would require, it sounds a lot like 6 in practice. (Unless the approach is "Build a company big enough to buy Google, and then limit comments to people who are sane", in which case, 2.)
comment by [deleted] · 2012-09-13T00:28:14.978Z · LW(p) · GW(p)
You'll need at least two levels below 1 to make it really useful.
- I'm going to watch TV
- I'm going to have a career
- I'm going to start a successful company
- I'm going to build the next Facebook ...
comment by Shmi (shminux) · 2012-09-12T21:13:44.951Z · LW(p) · GW(p)
Any past examples of level 6 and up?
Replies from: evand↑ comment by evand · 2012-09-12T21:48:09.670Z · LW(p) · GW(p)
Level 6 seems like it could include both language and writing. For stuff beyond that, I think you have to look at accomplishments by non-human entities. Bacteria would seem to count for level 7, humans for 8 and possibly 9 (TBD).
comment by siodine · 2012-09-12T16:45:03.574Z · LW(p) · GW(p)
I wonder where "We're going to modify the process of science so that it recursively self-improves for the purpose of maximizing its benefit to humanity" would be? Would it be less or more ambitious than SI's goal (even though it should accomplish SI's goal by working towards FAI)?
comment by Will_Newsome · 2012-09-12T17:23:12.778Z · LW(p) · GW(p)
This scale needs to go to about 100 at this rate.
↑ comment by faul_sname · 2012-09-12T18:34:42.573Z · LW(p) · GW(p)
10 (hacking the physics of the universe), 11 (hacking the source of the computational power running the universe, if applicable), or 12 (gaining access to literally infinite computing power i.e. becoming a god) seem to be the highest you can go. How would you propose getting past 12?
↑ comment by Shmi (shminux) · 2012-09-12T18:51:53.669Z · LW(p) · GW(p)
Easy, create (and destroy for fun) your own universes and meta-universes, complete with their own demiurges who think that they are gods.
↑ comment by faul_sname · 2012-09-12T19:28:20.400Z · LW(p) · GW(p)
Ok, 13 or 14... Okay, I can sort of see how you might get to 100, given a few billion years to think of ideas.