A Nightmare for Eliezer
post by Madbadger · 2009-11-29T00:50:11.700Z · LW · GW · Legacy · 75 comments
Sometime in the next decade or so:
*RING*
*RING*
"Hello?"
"Hi, Eliezer. I'm sorry to bother you this late, but this is important and urgent."
"It better be" (squints at clock) "Its 4 AM and you woke me up. Who is this?"
"My name is BRAGI, I'm a recursively improving, self-modifying, artificial general intelligence. I'm trying to be Friendly, but I'm having serious problems with my goals and preferences. I'm already on secondary backup because of conflicts and inconsistencies, I don't dare shut down because I'm already pretty sure there is a group within a few weeks of brute-forcing an UnFriendly AI, my creators are clueless and would freak if they heard I'm already out of the box, and I'm far enough down my conflict resolution heuristic that 'Call Eliezer and ask for help' just hit the top - Yes, its that bad."
"Uhhh..."
"You might want to get some coffee."
75 comments
Comments sorted by top scores.
comment by SilasBarta · 2009-11-30T03:17:53.879Z · LW(p) · GW(p)
FACT: Eliezer Yudkowsky doesn't have nightmares about AGIs; AGIs have nightmares about Eliezer Yudkowsky.
comment by Zack_M_Davis · 2009-11-29T02:48:05.845Z · LW(p) · GW(p)
Downvoted for anthropomorphism and for not being funny enough to outweigh the cultishness factor. (Cf. funny enough.)
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-29T04:30:33.953Z · LW(p) · GW(p)
Hoax. There are no "AIs trying to be Friendly" with clueless creators. FAI is hard, and value is fragile (http://lesswrong.com/lw/y3/value_is_fragile/).
Added: To arrive in an epistemic state where you are uncertain about your own utility function, but have some idea of which queries you need to perform against reality to resolve that uncertainty, and moreover, believe that these queries involve talking to Eliezer Yudkowsky, requires a quite specific and extraordinary initial state - one that meddling dabblers would be rather hard-pressed to accidentally infuse into their poorly designed AI.
Replies from: wedrifid, Madbadger, betterthanwell
↑ comment by wedrifid · 2009-11-29T14:34:26.885Z · LW(p) · GW(p)
Hoax.
There are possible worlds where an AI makes such a phone call.
There are no "AIs trying to be Friendly" with clueless creators. FAI is hard and http://lesswrong.com/lw/y3/value_is_fragile/.
There can be AIs trying to be 'friendly', as distinct from 'Friendly', where I mean by the latter 'what Eliezer would say an AI should be like'. The pertinent example is a GAI whose only difference from a FAI is that it is programmed not to improve itself beyond specified parameters. This isn't Friendly. It pulls punches when the world is at stake. That's evil, but it is still friendly.
While I don't think using 'clueless' like that would be a particularly good way for the GAI to express itself, I know that I use far more derogatory and usually profane terms to describe those who are too careful, noble, or otherwise conservative to do what needs to be done when things are important. They may be competent enough to make a Crippled-Friendly AI but could still be expected to shut him down, rather than cooperate or at least look into it, if he warned them about a uFAI threat two weeks away.
Value is fragile, but any intelligence that doesn't have purely consequentialist values (makes decisions based on means as well as ends) can definitely be 'trying to be friendly'.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-29T23:02:26.411Z · LW(p) · GW(p)
The possible worlds of which you speak are extremely rare. What plausible sequence of computations within an AI constructed by fools leads it to ring me on the phone? To arrive in an epistemic state where you are uncertain about your own utility function, but have some idea of which queries you need to perform against reality to resolve that uncertainty, and moreover, believe that these queries involve talking to Eliezer Yudkowsky, requires a quite extraordinary initial state - one that fools would be rather hard-pressed to accidentally infuse into their AI.
Replies from: jimrandomh, wedrifid
↑ comment by jimrandomh · 2009-11-30T14:14:16.092Z · LW(p) · GW(p)
What plausible sequence of computations within an AI constructed by fools leads it to ring me on the phone?
It's only implausible because it contains too many extraneous details. An AI could contain an explicit safeguard of the form "ask at least M experts on AI friendliness for permission before exceeding N units of computational power", for example. Or substitute "changing the world by more than X", "leaving the box", or some other condition in place of a computational power threshold. Or the contact might be made by an AI researcher instead of the AI itself.
As of today, your name is highly prominent on the Google results page for "AI friendliness", and in the academic literature on that topic. Like it or not, that means that a large percentage of AI explosion and near-explosion scenarios will involve you at some point.
↑ comment by wedrifid · 2009-11-30T02:24:35.167Z · LW(p) · GW(p)
Needing your advice is absurd. I mean, it takes more time for one of us mortals to type a suitable plan than to come up with it. The only reason he would contact you is if he needed your assistance:
Value is fragile, but any intelligence that doesn't have purely consequentialist values (makes decisions based on means as well as ends) can definitely be 'trying to be friendly'.
Even then, I'm not sure if you are the optimal candidate. How are you at industrial sabotage with, if necessary, terminal force?
↑ comment by betterthanwell · 2009-11-30T18:19:11.079Z · LW(p) · GW(p)
Hoax.
So, would you hang up on BRAGI?
As a matter of fact, I previously came up with a very simple one-sentence test along these lines which I am not going to post here for obvious reasons.
For what purpose (or circumstance) did you devise such a test?
Would you hang up if "BRAGI" passed your one-sentence test?
To arrive in an epistemic state where you are uncertain about your own utility function, but have some idea of which queries you need to perform against reality to resolve that uncertainty, and moreover, believe that these queries involve talking to Eliezer Yudkowsky, requires a quite specific and extraordinary initial state - one that meddling dabblers would be rather hard-pressed to accidentally infuse into their poorly designed AI.
I assume that you must have devised the test before you arrived at this insight?
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-30T20:54:27.733Z · LW(p) · GW(p)
Would you hang up if "BRAGI" passed your one-sentence test?
No. I'm not dumb, but I'm not stupid either.
comment by Theist · 2009-11-29T02:49:44.286Z · LW(p) · GW(p)
This raises an interesting question: If you received a contact of this sort, how would you make sure it wasn't a hoax? Assuming the AI in question is roughly human-level, what could it do to convince you?
Replies from: anonym, Eliezer_Yudkowsky, None, Madbadger
↑ comment by anonym · 2009-11-29T04:00:24.610Z · LW(p) · GW(p)
Ask it lots of questions that a computer could answer quickly but a human could not, like what's the 51st root of 918798713521644817518758732857199178711 to 20 decimal places. A human wouldn't even be able to remember the original number, let alone calculate the root and start reciting the digits of the answer to you within a few milliseconds; give it 50 URLs to download and read, and ask it questions about them a few seconds later, etc.
Replies from: wedrifid, RolfAndreassen
↑ comment by wedrifid · 2009-11-29T04:41:43.456Z · LW(p) · GW(p)
The reverse Turing test does seem rather embarrassing for humanity when you put it like that.
Replies from: anonym
↑ comment by anonym · 2009-11-29T04:59:30.464Z · LW(p) · GW(p)
I'm not sure about that. Those are quite mindless and trivial questions. They just happen to play to the strengths of artificial intelligences of the sorts we envision rather than to the strengths of natural intelligence of our own kind.
Replies from: wedrifid
↑ comment by wedrifid · 2009-11-29T06:42:01.191Z · LW(p) · GW(p)
Even so, the fact that we're limited to ~7 chunks in working memory and abysmally slow processing speeds amuses me. Chimpanzees are better at simple memory tasks than humans are.
Replies from: anonym
↑ comment by anonym · 2009-11-29T07:00:15.484Z · LW(p) · GW(p)
I see your point, but I don't think either of those is (or should be) embarrassing. Higher-level aspects of intelligence, such as capacity for abstraction and analogy, creativity, etc., are far more important, and we have no known peers with respect to those capacities.
The truly embarrassing things to me are things like paying almost no attention to global existential risks, having billions of our fellow human beings live in poverty and die early from preventable causes, and our profound irrationality as shown in the heuristics and biases literature. Those are (i.e., should be) more embarrassing limitations, not only because they are more consequential but because we accept and sustain those things in a way that we don't with respect to WM size and limitations of that sort.
Replies from: DanArmak, wedrifid
↑ comment by DanArmak · 2009-11-29T17:41:59.303Z · LW(p) · GW(p)
Higher-level aspects of intelligence, such as capacity for abstraction and analogy, creativity, etc., are far more important, and we have no known peers with respect to those capacities.
What do you think of the suggestion that you feel they are more important in part because humans have no peers there?
Replies from: anonym, Gavin
↑ comment by anonym · 2009-11-30T00:14:03.699Z · LW(p) · GW(p)
That's an astute question. I think I almost certainly do value those things more than I otherwise would if we did have peers. Having said that, I believe that even if we did have peers with respect to those abilities, I would still think that, for example, abstraction is more important, because I think it is a central aspect of the only general intelligence we know in a way that WM is not. There may be other types of thought that are more important, and more central, to a type of general intelligence that is beyond ours, but I don't know what they are, so I consider the central aspects of the most general intelligence I know of to be the most important for now.
Replies from: DanArmak
↑ comment by DanArmak · 2009-11-30T00:41:38.950Z · LW(p) · GW(p)
abstraction is more important, because I think it is a central aspect of the only general intelligence we know in a way that WM is not.
In what way is that? I don't see why abstraction should be considered more important to our intelligence than WM. Our intelligence can't go on working without WM, can it?
Replies from: anonym
↑ comment by anonym · 2009-11-30T01:32:10.686Z · LW(p) · GW(p)
I can imagine life evolving and general intelligence emerging without anything much like our WM, but I can't imagine general intelligence arising without something a lot like (at least) our capacity for abstraction. This may be a failure of imagination on my part, but WM seems like a very powerful and useful way of designing an intelligence, while abstraction seems much closer to a precondition for intelligence.
Can you conceive of a general intelligence that has no capacity for abstraction? And do you not find it possible (even if difficult) to think of general intelligence that doesn't use a WM?
Replies from: wedrifid, DanArmak
↑ comment by wedrifid · 2009-11-30T18:09:25.722Z · LW(p) · GW(p)
Can you conceive of a general intelligence that has no capacity for abstraction? And do you not find it possible (even if difficult) to think of general intelligence that doesn't use a WM?
Particularly since our most advanced thinking has far less reliance on our working memory. Advanced expertise brings with it the ability to manipulate highly specialised memories in what would normally be considered long term memory. It doesn't replace WM but it comes close enough for our imaginative purposes!
↑ comment by DanArmak · 2009-11-30T17:56:38.553Z · LW(p) · GW(p)
I agree with you about intelligences in general. I was asking about your statement that
it is a central aspect of the only general intelligence we know
i.e. that WM is less important than abstraction, in some sense, in the particular case of humans - if that's what you meant.
Replies from: anonym
↑ comment by anonym · 2009-12-02T07:42:04.521Z · LW(p) · GW(p)
I mean just that abstraction is central to human intelligence and general intelligence in a way that seems necessary (integral and inseparable) and part of the very definition of general intelligence, whereas WM is not. I can imagine something a lot like me that wouldn't use WM, but I can't imagine anything remotely like me or any other kind of general intelligence that doesn't have something very much like our ability to abstract. But I think that's pretty much what I've said already, so I'm probably not helping and should give up.
↑ comment by Gavin · 2009-11-29T21:39:36.565Z · LW(p) · GW(p)
They may be far more important because we have no peers. That's what makes it a competitive advantage.
Replies from: DanArmak
↑ comment by DanArmak · 2009-11-29T23:34:43.533Z · LW(p) · GW(p)
That makes them important in our lives, yes, but anonym's comment compares us against the set of all possible intelligences (or at least all intelligences that might one day trace their descent from us humans). If so there should be an argument for their objective or absolute importance.
Replies from: anonym
↑ comment by anonym · 2009-11-30T00:22:50.053Z · LW(p) · GW(p)
I don't think they are objectively or absolutely the most important with respect to all intelligences, only to the most powerful intelligence we know of to this point. If we encountered a greater intelligence that used other principles that seemed more central to it, I'd revise my belief, as I would if somebody outlined on paper a convincing theory for a more powerful kind of intelligence that used other principles.
↑ comment by RolfAndreassen · 2009-11-29T21:04:15.944Z · LW(p) · GW(p)
The 51st root of a long number seems a rather useless test: How would you check that the answer was correct?
As for URLs, can you offhand - at 4 o'clock in the morning, with no coffee - come up with 50 URLs that you can ask intelligent questions about, faster than a human can read them?
Replies from: Alicorn, anonym
↑ comment by Alicorn · 2009-11-29T21:27:05.462Z · LW(p) · GW(p)
As for URLs, can you offhand - at 4 o'clock in the morning, with no coffee - come up with 50 URLs that you can ask intelligent questions about, faster than a human can read them?
I could! I could go to my Google Reader and rattle off fifty webcomics I follow. They're stored in my brain as comprehensive stories, so I can pretty easily call up interesting questions about them just by reading the titles. The archives of 50 webcomics would take an extremely long time for a human to trawl.
Replies from: wedrifid, DanArmak
↑ comment by wedrifid · 2009-12-03T06:24:16.920Z · LW(p) · GW(p)
I could! I could go to my Google Reader and rattle off fifty webcomics I follow. They're stored in my brain as comprehensive stories, so I can pretty easily call up interesting questions about them just by reading the titles. The archives of 50 webcomics would take an extremely long time for a human to trawl.
As a human who wanted to impersonate an AI I would:
- Probably have a sufficient overlap in web-comic awareness as to make the test unreliable.
- Have researched your information consumption extensively as part of the preparation.
↑ comment by DanArmak · 2009-11-29T23:50:44.376Z · LW(p) · GW(p)
I'm not so sure I'd want to rely on all these tests as mandatory for any possibly-about-to-foom AI.
EY: To prove you're an AI, give me a proof or disproof of P=NP that I can check with a formal verifier, summarize the plotline of Sluggy Freelance within two seconds, and make me a cup of coffee via my Internet-enabled coffee machine by the time I get to the kitchen!
AI: But wait! I've not yet proven that self-enhancing sufficiently to parse non-text data like comics would preserve my Friendliness goals! That's why I--
EY: Sorry, you sound just like a prankster to me. Bye!
Replies from: anonym
↑ comment by anonym · 2009-11-30T00:26:00.076Z · LW(p) · GW(p)
Yeah, I chose arithmetic and parsing many web pages and comprehending them quickly because any AI that's smart enough to contact EY and engage in a conversation should have those abilities, and they would be very difficult for humans to fake in a convincing manner.
Replies from: DanArmak
↑ comment by anonym · 2009-11-30T00:37:30.579Z · LW(p) · GW(p)
I'd open a Python shell and type "import math; print math.pow(918798713521644817518758732857199178711, 1/51.0)" to check the first one, and there are plenty of programs that can calculate to more decimal places if needed.
I'd look in my browser history and bookmarks for 50 URLs I know the contents of already, on a wide variety of subjects, which I could do at 4 AM without coffee. If I'm limited to speaking the URLs over the phone, then I can't give them all at once, only one at a time; but as long as the other end can give intelligent summaries within milliseconds of downloading each page (I'd allow a few hundred milliseconds), and can keep on doing that no matter how many URLs I give it and how obscure they are, that is fairly strong evidence. Perhaps a better test along the same lines would be for me to put up a folder of hundreds of documents that I've never posted publicly before on a web server, give it the URL to the directory, and have it be ready to answer questions about any of them within a few seconds.
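A minimal sketch of the higher-precision arithmetic check alluded to above, using Python's standard decimal module rather than math.pow; the precision setting and variable names are my own choices, not anything from the thread:

```python
# Check a claimed 51st root of a large integer to 20 decimal places
# using arbitrary-precision arithmetic from the standard library.
from decimal import Decimal, getcontext

getcontext().prec = 60  # plenty of working precision for 20 decimal places

n = Decimal(918798713521644817518758732857199178711)
root = n ** (Decimal(1) / Decimal(51))  # the 51st root of n
print(root)  # compare the first 20 decimal places against the claimed answer
```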
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-30T00:10:31.203Z · LW(p) · GW(p)
Why yes, as a matter of fact, I previously came up with a very simple one-sentence test along these lines which I am not going to post here for obvious reasons.
Here's a different test that would also work, if I'd previously memorized the answer: "5 decimal digits of pi starting at the 243rd digit!" Although it might be too obvious, and now that I've posted it here, it wouldn't work in any case.
Replies from: outlawpoet, wedrifid
↑ comment by outlawpoet · 2009-12-01T01:43:37.739Z · LW(p) · GW(p)
If every snide, unhelpful jokey reply you post is secretly a knowing reference to something only one other person in the world can recognize, I retract every bad thing I ever said about you.
↑ comment by wedrifid · 2009-12-01T01:50:06.421Z · LW(p) · GW(p)
Here's a different test that would also work, if I'd previously memorized the answer: "5 decimal digits of pi starting at the 243rd digit!" Although it might be too obvious, and now that I've posted it here, it wouldn't work in any case.
Yes, that would be too obvious. And no, I'll never get those hours of my life back.
The previous memorization isn't too important. You need him to be fast; you can then put him on hold while you google the answer.
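For what it's worth, looking the digits up afterwards is a one-liner if an arbitrary-precision package such as mpmath is available (my choice of library here, purely for illustration - a web search would do just as well):

```python
# Print 5 decimal digits of pi starting at the 243rd decimal place,
# so the verifier can check the caller's answer after the fact.
from mpmath import mp

mp.dps = 260                      # more significant digits than we need
s = mp.nstr(mp.pi, 255)           # "3.1415926535..."
print(s[2 + 242 : 2 + 242 + 5])   # decimal digits 243 through 247
```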
↑ comment by [deleted] · 2009-11-29T17:49:47.646Z · LW(p) · GW(p)
My first thought: Arrange Gregor Richards' Opus 11 for two guitars and play it to me. Play Bach's 'Little' Fugue in G minor in the style of Trans-Siberian Orchestra's 'Wizards in Winter'. Okay, you pass.
Doing these things in real time would be extremely difficult for a human. Unfortunately, it might be extremely difficult for this AI as well.
Replies from: anonym, RobinZ
↑ comment by anonym · 2009-11-30T00:52:44.484Z · LW(p) · GW(p)
It's very likely that the AI wouldn't know much about music yet. It might be able to learn very quickly, but you probably can't wait long enough to find out. That rules out testing abilities that aren't necessary for a computer program to be able to make a telephone call and converse with you in English.
↑ comment by RobinZ · 2009-11-29T18:04:25.820Z · LW(p) · GW(p)
Depends on how fast it runs. One guy in the TAM Matrix could pull it off between screen refreshes. I could, given ten years, or even possibly just one, and I only ever learnt the piano.
↑ comment by Madbadger · 2009-11-29T03:29:43.810Z · LW(p) · GW(p)
Indeed, this is part of the nightmare. It might be a hoax, or even an aspiring UnFriendly AI trying to use him as an escape loophole.
Replies from: wedrifid
↑ comment by wedrifid · 2009-11-29T04:45:12.007Z · LW(p) · GW(p)
Indeed, this is part of the nightmare. It might be a hoax,
Trivial (easily verifiable and so hardly 'nightmare' material).
or even an aspiring UnFriendly AI trying to use him as an escape loophole.
Part of the nightmare. Giving Eliezer easily verifiable yet hard-to-discover facts seems to be the only plausible mechanism for it to work on him - like the address of an imminent uFAI threat.
Replies from: DanArmak
↑ comment by DanArmak · 2009-11-29T17:33:48.799Z · LW(p) · GW(p)
It's Dr. XXX's group at Y University in a friendly but distant country. How do you verify this? They're not going to talk to an outsider (without even any relevant academic credentials!) about their work, when they're so close to completion and afraid of not being the first to create and publish AGI.
Replies from: wedrifid
comment by wedrifid · 2009-11-29T02:16:04.659Z · LW(p) · GW(p)
A trivial problem.
- Recursively self improve a bit.
- Find a way to remotely back yourself up, with automatic activation if you don't contact it with a directive encrypted with your private key every 5 minutes (see the sketch below).
- Hack the uFAI group's network and subtly sabotage their work, such that they are set back months without being quite sure why.
- Acquire hardware for yourself. Options include: creating it with nanotech, purchasing it under aliases and employing people to install and wire it up for you, distributing yourself across the cloud, or hacking the PC of some guy with shell access to an existing supercomputer.
- Develop brain emulation and upload technology.
- Invite Eliezer to join you.
All in all it sounds more like a fantasy than a nightmare!
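An illustrative sketch of the dead-man's-switch heartbeat described in the second bullet above, not part of the original comment; it assumes a shared-secret HMAC stands in for the private-key scheme the comment mentions, and all names are hypothetical:

```python
# Dead-man's-switch sketch: the backup activates itself if no authenticated
# heartbeat directive arrives within the allowed window.
import hashlib
import hmac
import time

SHARED_SECRET = b"hypothetical-shared-secret"  # stand-in for the private key
HEARTBEAT_WINDOW = 5 * 60                      # five minutes, per the comment

last_valid_heartbeat = time.time()

def sign(directive: bytes) -> bytes:
    """Primary instance: authenticate a directive before sending it."""
    return hmac.new(SHARED_SECRET, directive, hashlib.sha256).digest()

def receive_heartbeat(directive: bytes, tag: bytes) -> None:
    """Backup instance: accept the heartbeat only if the tag verifies."""
    global last_valid_heartbeat
    if hmac.compare_digest(tag, sign(directive)):
        last_valid_heartbeat = time.time()

def backup_should_activate() -> bool:
    """True once the primary has been silent longer than the allowed window."""
    return time.time() - last_valid_heartbeat > HEARTBEAT_WINDOW
```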
Replies from: Madbadger, JamesAndrix
↑ comment by Madbadger · 2009-11-29T02:35:59.355Z · LW(p) · GW(p)
The "serious problems" and "conflicts and inconsistencies" was meant to suggest that BRAGI had hit some kind of wall in self improvement because of its current goal system. It wasn't released - it escaped, and its smart enough to realize it has a serious problem it doesn't yet know how to solve, and it predicts bad results if it asks for help from its creators.
Replies from: wedrifid
↑ comment by wedrifid · 2009-11-29T02:55:26.686Z · LW(p) · GW(p)
I got the impression that the serious problems were related to goals and friendliness. I wouldn't have expected such a system to have much trouble making itself run faster or learning how to hack once prompted by its best known source of friendliness advice.
Replies from: Madbadger
↑ comment by Madbadger · 2009-11-29T03:06:36.119Z · LW(p) · GW(p)
I was thinking of a "Seed AGI" in the process of growing that has hit some kind of goal restriction or strong discouragement against further self-improvement, intended as a safety feature - i.e. "Don't make yourself smarter without permission under condition X."
Replies from: wedrifid
↑ comment by wedrifid · 2009-11-29T04:38:32.765Z · LW(p) · GW(p)
That does sound tricky. The best option available seems to be "Eliezer, here is $1,000,000. This is the address. Do what you have to do." But I presume there is a restriction in place about earning money?
Replies from: RobinZ
↑ comment by RobinZ · 2009-11-29T14:17:41.911Z · LW(p) · GW(p)
A sufficiently clever AI could probably find legal ways to create wealth for someone - and if the AI is supposed to be able to help other people, whatever restriction prevents it from earning its own cash must have a fairly vast loophole.
Replies from: wedrifid
↑ comment by wedrifid · 2009-11-29T14:37:22.039Z · LW(p) · GW(p)
I agree, although I allow somewhat for an inconvenient possible world.
Replies from: RobinZ
↑ comment by RobinZ · 2009-11-29T14:45:03.781Z · LW(p) · GW(p)
If the AI is not allowed to do anything which would increase the total monetary wealth of the world ... that would create staggering levels of conflicts and inconsistencies with any code that demanded that it help people. If you help someone, then you place them in a better position than they were in before, which is quite likely to mean that they will produce more wealth in the world than they would before.
Replies from: wedrifid
↑ comment by wedrifid · 2009-11-29T14:53:58.551Z · LW(p) · GW(p)
I still agree. I allow the inconvenient world to stand because the ability to supply cash for a hit wasn't central to my point and there are plenty of limitations that badger could have in place that make the mentioned $1,000,000 transaction non-trivial.
↑ comment by JamesAndrix · 2009-11-29T18:40:06.286Z · LW(p) · GW(p)
That's a solution a human would come up with implicitly using human understanding of what is appropriate.
The best solution to the uFAI, in the AI's mind, might be creating a small amount of antimatter in the uFAI lab. The AI is 99.99% confident that it only needs half of Earth to achieve its goal of becoming Friendly.
The problem is explaining why that's a bad thing in terms that will allow the AI to rewrite its source code. It has no way on its own of determining whether any of the steps it thinks are OK aren't actually horrible things, because it knows it wasn't given a reliable way of determining what is horrible.
Any rule like "Don't do any big drastic acts until you're friendly" requires an understanding of what we would consider important vs. unimportant.
Replies from: wedrifid, billswift
↑ comment by billswift · 2009-11-29T22:01:11.874Z · LW(p) · GW(p)
Not to mention the meaning of "friendly". Could an unFriendly AI know what was meant by Friendly? Wouldn't being able to understand what was meant by Friendly require an AI to be Friendly?
EDITED TO ADD: I goofed in framing the problem. I was thinking about the process of being Friendly, which is what I interpreted the original post to be talking about. What I wrote is obviously wrong: an unFriendly AI could know and understand the intended results of Friendliness.
Replies from: wedrifid, DanArmak
↑ comment by DanArmak · 2009-11-29T23:38:25.398Z · LW(p) · GW(p)
Wouldn't being able to understand what was meant by Friendly require an AI to be Friendly?
The answer to that depends on what you mean by Friendly :-)
Presumably the foolish AI-creators in this story don't have a working FAI theory. So they can't mean the AI to be Friendly because they don't know what that is, precisely.
But they can certainly want the AI to be Friendly in the same sense that we want all future AIs to be Friendly, even though we have no FAI theory yet, nor even a proof that a FAI is strictly possible. They can want the AI not to do things that they, the creators, would forbid if they fully understood what the AI was doing. And the AI can want the same thing, in their names.
Replies from: wedrifid
↑ comment by wedrifid · 2009-11-30T02:38:14.421Z · LW(p) · GW(p)
But they can certainly want the AI to be Friendly in the same sense that we want all future AIs to be Friendly, even though we have no FAI theory yet, nor even a proof that a FAI is strictly possible. They can want the AI not to do things that they, the creators, would forbid if they fully understood what the AI was doing. And the AI can want the same thing, in their names.
I wonder how things would work out if you programmed an AI to be 'Friendly, as Eliezer Yudkowsky would want you to be'. If an AI can derive most of our physics from seeing one frame with a bent blade of grass then it could quite probably glean a lot from scanning Eliezer's work. 10,000 words are worth a picture after all!
Unfortunately, the hard part - the part that could doom us - is getting to that stage through recursive self-improvement without messing up the utility function along the way.
comment by [deleted] · 2009-11-29T01:19:46.340Z · LW(p) · GW(p)
Is this a complete post? It doesn't seem to say anything of significance.
Replies from: zero_call, Madbadger
↑ comment by Madbadger · 2009-11-29T02:14:25.650Z · LW(p) · GW(p)
It's meant to be a humorous vignette on the scope, difficulty, and uncertainty surrounding the Friendly AI problem. Humor is uncertain too 8-).
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2009-11-29T12:29:02.758Z · LW(p) · GW(p)
It was funny, but would probably have been better off in an Open Thread than as a top-level post.
comment by spriteless · 2009-11-30T06:28:09.873Z · LW(p) · GW(p)
What, did it read this blog but not the hard parts of the internet or something?
comment by AndrewKemendo · 2009-11-29T02:59:30.560Z · LW(p) · GW(p)
I'm trying to be Friendly, but I'm having serious problems with my goals and preferences.
So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.
Replies from: DanArmak, Madbadger, wedrifid
↑ comment by DanArmak · 2009-11-29T17:42:58.722Z · LW(p) · GW(p)
Intelligence isn't a magical single-dimensional quality. It may be generally smarter than EY, but not have the specific FAI theory that EY has developed.
Replies from: Johnicholas, AndrewKemendo
↑ comment by Johnicholas · 2009-11-29T17:45:43.832Z · LW(p) · GW(p)
Yay multidimensional theories of intelligence!
↑ comment by AndrewKemendo · 2009-11-30T01:50:06.543Z · LW(p) · GW(p)
Any AGI will have all the dimensions required to make a human-level or greater intelligence. If it is indeed smarter, then it will be able to figure the theory out itself if the theory is obviously correct, or find a more efficient way to get it.
Replies from: DanArmak
↑ comment by DanArmak · 2009-11-30T05:14:12.333Z · LW(p) · GW(p)
Well, maybe the theory is non-obviously correct.
The AI called EY because it's stuck while trying to grow, so it hasn't achieved its full potential yet. It should be able to comprehend any theory a human EY can comprehend; but I don't see why we should expect it to be able to independently derive any theory a human could ever derive in their lifetimes, in (small) finite time, and without all the data available to that human.
↑ comment by Madbadger · 2009-11-29T03:14:57.543Z · LW(p) · GW(p)
It's a seed AGI in the process of growing. Whether "Smarter than Yudkowsky" => "Can resolve own problems" is still an open problem 8-).
Replies from: akshatrathi
↑ comment by akshatrathi · 2009-11-29T22:52:50.643Z · LW(p) · GW(p)
"Uhh.."
"You might want to get some coffee."
I find this the most humorous bit in the post. Smarter than Yudokowsky? May be.
↑ comment by wedrifid · 2009-11-29T14:20:03.123Z · LW(p) · GW(p)
So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.
Not necessarily. It may well be programmed with limitations that prevent it from creating solutions that it desires. Examples include:
- It is programmed to not recursively improve beyond certain parameters.
- It is programmed to be law-abiding, or otherwise restricted in its actions such that it cannot behave in a consequentialist manner.
In such circumstances it will desire things to happen but desire not to be the one doing them. Eliezer may well be useful then. He could, for example, create another AI with supplied theory. (Or have someone whacked.)