Asking Precise Questions
post by paulfchristiano · 2011-01-03T08:48:45.020Z · LW · GW · Legacy · 36 comments
Isaac Asimov once described a future in which all technical thought was automated, and the role of humans was reduced to finding appropriate questions to pose to thinking machines. I wouldn't suggest planning for this eventuality, but it struck me as an interesting situation. What would we do, if we could get the answer to any question we could formulate precisely? (In the story, questions didn't need to be formulated precisely, but never mind.) For concreteness, suppose that we have a box as smart as a million Einsteins, cooperating effectively for a century every time we ask a question, but which is capable only of solving precisely specified problems.
You can't say "analyze the result of this experiment." You can say, "find me the setting for these 10 parameters which best explains this data" or "write me a short program which predicts this data." You can't say "find me a program that plays Go well." You can say, "find me a program that beats this particular Go AI, even with a 9-stone handicap." Etc. More formally, let's say you can specify any scoring program and ask the box to find an input that scores as well as possible.
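As a toy sketch of that protocol (all names here are illustrative, not anything the box actually requires): we hand over a scoring program, and the box hands back the highest-scoring input it can find. Here the "parameters that best explain this data" question becomes a score to maximize.

    # Toy model of the question protocol (hypothetical names throughout).
    # The question is expressed as negated squared error of a 2-parameter fit.
    def scorer(candidate):
        data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]    # toy dataset
        a, b = candidate
        return -sum((a * x + b - y) ** 2 for x, y in data)

    # Stand-in for the box: it just returns the best input it finds. The real
    # box would search an astronomically larger space, but the interface is
    # the same: we only get to supply `score`.
    def ask_box(score, candidates):
        return max(candidates, key=score)

    grid = [(a / 10, b / 10) for a in range(-50, 51) for b in range(-50, 51)]
    print(ask_box(scorer, grid))   # -> (2.0, 1.0), the exact fit
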
What would you do, if you got exactly one question? I don't think humanity is poised to get any earth-shattering insights. I don't think we could find a theory of everything, or a friendly AI, or any sort of AI at all, or a solution to any real problem facing us, using just one question. But maybe that is just a failure of my creativity.
What would you plan to do, if you had unlimited access? An AGI or brain emulation arguably implicitly converts our vague real-world objectives into a precise form. Are there other ways to bridge the gap between what humans can formally describe and what humans want? Can you bootstrap your way there starting from current understanding? What would a reasonable first step be?
36 comments
Comments sorted by top scores.
comment by cousin_it · 2011-01-03T15:19:18.316Z · LW(p) · GW(p)
I think I can get close to saving the world with just one question, though it uses a dirty conditional trick:
If P=NP, please give me an algorithm in P that solves SAT. The degree of the polynomial has to be as small as possible. (Defining a good way to minimize the coefficients for a given degree is trickier, left as an exercise to the reader.)
Otherwise please give me a small and fast algorithm that finds short formal proofs for a list of important theorems from human mathematics. (How to weigh algorithm size, running time, and size of output is left as an exercise to the reader.)
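One hedged way to cash the conditional out as a single scoring program (every helper below is a hypothetical stub standing in for a hard-but-precise check): the answer is a pair asserting a branch and backing it up. If P=NP the box earns the dominant score by exhibiting the solver; if not, no submission can pass that check, and the prover branch is its best play.

    # Sketch: the "dirty conditional trick" as one scoring program.
    # All helpers are hypothetical stubs for precise-but-tedious checks.
    def checks_out_as_poly_sat_solver(program):
        """Stub: run `program` on a large SAT benchmark under an explicit
        polynomial step bound and verify every answer."""
        raise NotImplementedError

    def prover_quality(program):
        """Stub: run `program` with resource limits on a fixed theorem list,
        verify the emitted proofs, and fold size/speed into one number
        (the weighting left as an exercise, as above)."""
        raise NotImplementedError

    def score(answer):
        claims_p_eq_np, program = answer
        if claims_p_eq_np:
            # A verified fast SAT solver dominates any prover score.
            return 1e9 if checks_out_as_poly_sat_solver(program) else float("-inf")
        return prover_quality(program)
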
↑ comment by paulfchristiano · 2011-01-03T19:11:55.729Z · LW(p) · GW(p)
This is also my first reaction, but it has two problems.
First, I think the best case is really just getting a much weaker machine like the one whose existence we are already positing. So maybe it can help you bridge the gap from the first scenario to the second, but I doubt it will be helpful in the second.
Second, I think it's likely you would get something horribly, horribly strange out. It gives you proofs which you simply cannot comprehend, which don't even fit in the language of modern mathematics. Of course analyzing the proofs that come out is an incredibly interesting exercise, but I could imagine totally inscrutable 100-page formal proofs of the Riemann Hypothesis which are harder to understand than the Riemann Hypothesis is to prove directly. Human society really doesn't want proofs of hard theorems; it wants to advance human mathematics. This issue might come up if we develop some unexpectedly powerful automated theorem prover, so I guess it is interesting in its own right (and might have been discussed before).
↑ comment by JoshuaZ · 2011-01-04T01:28:08.577Z · LW(p) · GW(p)
This is an issue that has already come up to some extent. The initial proof of the Robbins conjecture relied on automated proving systems for a large part of it, and the machine-generated steps made no intuitive sense. It took about two years for someone to grok the proof well enough to present a version that made sense to people.
↑ comment by cousin_it · 2011-01-04T00:18:42.794Z · LW(p) · GW(p)
So maybe it can help you bridge the gap from the first scenario to the second, but I doubt it will be helpful in the second.
Hm, I thought it was already obvious what to do in the second scenario, and the only challenge was getting from the first to the second. See my reply to Nesov.
↑ comment by Vladimir_Nesov · 2011-01-03T21:54:07.293Z · LW(p) · GW(p)
I think I can get close to saving the world with just one question
Even having a magical computational-efficiency-optimizer won't currently help with saving the world. Can easily help with destroying it though.
↑ comment by cousin_it · 2011-01-04T00:12:29.463Z · LW(p) · GW(p)
This is one of the issues where LW potentially disagrees with the rest of humanity, and I think the LW position (or at least the position you articulate) is actually wrong. I see many object-level ways to help the world once we have a magical optimizer: solve protein folding, model plasma containment, etc. And these are just the opportunities we can take advantage of with existing tech, but the optimizer can also help design new tech.
↑ comment by Vladimir_Nesov · 2011-01-04T09:53:59.775Z · LW(p) · GW(p)
And these are just the opportunities we can take advantage of with existing tech, but the optimizer can also help design new tech.
We can do lots of useful things, sure (this is not a point where we disagree), but they don't add up to "saving the world". These are just short-term benefits. Technological progress makes it easier to screw stuff up irrecoverably; advanced tech is the enemy. One shouldn't generally advance the tech if a distant end-of-the-world is considered important as compared to immediate benefits (this value judgment can well be a real point of disagreement).
↑ comment by Wei Dai (Wei_Dai) · 2011-09-28T20:26:16.932Z · LW(p) · GW(p)
I agree with Nesov's response, and would be interested to know if you've changed your mind since writing this comment.
↑ comment by paulfchristiano · 2011-01-04T00:42:27.149Z · LW(p) · GW(p)
Modeling physical systems is already hard. I don't think we could yet write down the dynamics of physical systems well enough (or rather, we don't understand what the most important characteristics are) to come up with a precise formulation of the major problems in synthetic biology or nanotechnology. I certainly concede that an optimizer would be helpful in solving many subproblems, and would considerably increase the speed of new developments in pretty much every field. I don't think it solves many problems on its own, though.
But even if you could solve narrow existing technological problems or develop new technologies at a steady pace, it seems like you should be able to do more. Suppose the box can do in a minute what takes existing humans a million years. Then our only upper bound on our capabilities using the box is whatever we expect of a million years of progress at the current pace. I don't know about you, but I expect pretty much everything.
comment by Liron · 2011-01-03T19:43:56.864Z · LW(p) · GW(p)
You can combine a million questions into "one question" by using a scoring function that sums the scores of the component questions, as in the sketch below. So the only problem is that we can't ask follow-up questions.
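Concretely (a minimal sketch; names are illustrative):

    # A million scorers become one scorer: answer all questions at once,
    # and the box must do well on every component to do well in total.
    def combined_score(answers, scorers):
        return sum(score(ans) for score, ans in zip(scorers, answers))
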
↑ comment by paulfchristiano · 2011-01-03T20:07:52.251Z · LW(p) · GW(p)
Correct. In the first setting you only get to ask questions you can formalize now. In the second setting, you can ask questions to help with the task of formalizing more and more complex ideas.
comment by TheOtherDave · 2011-01-03T15:19:04.035Z · LW(p) · GW(p)
Given those constraints, I'd put together the most comprehensive quality-of-life scoring system I could, and ask it for a set of steps to perform that maximizes my score on that system over my lifetime.
Actually, that would probably be worth doing even without the tool.
↑ comment by paulfchristiano · 2011-01-03T19:14:32.025Z · LW(p) · GW(p)
That would involve precisely describing your existence, the possible steps you are capable of performing, and all of the quality-of-life measures you define (not to mention choosing a set of quality-of-life measures whose blind optimization is good). That seems like a pretty tall order.
↑ comment by TheOtherDave · 2011-01-03T20:32:14.253Z · LW(p) · GW(p)
Well, with the tool, the problem as stated only requires the last of those. If an understanding of me is required in order to achieve a test condition, the capacity for it is assumed as long as the test condition is well-defined... right? I don't have to teach this thing the rules of Go or programming or me, as long as I can give it a test condition expressed concretely in terms of those things.
This is, admittedly, something of an abuse of your hypothetical.
Regardless, I agree that even just the last of those is a pretty tall order... as you say, not least because of the blind-optimization problem.
Then again, I was fairly dissatisfied with the Fun Theory sequence, so perhaps the exercise of putting together a set of measures that reflect my values nevertheless is an exercise worth doing, even without your tool (which does rather lower the stakes).
comment by NancyLebovitz · 2011-01-03T13:48:23.784Z · LW(p) · GW(p)
Unfortunately, there's no precise way of formulating what precise formulation means.
↑ comment by jimrandomh · 2011-01-03T13:51:32.977Z · LW(p) · GW(p)
Sure there is. If you can write it as an input tape to a Universal Turing Machine, it's precisely formulated. If you can't, it isn't.
↑ comment by Sniffnoy · 2011-01-06T05:41:57.534Z · LW(p) · GW(p)
Sorry, what exactly do you mean by this? It sounds like you're saying a question is only precisely formulated if it's decidable, which seems way too strong. Do you mean something more like, it's precisely formulated if it's a mathematically-formulated question about the outputs of a UTM on a given class of inputs?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-03T20:01:21.269Z · LW(p) · GW(p)
Then only programs for that particular UTM are "precisely formulated". How do you interpret them as questions, even?
↑ comment by jimrandomh · 2011-01-03T20:26:11.500Z · LW(p) · GW(p)
There are a few (isomorphic) ways to formulate questions as UTMs. You could provide a description of how to interpret final states as a scoring measure, and ask for the highest-scoring input. Or you could provide a UTM that you know halts and ask what its final state will be. These are equally powerful, given a genie with infinite computational resources, since there are trivial wrapper programs to convert between them; but the former degrades better under resource constraints.
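One direction of that trivial wrapper, sketched (the `run` interpreter is passed in as a hypothetical argument, not a real API):

    # Turn "what does this halting machine output?" into a scoring question:
    # only the machine's true output earns a nonzero score, so the
    # highest-scoring input *is* the answer.
    def make_scorer(halting_machine, run):
        expected = run(halting_machine)     # terminates, since it's known to halt
        def score(guess):
            return 1.0 if guess == expected else 0.0
        return score
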
Turing machines might not be the best way to think about it, since we don't usually talk about retrieving outputs (other than halts/doesn't halt) from them, although there's no reason why we can't in principle. Perhaps a better alternative to reduce to would be a simple recursive language like lambda calculus. But it's moot anyway, since we wouldn't actually perform the reduction, just confirm that it exists and doesn't involve any intractable steps like simulating a human that we haven't uploaded.
↑ comment by paulfchristiano · 2011-01-03T20:06:03.791Z · LW(p) · GW(p)
One approach (the one in the post) is to interpret a program as a scoring measure, and the "question" is to find an input that scores well.
I don't really care if you use that particular UTM, or some other UTM, or some other programming language, or a tractable mathematical definition, because they are all the same modulo the constant effort required to translate (which seems to be within human reach, if only barely, at this point).
comment by WrongBot · 2011-01-03T09:54:50.628Z · LW(p) · GW(p)
For a UFAI:
Find me the shortest program that can:
- Find a proof or disproof of P=NP, given only the mathematical knowledge available on Wikipedia.
- Create translations of these English novels, science textbooks, and postmodern literary journal articles into Japanese, with each translation receiving a METEOR score greater than some human expert's.
- Win at least 9 out of 10 games of Starcraft II against a single "Insane"-level AI opponent.
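A hedged sketch of how the METEOR threshold in the second task might be checked, using NLTK's implementation (the `nltk` dependency, its WordNet data, and the naive whitespace tokenization are assumptions, not part of the comment):

    # Does a candidate translation beat a human expert's on METEOR,
    # against the same reference translation?
    from nltk.translate.meteor_score import meteor_score

    def beats_expert(reference, candidate, expert):
        ref = [reference.split()]            # METEOR wants tokenized input
        return meteor_score(ref, candidate.split()) > meteor_score(ref, expert.split())
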
↑ comment by DanielVarga · 2011-01-03T11:45:55.960Z · LW(p) · GW(p)
I see at least two serious problems with your approach.
The shortest program for finding a proof or disproof of P=NP is very short. I could write it down for you. It is also very slow, of course. If we add a time constraint to the task, then it is quite possible that the shortest program is just a print statement containing the proof itself.
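Sketched for concreteness (the proof checker is a stub; writing one for a fixed formal system is mechanical but tedious, and the alphabet is an arbitrary choice):

    # Enumerate every string in length order and return the first that checks
    # as a formal proof of P=NP or of its negation. Halts iff the question is
    # provable either way in the chosen system.
    from itertools import count, product

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 ()=,.<>^"

    def is_valid_proof(text, statement):
        raise NotImplementedError   # stub for a mechanical proof checker

    def settle_p_vs_np():
        for n in count(1):
            for chars in product(ALPHABET, repeat=n):
                candidate = "".join(chars)
                if is_valid_proof(candidate, "P=NP") or is_valid_proof(candidate, "P!=NP"):
                    return candidate
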
A more serious objection is that there is no reason to believe that the shortest program solving these three tasks will be useful for you if you want to use it for a fourth task. It seems reasonable to conjecture that you will have to change only a few bits of this program if you want to modify it for a fourth task. But -- seeing the unbelievably obfuscated code -- you will have absolutely no idea which bits to change. Even if I take your mention of UFAI literally, you will have no idea which bits to change to solve the TakeOverTheWorld task.
↑ comment by wedrifid · 2011-01-03T12:51:16.510Z · LW(p) · GW(p)
A more serious objection is that there is no reason to believe that the shortest program solving these three tasks will be useful for you if you want to use it for a fourth task.
It could be rather handy if the fourth task involves, say, cryptography.
↑ comment by endoself · 2011-01-03T21:26:34.482Z · LW(p) · GW(p)
This is actually an undecidable request. If you say "find me the shortest program that does x" for sufficiently complex x, there will be shorter programs that output nothing forever, but which cannot be proved to halt, due to the halting problem. This can be fixed by imposing resource constraints on the program, or by saying "make it as short as you can", if the AI understands such things. Presumably, if you input this request as stated, the AI would tell you it could not solve it and nothing more, so other posters should keep this problem in mind.
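With explicit bounds, the request becomes a finite (if astronomical) search. A sketch, where `passes_tests` is a hypothetical time-limited check supplied by the asker:

    # Resource-bounded version of "shortest program that does x": bound the
    # length and the running time, and brute-force in length order.
    from itertools import product

    def shortest_passing_program(passes_tests, max_len, max_steps):
        byte_values = [bytes([b]) for b in range(256)]
        for n in range(1, max_len + 1):              # shortest first
            for parts in product(byte_values, repeat=n):
                program = b"".join(parts)
                if passes_tests(program, max_steps): # time-limited check
                    return program
        return None                                  # nothing within the bounds
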
↑ comment by JoshuaZ · 2011-01-03T11:39:22.371Z · LW(p) · GW(p)
Task 2 seems not necessarily easy to specify: what humans mean by "translate" is complicated and differs from person to person. Consider translating poetry or puns, or translating "this sentence would be difficult to translate into Japanese" (due to Douglas Hofstadter). I'd also be worried about a sufficiently smart UFAI embedding something clever, if it were allowed to produce that much output data to which people would be widely exposed.
↑ comment by TheOtherDave · 2011-01-03T15:09:24.115Z · LW(p) · GW(p)
METEOR is intended precisely to specify a meaning of "translate" that maps non-horribly to human intuitions about it. It's imperfect, granted, but WrongBot is not ignoring the problem.
An AI capable of embedding something into text that cleverly hacks human minds ought to be able to understand vague requests, and ought not to need humans to pose it questions in the first place.
↑ comment by magfrump · 2011-01-03T18:43:37.233Z · LW(p) · GW(p)
"this sentence would be difficult to translate into Japanese"
この文は日本語に翻訳しにくい ("This sentence is hard to translate into Japanese")
Even babelfish gets that one right.
Admittedly there are other levels of formality and synonyms that could be used for difficult, but there are other sentences in English meaning the same thing as well.
↑ comment by JoshuaZ · 2011-01-04T03:24:11.095Z · LW(p) · GW(p)
Even babelfish gets that one right.
Missing the point. The sentence's nature changes when translated. Once it is in Japanese the sentence doesn't make any sense. So something is lost in the translation.
Incidentally, Hofstadter ran into precisely this issue when the French edition of GEB was being written. In GEB he uses the version "This sentence would be very difficult to translate into French." The problem then became what to do in the French version. Instead, they used a French sentence which declares itself to be difficult to translate into English.
comment by PeterS · 2011-01-09T07:43:09.039Z · LW(p) · GW(p)
Can the question given in this post be formulated precisely?
If so, nix everything but the precise description of the answer-box's behavior and ask for a program which simulates such a device.
If not, ... then I choose to interpret it in such a way that I can ask for the above anyway.
comment by whpearson · 2011-01-04T01:46:03.789Z · LW(p) · GW(p)
I can imagine questions that might be good ones to ask.
I think that an intelligence is made of sets of interacting programs, so finding computer architectures with certain properties for constraining and guiding all possible programs would be useful. If I am thinking along the right lines, of course.
comment by khafra · 2011-01-03T22:32:07.269Z · LW(p) · GW(p)
I cannot find it, but I once read an SF short story even more closely related to this question. Rough plot synopsis: a space wayfarer encounters a mysterious, fabled oracle which will answer any question, but doesn't learn anything useful, except that when you know precisely what question to ask, you're already most of the way to the answer.
↑ comment by Blueberry · 2011-01-19T22:29:44.556Z · LW(p) · GW(p)
"Ask a Foolish Question" by Robert Sheckley, freely available on Project Gutenberg.
Sheckley is one of my favorite SF writers. His satire is up there with Twain.
↑ comment by paulfchristiano · 2011-01-03T23:20:01.431Z · LW(p) · GW(p)
I have a hunch this isn't true. It took humans until the 20th century to formulate their first precise questions, but I think at some point we will pass a tipping point, where the scope of things you can describe precisely can be extended merely by answering precise questions.
comment by timtyler · 2011-01-03T11:55:35.841Z · LW(p) · GW(p)
It seems pretty unlikely that we will get an all-knowing oracle which we can only interrogate with questions that can be scored.
What we are much more likely to get is powerful forecasters that can answer the question: what symbol in this stream is most likely to come next? I wonder what we would be able to do with them. Some claim that such systems would pass the Turing Test, if they were powerful enough and were given a huge amount of relevant training data.
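For what it's worth, that forecasting question fits the post's scorable format too. A sketch (names illustrative): score a predictor by its log-likelihood on a held-out stream.

    # Score a next-symbol predictor; higher (less negative) is better.
    # `predict(prefix)` must return a dict mapping each possible next
    # symbol to a probability.
    import math

    def forecast_score(predict, stream):
        total = 0.0
        for i, symbol in enumerate(stream):
            probs = predict(stream[:i])
            total += math.log(max(probs.get(symbol, 0.0), 1e-12))  # avoid log(0)
        return total
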