# Asking Precise Questions

post by paulfchristiano · 2011-01-03T08:48:45.020Z · score: 6 (7 votes) · LW · GW · Legacy · 36 comments

Isaac Asimov once described a future in which all technical thought was automated, and the role of humans was reduced to finding appropriate questions to pose to thinking machines. I wouldn't suggest planning for this eventuality, but it struck me as an interesting situation. What would we do, if we could get the answer to any question we could formulate precisely? (In the story, questions didn't need to be formulated precisely, but never mind.) For concreteness, suppose that we have a box as smart as a million Einsteins, cooperating effectively for a century every time we ask a question, but which is capable only of solving precisely specified problems.

You can't say "analyze the result of this experiment." You can say, "find me the setting for these 10 parameters which best explains this data" or "write me a short program which predicts this data." You can't say "find me a program that plays Go well." You can say, "find me a program that beats this particular Go AI, even with a 9-stone handicap." Etc. More formally, let's say you can specify any scoring program and ask the box to find an input that scores as well as it can.
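To make the interface concrete, here is a toy sketch (not from the post; the brute-force search and the particular scoring function are my illustrative assumptions): the questioner supplies a scoring program, and the box returns the best-scoring input it can find. A short exhaustive loop stands in for the million Einsteins.

```python
from itertools import product

def ask_box(score, max_len=8):
    """Toy 'box': brute-force search over short bit-strings for the
    input that maximizes the caller's scoring program."""
    best, best_score = None, float("-inf")
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            candidate = "".join(bits)
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
    return best

# Illustrative "question": reward the longest run of consecutive 1s,
# with a small penalty per character of input.
def longest_run_score(candidate):
    return max(len(run) for run in candidate.split("0")) - 0.1 * len(candidate)

answer = ask_box(longest_run_score)  # the all-ones string of maximal length
```

The point of the formalism is that everything the questioner wants has to be packed into `score`; the box never sees the intent behind it.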

What would you do, if you got exactly one question? I don't think humanity is poised to get any earth-shattering insights. I don't think we could find a theory of everything, or a friendly AI, or any sort of AI at all, or a solution to any real problem facing us, using just one question. But maybe that is just a failure of my creativity.

What would you plan to do, if you had unlimited access? An AGI or brain emulation arguably implicitly converts our vague real world objectives into a precise form. Are there other ways to bridge the gap between what humans can formally describe and what humans want? Can you bootstrap your way there starting from current understanding? What is any reasonable first step?

## 36 comments


I think I can get close to saving the world with just one question, though it uses a dirty conditional trick:

If P=NP, please give me an algorithm in P that solves SAT. The degree of the polynomial has to be as small as possible. (Defining a good way to minimize the coefficients for a given degree is trickier, left as an exercise to the reader.)

Otherwise please give me a small and fast algorithm that finds short formal proofs for a list of important theorems from human mathematics. (How to weigh algorithm size, running time, and size of output is left as an exercise to the reader.)

This is also my first reaction, but it has two problems.

First, I think the best case is really just getting a much weaker machine like the one whose existence we are already positing. So maybe it can help you bridge the gap from the first scenario to the second, but I doubt it will be helpful in the second.

Second, I think it's likely you would get something horribly, horribly strange out. It gives you proofs which you simply cannot comprehend, that don't even fit in the language of modern mathematics. Of course analyzing the proofs that come out is an incredibly interesting exercise, but I think I could imagine totally inscrutable 100-page formal proofs of the Riemann Hypothesis which are harder to understand than the Riemann Hypothesis is to prove directly. Human society really doesn't want proofs of hard theorems; it wants to advance human mathematics. This issue might come up if we develop some unexpectedly powerful automated theorem prover, so I guess it is interesting in its own right (and might have been discussed before).

This is an issue that has already come up to some extent. The initial proof of the Robbins conjecture used automated proving systems for a large amount of it, and they made no intuitive sense. It took about 2 years for someone to grok the proof well enough to present a version that made sense to people.

> So maybe it can help you bridge the gap from the first scenario to the second, but I doubt it will be helpful in the second.

Hm, I thought it was already obvious what to do in the second scenario, and the only challenge was getting from the first to the second. See my reply to Nesov.

> I think I can get close to saving the world with just one question

Even having a magical computational-efficiency-optimizer won't currently help with saving the world. Can easily help with destroying it though.

This is one of the issues where LW potentially disagrees with the rest of humanity, and I think the LW position (or at least the position you articulate) is actually wrong. I see many object-level ways to help the world once we have a magical optimizer: solve protein folding, model plasma containment, etc. And these are just the opportunities we can take advantage of with existing tech, but the optimizer can also help design new tech.

> And these are just the opportunities we can take advantage of with existing tech, but the optimizer can also help design new tech.

We can do lots of useful things, sure (this is not a point where we disagree), but they don't add up towards "saving the world". These are just short-term benefits. Technological progress makes it easier to screw stuff up irrecoverably, advanced tech is the enemy. One shouldn't generally advance the tech if distant end-of-the-world is considered important as compared to immediate benefits (this value judgment can well be a real point of disagreement).

I agree with Nesov's response, and would be interested to know if you've changed your mind since writing this comment.

Modeling physical systems is already hard. I don't think we could yet write down the dynamics of the physical systems well enough (or rather, we don't understand what the most important characteristics are) to come up with a precise formulation of the major problems in synthetic biology or nanotechnology. I certainly concede that an optimizer would be helpful in solving many subproblems, and would considerably increase the speed of new developments in pretty much every field. I don't think it solves many problems on its own though.

But even if you could solve narrow existing technological problems or develop new technologies at a steady pace, it seems like you should be able to do more. Suppose the box can do in a minute what takes existing humans a million years. Then our only upper bound on our capabilities using the box is whatever we expect of a million years of progress at the current pace. I don't know about you, but I expect pretty much everything.

You can combine a million questions into "one question" by using a scoring function that sums the scores of the component questions. So the only problem is that we can't ask follow-up questions.
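The combination described here can be written down directly (a sketch; the function names and toy component questions are mine): a combined scoring program takes a tuple of candidate answers, one per component question, and returns the sum of the component scores.

```python
def combine(scorers):
    """Fold many scoring programs into one. A candidate answer is now a
    tuple, one entry per component question, scored by the sum."""
    def combined(candidates):
        return sum(s(c) for s, c in zip(scorers, candidates))
    return combined

# Two toy component questions: the largest number up to a cap, and the
# longest string up to a cap; answers past the cap are penalized.
q1 = lambda n: n if n <= 10 else -1
q2 = lambda s: len(s) if len(s) <= 3 else -1
both = combine([q1, q2])

both((10, "abc"))  # scores 13, the joint optimum
```

Because the components are independent, maximizing the sum maximizes each component separately, which is exactly why this trick packs many questions into one ask.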

Correct. In the first setting you only get to ask questions you can formalize now. In the second setting, you can ask questions to help with the task of formalizing more and more complex ideas.

Given those constraints, I'd devise the most comprehensive quality-of-life scoring system I could, and ask the box for a set of steps to perform that maximizes my score on that system over my lifetime.

Actually, that would probably be worth doing even without the tool.

That would involve precisely describing your existence, the possible steps you are capable of performing, and all of the quality-of-life measures you define (not to mention choosing a set of quality-of-life measures whose blind optimization is good). That seems like a pretty tall order.

Well, *with* the tool, the problem as stated only requires the last of those. If an understanding of me is required in order to achieve a test condition, the capacity for it is assumed as long as the test condition is well-defined... right? I don't have to teach this thing the rules of Go or programming or me, as long as I can give it a test condition expressed concretely in terms of those things.

This is, admittedly, something of an abuse of your hypothetical.

Regardless, I agree that even just the last of those is a pretty tall order... as you say, not least because of the blind-optimization problem.

Then again, I was fairly dissatisfied with the Fun Theory sequence, so perhaps the exercise of putting together a set of measures that reflect my values nevertheless is an exercise worth doing, even without your tool (which does rather lower the stakes).

For a UFAI:

Find me the shortest program that can

- Find a proof or disproof of P=NP, given only mathematical knowledge available on Wikipedia.
- Create translations of these English novels, science textbooks, and postmodern literary journal articles into Japanese, with each translation receiving a METEOR score greater than some human expert's.
- Win at least 9 out of 10 games of Starcraft II against a single "Insane"-level AI opponent.

I see at least two serious problems with your approach.

The shortest program for finding a proof or disproof of P=NP is very short. I could write it down for you. It is also very slow, of course. If we add a time constraint to the task, then it is quite possible that the shortest program is just a print statement containing the proof itself.
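The "very short" program alluded to here is just proof enumeration. A hedged sketch (the verifier below is a toy stand-in that pattern-matches so the sketch runs; a real one would check derivations in a formal proof system):

```python
from itertools import product

ALPHABET = "01"  # stand-in for the symbols of a formal proof system

def verifies(candidate, statement):
    """Toy proof checker: a real one would verify that `candidate` is a
    valid derivation of `statement` (or its negation) in some formal
    system. Here we simply pattern-match so the sketch is runnable."""
    return candidate == statement

def search_for_proof(statement, max_len=12):
    """Enumerate candidate proofs in length order and return the first
    one the checker accepts. Very short to write, absurdly slow to run."""
    for n in range(1, max_len + 1):
        for symbols in product(ALPHABET, repeat=n):
            candidate = "".join(symbols)
            if verifies(candidate, statement):
                return candidate
    return None
```

The enumeration structure is the whole program; all the difficulty lives in writing the verifier, and all the cost lives in the exponential search — which is the sense in which it is "very slow, of course."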

A more serious objection is that there is no reason to believe that the shortest program solving these three tasks will be useful for you if you want to use it for a fourth task. It seems reasonable to conjecture that you will have to change only a few bits of this program if you want to modify it for a fourth task. But -- seeing the unbelievably obfuscated code -- you will have absolutely no idea which bits to change. Even if I take your mention of UFAI literally, you will have no idea which bits to change to solve the TakeOverTheWorld task.

> A more serious objection is that there is no reason to believe that the shortest program solving these three tasks will be useful for you if you want to use it for a fourth task.

It could be rather handy if the fourth task involves, say cryptography.

This is actually an undecidable question. If you ask "find me the shortest program that does X" for sufficiently complex X, there will be shorter programs that output nothing forever, but which cannot be proved to halt, due to the halting problem. This can be fixed by imposing resource constraints on the program or saying "make it as short as you can," if the AI understands such things. Presumably, if you input this request as stated, the AI would tell you it could not solve it and nothing more, so other posters should keep this problem in mind.

Task 2 seems hard to specify precisely: what humans mean by "translate" is complicated and differs from person to person. Consider translating poetry or puns, or translating "this sentence would be difficult to translate into Japanese" (due to Douglas Hofstadter). I'd also be worried about a sufficiently smart UFAI embedding something clever in its output, if it was allowed to produce that much data that people would be widely exposed to.

METEOR is intended precisely to specify a meaning of "translate" that maps non-horribly to human intuitions about it. It's imperfect, granted, but WrongBot is not ignoring the problem.

An AI capable of embedding something into text that cleverly hacks human minds ought be able to understand vague requests, and ought not need humans to pose it questions in the first place.

> "this sentence would be difficult to translate into Japanese"

この文は日本語に翻訳しにくい ("This sentence is hard to translate into Japanese")

Even babelfish gets that one right.

Admittedly there are other levels of formality and synonyms that could be used for "difficult," but there are other sentences in English meaning the same thing as well.

> Even babelfish gets that one right.

Missing the point. The sentence's nature changes when translated. Once it is in Japanese the sentence doesn't make any sense. So something is lost in the translation.

Incidentally, Hofstadter ran into precisely this issue when the French edition of GEB was being written. In GEB he uses the version "This sentence would be very difficult to translate into French." The problem then became what to do in the French version. They made instead a French sentence which declares itself to be difficult to translate into English.

Can the question given in this post be formulated precisely?

If so, nix everything but the precise description of the answer-box's behavior and ask for a program which simulates such a device.

If not, ... then I choose to interpret it in such a way that I can ask for the above anyway.

I can imagine questions that might be potentially good ones to ask.

I think that an intelligence is made of sets of interacting programs, so finding computer architectures whose properties constrain and guide the programs they can run would be useful. If I am thinking along the right lines, of course.

I cannot find it, but I once read an SF short story even more closely related to this question. Rough plot synopsis: a space wayfarer encounters a mysterious, fabled oracle which will answer any question, but doesn't learn anything useful except that when you know precisely what question to ask, you're already most of the way to the answer.

"Ask a Foolish Question" by Robert Sheckley, freely available on Project Gutenberg.

Sheckley is one of my favorite SF writers. His satire is up there with Twain.

I have a hunch this isn't true. It took humans until the 20th century to formulate their first precise questions, but I think at some point we will pass a tipping point, where the scope of things you can describe precisely can be extended merely by answering precise questions.

Unfortunately, there's no precise way of formulating what precise formulation means.

Sure there is. If you can write it as an input tape to a Universal Turing Machine, it's precisely formulated. If you can't, it isn't.

Sorry, what exactly do you mean by this? It sounds like you're saying a question is only precisely formulated if it's decidable, which seems way too strong. Do you mean something more like, it's precisely formulated if it's a mathematically-formulated question about the outputs of a UTM on a given class of inputs?

Then only programs for that particular UTM are "precisely formulated". How do you interpret them as questions, even?

There are a few (isomorphic) ways to formulate questions as UTMs. You could provide a description of how to interpret final states as a scoring measure, and ask for the highest-scoring input. Or you could provide a UTM that you know halts and ask what some designated part of its final state will be. These are equally powerful, given a genie with infinite computational resources, since there are trivial wrapper programs to convert between them; but the former degrades better under resource constraints.
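One direction of the trivial wrapper mentioned here can be sketched in a few lines (the names are mine, and a tiny exhaustive search stands in for the genie): turn "what does this halting program output?" into the scoring form by scoring a candidate 1 exactly when it equals the program's output.

```python
def output_question_as_scorer(program):
    """Wrap 'what does this halting program output?' as a scoring
    question: a candidate scores 1 iff it equals the program's output.
    The cost of (re-)running the program is paid by the box, not us."""
    def score(candidate):
        return 1 if candidate == program() else 0
    return score

# A toy box with a tiny search space: candidates 0..99.
scorer = output_question_as_scorer(lambda: 42)
best = max(range(100), key=scorer)  # recovers the program's output
```

The asymmetry the comment points at shows up immediately: under a resource bound, the scoring form can still return the best candidate found so far, while the exact-output form gives nothing until the program finishes.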

Turing machines might not be the best way to think about it, since we don't usually talk about retrieving outputs (other than halts/doesn't halt) from them, although there's no reason why we can't in principle. Perhaps a better alternative to talk about reducing to would be a simple recursive language like lambda calculus. But it's moot anyways, since we wouldn't actually perform the reduction, just confirm that it exists and doesn't involve any intractable steps like simulating a human that we haven't uploaded.

One approach---in the post---is to interpret a program as a scoring measure, and the "question" is to find an input that scores well.

I don't really care if you use that particular UTM, or some other UTM, or some other programming language, or a tractable mathematical definition, because they are all the same modulo the constant effort required to translate (which seems to be within human reach, if only barely, at this point).

It seems pretty unlikely that we will get an all-knowing oracle which we can only interrogate with questions that can be scored.

What we are *much* more likely to get is powerful forecasters that can answer the question: what symbol in this stream is most likely to come next? I wonder what we would be able to do with them. Some claim that such systems would pass the Turing Test, if they were powerful enough and were given a *huge* amount of relevant training data.