Questions are tools to help answerers optimize utility
post by ozziegooen · 2021-05-24T19:30:30.270Z
Epistemic & Scholarly Status: Fairly quickly written. I’m sure there’s better writing out there on the topic somewhere, but I haven’t found it so far. I have some confidence in the main point, but the terminology around it makes it difficult to be concrete.
TLDR
The very asking of a question presupposes multiple assumptions that break down when the answerer is capable enough. Questions stop making sense once a questioner has sufficient trust in the answerer. After some threshold, the answerer will instead be trusted to reach out directly whenever appropriate. I think this insight can help shed light on a few dilemmas.
I've been doing some reflection on what it means to answer a question well.
Questions are often poorly specified or chosen. A keen answerer should not only give an answer, but often also provide a better question. But how far can this go? If the answerer could be more useful by ignoring the question altogether, should they? Perhaps there is some fundamental reason why we should desire answerers to act as oracles instead of more general information feeders.
My impression is that situations where we have incredibly intelligent agents doing nothing but answering questions are artificial and contrived. Below I attempt to clarify this.
Let's define some terminology:
Asker: The agent asking the question.
Answerer: The agent answering the question. It could be the same as the asker, but probably later in time. Agent here just means “entity”; it is not meant to invoke the agent-vs.-tool distinction.
Asked question: The original question that the asker asks.
Enlightened question: The question that the asker should have asked, if they were to have had more information and insight. This obviously changes depending on exactly how much more information and insight they have.
Ideal answer: The best attempt to directly answer a question. This could either be the asked question or an enlightened question. Answer quality is evaluated for how well it answers the question, not how well it helps the asker.
Ideal response: The best response the answerer could provide to the asker. This is not the same as the ideal answer. Response quality is evaluated for how well it helps the asker, not how well it answers the question.
Utility: A representation of one's preferences. Utility function, not utilitarianism.
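To make the ideal answer / ideal response distinction concrete, here is a minimal sketch in Python. Nothing below comes from a formalism in the post (it has none); all of the function names are hypothetical stand-ins.

```python
# A minimal, illustrative sketch of the terminology above. The callables
# (candidate_answers, answer_quality, candidate_responses, asker_utility)
# are hypothetical stand-ins, assumed to be supplied by the reader.

def ideal_answer(question, candidate_answers, answer_quality):
    """The best attempt to directly answer the question.

    Scored by how well it answers the question, not by how much it
    helps the asker.
    """
    return max(candidate_answers(question),
               key=lambda a: answer_quality(question, a))

def ideal_response(asker, candidate_responses, asker_utility):
    """The best thing the answerer could say to the asker.

    Scored by the asker's utility. Note that the question is not even a
    parameter: the ideal response may ignore it entirely.
    """
    return max(candidate_responses(asker),
               key=lambda r: asker_utility(asker, r))
```

The asymmetry in the signatures is the point: once the answerer can model the asker well enough, the asked question stops carrying much information.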
Examples
Question: What's the best way to arrive at my dentist appointment today?
The answer to the stated question could be,
Take Route 83 at 6:30pm
The answer to an enlightened question could be,
Your dentist is sick, so your appointment will definitely be cancelled
A good response, knowing the question, but not answering it, might be,
It doesn't really matter what route you should take, but now that I know that you're concerned about the trip, I can tell you that the best way for you to save time today would be by ordering in food from Sophia Gartener at 3:30. It will arrive at 5.
A good response, ignoring the question (or correctly not updating based on it), and optimizing for utility, might be,
There's a possible power outage happening in the next few days. I suggest borrowing a generator sometime tomorrow. I've left instructions for how to do so in an email.
The puzzle with the latter responses is that they seem like poor answers, even though they are helpful responses. The obvious solution here is to flag that this is a very artificial scenario. In a more realistic case, the last response would have been given before the question was asked. The asker would learn to trust that the answerer would tell them everything useful before they even realized they needed to know it. They would likely either stop asking questions, or ask very different sorts of questions.
The act of asking a question implies (it almost presupposes) an information asymmetry. The asker assumes that the answerer doesn't have, or hasn't drawn attention to, some information. If the answerer actually does have this information (i.e. they can intuit what is valuable to the asker and when), then it wouldn't make sense to ask the question. This is an instance of the Gricean maxim of relevance.
So, questions make sense only until answerers get good enough. This is a really high bar. Being "good enough" would likely require a tremendous amount of predictive power and deep human understanding. The answerer would have to be much more intelligent in the given area than the asker for this to work.
Breakdown
If we were to imagine a breakdown of information conveyed in the above question, we could then identify a more precise and empathetic response from a very smart being.
You've asked me how to get to your dentist appointment. This reveals to me the following information:
1. You are unsure about how to get to a dentist appointment.
2. You believe that the expected information value you can get to optimize your route is more valuable than the cost of asking the question.
3. You expect that I either couldn't predict that such information would have been valuable to you without you asking it, or I wouldn't have told you unless asked.
4. You do not expect me to have much more valuable information I could relay to you at this time.
Human, I believe you have dramatically underestimated my abilities. I believe that you are severely incorrect about points 3 and 4. You have much to learn on how to interact with me.
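One way to make these four assumptions testable is to write them as explicit conditions. A hedged sketch in Python (the dataclass and its field names are invented for illustration; the post specifies no such structure):

```python
from dataclasses import dataclass

# A hypothetical encoding of the four assumptions above. All field
# names are invented for illustration.

@dataclass
class AskerBeliefs:
    unsure_of_answer: bool             # assumption 1
    info_value: float                  # assumption 2: expected value of the answer
    asking_cost: float                 # assumption 2: cost of asking
    expects_proactive_push: bool       # negation of assumption 3
    expects_better_info_exists: bool   # negation of assumption 4

def asking_makes_sense(beliefs: AskerBeliefs) -> bool:
    """Asking the question makes sense only if all four assumptions hold."""
    return (beliefs.unsure_of_answer                      # 1
            and beliefs.info_value > beliefs.asking_cost  # 2
            and not beliefs.expects_proactive_push        # 3
            and not beliefs.expects_better_info_exists)   # 4

# In the dentist example, a sufficiently capable answerer falsifies
# assumptions 3 and 4, and asking stops making sense.
```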
Students and Professors
Another analogy is that of students and professors. Many students don't ask their professors any questions, particularly in large classes. They expect the professors to lead them through all of the important information. They expect that the professors are better informed about which information is important.
In many situations, the asker is the one trying to be useful to the answerer, instead of it being the other way around. For example, the professor could ask the students questions to home in on what information might be most useful to them. I imagine that as the hypothetical empathetic professor improves along a few particular axes, they will be asked fewer questions and ask more of their own. In this latter case, the questions are mainly a form of elicitation to learn about the answerer.
Corrigibility
There could well be situations where answerers assume that they could respond better with a non-answer, but the askers would prefer otherwise. This becomes an issue of corrigibility. Here there could be a clear conflict between the two. I imagine these issues will represent a minority of the future uses of such systems, but those instances could be particularly important. This is a big rabbit hole that has been discussed in depth in the corrigibility posts and similar writing, so I'll leave it out of this post.
Takeaways
I think that:
- Answerers should generally try to figure out enlightened questions and answer those questions. This method is often the one that will be best for the asker's utility.
- If it is the case that answerers can better help the askers by ignoring the question and instead doing something else, that's better. They should try to give the ideal response, not the ideal answer.
- However, in almost all cases today, the best response to attempt is the ideal answer. This is true simply because the askers often have some key information not accessible to the answerers. When askers ask questions, they typically believe there's sufficient benefit for these particular questions to be answered, so humble answerers should often trust them.
- Once ideal responses are very different from ideal answers, people will stop asking questions. Questions primarily serve the function of helping responses be more useful, so if that no longer holds, questions will no longer be valuable.
Correspondingly, I imagine that as AGI gets close, people might ask fewer and fewer questions; instead, relevant information will be better pushed to them. A really powerful oracle wouldn't stay an oracle for long; it would quickly get turned into an information feed of some kind.
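As a rough operationalization of the takeaways above, here is a hedged sketch; the confidence score and threshold are invented quantities, not anything the post specifies.

```python
def choose_response(ideal_answer, ideal_response,
                    expected_gain_from_non_answer: float,
                    confidence_in_asker_model: float,
                    threshold: float = 0.9):
    """Humble default: give the ideal answer to the (enlightened) question.

    Depart from it only when the answerer both models the asker well
    (high confidence) and expects the non-answer to help more. The 0.9
    threshold is an arbitrary illustrative constant.
    """
    if (confidence_in_asker_model > threshold
            and expected_gain_from_non_answer > 0):
        return ideal_response
    return ideal_answer
```

As the confidence term approaches 1 for a capable answerer, the departure branch fires more and more often, which is exactly the regime where askers stop bothering to ask.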
Thanks to Rohin Shah for discussion and comments on this piece.
6 comments
comment by romeostevensit · 2021-05-25T04:59:38.223Z
Answerers can also split out the breakdown/tacit linked premises for the questioner, like you do in this post, if the questioner has the patience for that because the question is somewhat important to them. See also: Aristotle treating questions as only fully answered if they separately address four different types of whys.
comment by Crackatook (peterson-yook) · 2021-05-25T02:15:00.212Z
- Answerers should generally try to figure out enlightened questions and answer those questions. This method is often the one that will be best for the asker's utility.
This takeaway makes sense to me, and I would suggest separating questions into different categories (contexts, characters, etc.). In a large classroom, people often need clarifications rather than solutions, as thinking for more than one minute there can be counterproductive; the professor would answer directly and briefly, especially when questions are simple. In problem-solving or thesis writing, people get stuck, and enlightened-question-answering would help them a lot; people need different perspectives as well as solutions. In the case of daily-life questions, like the dentist appointment, the answerer would respond, or notify them in the morning since the answer is predictable, as you said above.
In the above 3 cases, the most benefit of enlightened-question-answering comes in complex and advanced situations like problem-solving. This suggests that your model is about giving different viewpoints, more than giving knowledge.
You also seemed concerned about people asking this answerer fewer questions, partially because its answers are off-topic at full force. Yeah, we don't need diverse viewpoints every time. It would be frustrating. But...
Correspondingly, I imagine that as AGI gets close, people might ask fewer and fewer questions; instead, relevant information will be better pushed to them. A really powerful oracle wouldn't stay an oracle for long; it would quickly get turned into an information feed of some kind.
To me, this is already happening. First, YouTube comes to mind. Second is LessWrong.com. The similarity of the two is that I rarely use the search bar, as content is already displayed, by an AI recommender or by a human administrator. Surely this is the kind of place where people want diverse viewpoints more and more.
On the other hand, I don't use the search bar because I don't come with questions. When I have new keywords, I use the search bar to play Baba Yetu or to watch more clips of Thomasin McKenzie. (On the third hand, YouTube uses cookies and reflects my recent interest in Sid Meier's Civilization. It is becoming "the good enough"!!) It is not exact to compare your answerer model and YouTube's recommendations, but this may show the changing paradigm of questioning.
comment by ozziegooen · 2021-05-25T14:24:30.762Z
Solid points, thank you.
On the latter (around information feeds), I very much agree. Those examples are good ones. I just released a follow-up post that goes into a bit more detail on some of this, here:
https://www.lesswrong.com/posts/kfY2JegjuzLewWyZd/oracles-informers-and-controllers
comment by Crackatook (peterson-yook) · 2021-05-26T03:32:33.619Z
Oh, this post surprises me! Similar thoughts are going on in your next post and this one.
comment by antanaclasis · 2021-05-25T00:07:41.291Z
Copyediting note: it appears that the parenthetical statement <(Note: agent here just means “being”, not> got cut off.
comment by ozziegooen · 2021-05-25T00:31:08.250Z
Fixed, thanks!