Comments

Comment by sandfort on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-09-01T10:43:15.998Z · LW · GW

Correction:

It seems that in general, the less certain any counterfactual oracle is about its prediction, the more self-confirming it is. This is because the possible counterfactual worlds in which we have or acquire self-confirming beliefs regarding the prediction will have a high expected score.

This is actually only true in certain cases, since in general many other counterfactual worlds could also have high expected scores. Specifically, it is true to the extent that the oracle is uncertain mostly about aspects of the world that would be affected by the prediction, and to the extent that self-confirming predictions lead to higher scores than any alternative.

Comment by sandfort on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-08-31T12:55:22.051Z · LW · GW

Submission (LB). The post's team-choosing example suggests a method for turning any low-bandwidth oracle into a counterfactual oracle: have the oracle output $a$ from the same set of possible outputs $A$; in case of erasure, calculate the score of a randomly chosen $a' \in A$ and set the reward to that score if $a = a'$ and to $0$ otherwise. Although the counterfactual low-bandwidth oracle is not any safer, it has the advantage of almost never requiring us to evaluate its score. Thus, by running multiple oracles in sequence (stopping the process after the first erasure event) we can, with high probability, receive the full series of answers as if from a high-bandwidth oracle.
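A minimal sketch of this erasure-scoring rule (all names here are illustrative, not from the post): on erasure, a random output is scored, and the oracle is paid only if its own output matches the random draw.

```python
import random

def erasure_reward(oracle_output, possible_outputs, score, erasure, rng=random):
    """Counterfactual scoring sketch: the oracle's answer is only ever
    scored in the erasure case, by comparison with a random draw from
    the same output set."""
    if not erasure:
        return None  # answer is read and used; no score is ever computed
    sampled = rng.choice(possible_outputs)
    return score(sampled) if oracle_output == sampled else 0.0
```

In expectation this pays the oracle the score of its own answer (scaled by $1/|A|$), while only requiring an actual evaluation in the erasure case.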


For example, we can ask each oracle in turn for advice on how to make a more effective processor. If erasure occurs, we attempt to make a processor with the advice obtained up to that point and use some measure of its performance as the score. If there is no erasure event, the final concatenation of answers forms a much safer guide to processor building than an equally large answer from a single oracle.
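The sequential procedure can be sketched as follows (function and parameter names are my own; each oracle is modeled as a callable that may condition on the answers gathered so far):

```python
import random

def run_oracle_chain(oracles, erasure_prob, seed=None):
    """Query low-bandwidth oracles in sequence, stopping at the first
    erasure event. Returns the answers gathered so far and whether
    erasure occurred."""
    rng = random.Random(seed)
    answers = []
    for oracle in oracles:
        if rng.random() < erasure_prob:
            # erasure: evaluate the advice gathered so far and score it
            return answers, True
        answers.append(oracle(answers))
    # no erasure anywhere: the full series is used and never scored
    return answers, False
```

With a small per-oracle erasure probability, the chain returns the full series with high probability, and the partial series is only ever acted on (for scoring) in the rare erasure case.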

1. It seems that in general, the less certain any counterfactual oracle is about its prediction, the more self-confirming it is. This is because the possible counterfactual worlds in which we have or acquire self-confirming beliefs regarding the prediction will have a high expected score. Hence:
Submission (CF). Given a high-bandwidth counterfactual oracle, use a second counterfactual oracle with a shared erasure event to predict its score. If the predicted score's distance from its upper bound is greater than some chosen limit, discard the high-bandwidth prediction.

Comment by sandfort on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-08-30T20:39:38.435Z · LW · GW

Submission (CF). Use a counterfactual oracle to send a message to ourselves with a time delay. We choose an episode length $T$ and a set of possible messages $M$. The oracle outputs a time $t < T$ and a message $m \in M$. We are automatically sent $m$ at time $t$ unless erasure occurs, in which case at the end of the episode we input a time $t'$ and a message $m'$ which we wish we had received at $t'$. The oracle's reward is then $1$ if $(t, m) = (t', m')$ and $0$ otherwise. We do not find out that erasure has occurred until the end of the episode, ensuring that events before $t$ are the same as in the counterfactual world.
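Assuming an all-or-nothing reward (names are illustrative), the erasure-case scoring can be sketched as:

```python
def delayed_message_reward(oracle_time, oracle_message, wished_time, wished_message):
    """All-or-nothing scoring sketch: full reward only when the oracle's
    (time, message) pair matches what we enter at the end of the episode."""
    if (oracle_time, oracle_message) == (wished_time, wished_message):
        return 1.0
    return 0.0
```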

Here are some examples, which can work with a single possible message, used as an alert. EDIT: I made the examples slightly more concrete.

  • Set the episode length equal to the planned duration of a research program and send an alert when continuing the program would produce unworthwhile or dangerous results. The time delay ensures we keep the useful results obtained before $t$, whereas receiving a warning at the start of the program could change the outcome. Of course, this relies on us being able to recognize dangerous results by the end of the episode and remaining in contact with the oracle until then.
  • Use a second counterfactual oracle to predict a series of future stock prices. If its erasure event does not occur, run a time-delayed oracle with the same episode length. We then use the predicted series to make investments. If the time-delayed oracle's erasure event occurs, we input the time at which we think our actions rendered the remaining predictions too inaccurate to use. When we receive an alert from the time-delayed oracle, we can discard the remaining predictions and/or rerun the oracles.

EDIT: Here are some potentially useful modifications we can make to the oracle:

  • Have the oracle accept input at any time during the episode, possibly multiple times. If erasure occurs, the last message entered during the episode is used as the target. For example, we could enter a default message at the start of the episode as a safety measure, which is used unless we subsequently update it.
  • We can allow the episode to be ended early. Other than happening earlier, the scoring mechanism remains the same. If erasure did not occur, nothing happens when we try to end the episode (we would then know that the oracle has made a bad prediction). We might wish to end the episode early if we were waiting for a specific event, or in extreme situations in which we think we will lose access to the oracle.
  • Instead of the oracle predicting the message and time delay at the start of the episode, have it output $(t, m)$ at any time $t_e \le t$. Erasure/shutdown now also happens at $t_e$, while scoring occurs at the end of the episode as before. This lets the oracle actively make use of observations from the start of the episode until $t_e$, increasing its predictive power (if the oracle is sufficiently powerful, this should not affect its output).
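The first two modifications can be sketched together (class and method names are hypothetical): the target message may be updated throughout the episode, the episode can be ended early, and the last entry before the episode ends is what the oracle is scored against in the erasure case.

```python
class DelayedMessageEpisode:
    """Sketch of the modified episode: input is accepted at any time,
    and on erasure the most recent message is used as the target."""

    def __init__(self, default_message):
        self.target = default_message  # safety default, used unless updated
        self.ended = False

    def update(self, message):
        if not self.ended:
            self.target = message  # last entry before the episode ends wins

    def end_early(self):
        self.ended = True  # scoring proceeds as usual, just earlier

    def score(self, oracle_message):
        # erasure case: all-or-nothing comparison with the final target
        return 1.0 if oracle_message == self.target else 0.0
```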