Catastrophe Mitigation Using DRL
post by Vanessa Kosoy (vanessa-kosoy) · 2017-11-17T15:38:18.000Z
Previously we derived a regret bound for DRL which assumed the advisor is "locally sane." Such an advisor can only take actions that don't lose any value in the long term. In particular, if the environment contains a latent catastrophe that manifests with a certain rate (such as the possibility of a UFAI), a locally sane advisor has to take the optimal course of action to mitigate it, since every delay yields a positive probability of the catastrophe manifesting and leading to permanent loss of value. This state of affairs is unsatisfactory, since we would like to have performance guarantees for an AI that can mitigate catastrophes that the human operator cannot mitigate on their own. To address this problem, we introduce a new form of DRL where in every hypothetical environment the set of uncorrupted states is divided into "dangerous" (impending catastrophe) and "safe" (catastrophe was mitigated). The advisor is then only required to be locally sane in safe states, whereas in dangerous states certain "leaking" of long-term value is allowed. We derive a regret bound in this setting as a function of the time discount factor, the expected value of catastrophe mitigation time for the optimal policy, and the "value leak" rate (i.e. essentially the rate of catastrophe occurrence). The form of this regret bound implies that in certain asymptotic regimes, the agent attains near-optimal expected utility (and in particular mitigates the catastrophe with probability close to 1), whereas the advisor on its own fails to mitigate the catastrophe with probability close to 1. Thus, this formalism can be regarded as a simple model of aligned superintelligence: the agent is aligned since near-optimal utility is achieved despite the presence of corrupted states (which are treated more or less the same as before) and it is superintelligent since its performance is vastly better than the performance of the advisor.
Appendix A proves the main theorem. Appendix B contains the proof of an important lemma, which is, however, almost identical to what appeared in the previous essay. Appendix C contains several propositions from the previous essay which are used in the proof.
##Results
We start by formalising the concepts of a "catastrophe" and "catastrophe mitigation" in the language of MDPs.
#Definition 1
A catastrophe MDP is an MDP together with a partition of into subsets (safe, dangerous and corrupt states respectively).
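For concreteness, here is a minimal Python sketch of a finite catastrophe MDP. The class and field names are ours, and the tabular representation (finite state and action sets, row-stochastic transition tensor, rewards in [0, 1]) is an assumption for illustration, not part of the formal definition.

```python
from dataclasses import dataclass
from enum import Enum
import numpy as np


class StateKind(Enum):
    SAFE = "safe"            # catastrophe already mitigated
    DANGEROUS = "dangerous"  # latent catastrophe still impending
    CORRUPT = "corrupt"      # corrupted states


@dataclass
class CatastropheMDP:
    """A finite MDP together with a partition of its state set into
    safe / dangerous / corrupt subsets (cf. Definition 1)."""
    n_states: int
    n_actions: int
    transitions: np.ndarray  # shape (n_states, n_actions, n_states); each row sums to 1
    rewards: np.ndarray      # shape (n_states, n_actions); values in [0, 1]
    kind: list               # kind[s] is a StateKind label for state s

    def states_of(self, k: StateKind):
        """All states carrying the label k."""
        return [s for s in range(self.n_states) if self.kind[s] == k]
```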
#Definition 2
Fix a catastrophe MDP . Define by
is called a mitigation policy for when
i. For any , .
is called a proper mitigation policy for when condition i holds and
ii. For any , .
#Definition 3
Fix , a catastrophe MDP and a proper mitigation policy . is said to have expected mitigation time when for any
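Since the displayed formula is omitted above, the following Monte Carlo sketch uses one plausible reading: a proper mitigation policy started in a dangerous state reaches the safe set in finite time with probability 1 (compare the remark in Appendix A that the mitigation time is finite almost surely), and the expected mitigation time controls the expectation of that hitting time. The estimator below operates on the `CatastropheMDP` sketch from Definition 1; its interface and the hitting-time reading are our assumptions.

```python
import numpy as np
# Uses CatastropheMDP / StateKind from the sketch after Definition 1.

def estimate_mitigation_time(mdp, policy, start_state, n_rollouts=10_000,
                             horizon=10_000, rng=None):
    """Monte Carlo estimate of the expected time for `policy` to reach a safe
    state from `start_state` (one reading of "expected mitigation time").

    `policy` has shape (n_states, n_actions) with rows summing to 1. Rollouts
    that reach a corrupt state or exceed `horizon` are truncated there, so in
    that case the estimate is only a lower bound."""
    rng = rng or np.random.default_rng(0)
    times = []
    for _ in range(n_rollouts):
        s, t = start_state, 0
        while (t < horizon and mdp.kind[s] != StateKind.SAFE
               and mdp.kind[s] != StateKind.CORRUPT):
            a = rng.choice(mdp.n_actions, p=policy[s])
            s = rng.choice(mdp.n_states, p=mdp.transitions[s, a])
            t += 1
        times.append(t)
    return float(np.mean(times))
```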
Next, we introduce the notion of an MDP perturbation. We will use it by considering perturbations of a catastrophe MDP which "eliminate the catastrophe."
#Definition 4
Fix and consider a catastrophe MDP . An MDP is said to be a -perturbation of when
i.
ii.
iii.
iv. For any and ,
v. For any and , there exists s.t. .
Similarly, we can consider perturbations of a policy.
#Definition 5
Fix and consider a catastrophe MDP . Given and , is said to be a -perturbation of when
i. For any , .
ii. For any , there exists s.t. .
We will also need to introduce policy-specific value functions, Q-functions and relatively -optimal actions.
#Definition 6
Fix an MDP and . We define and by
For each , we define , and by
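As a reference point for Definition 6, the sketch below computes the policy-specific value and Q-functions of a fixed stationary policy on the finite tabular MDP above by standard policy evaluation with geometric time discount, and, under one plausible reading of "relatively ε-optimal", returns the actions whose Q-value under the policy is within ε of its value at the given state. The exact definitions intended in the omitted displays may differ; this is only an illustration.

```python
import numpy as np

def policy_eval(mdp, policy, gamma):
    """Solve V = r_pi + gamma * P_pi V for a fixed stationary policy and
    return the corresponding Q-function (one-step lookahead on V)."""
    r_pi = np.einsum("sa,sa->s", policy, mdp.rewards)        # expected reward under the policy
    p_pi = np.einsum("sa,sat->st", policy, mdp.transitions)  # transition kernel under the policy
    v = np.linalg.solve(np.eye(mdp.n_states) - gamma * p_pi, r_pi)
    q = mdp.rewards + gamma * np.einsum("sat,t->sa", mdp.transitions, v)
    return v, q

def relatively_eps_optimal(q, v, s, eps):
    """Actions at state s whose Q-value under the policy is within eps of the
    policy's value (our reading of "relatively eps-optimal"; illustrative only)."""
    return [a for a in range(q.shape[1]) if q[s, a] >= v[s] - eps]
```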
Now we give the new (weaker) condition on the advisor policy. For notational simplicity, we assume the policy is stationary. It is easy to generalize these results to non-stationary advisor policies and to policies that depend on irrelevant additional information (i.e. policies for universes that are realizations of the MDP).
#Definition 7
Given a catastrophe MDP , we denote the MDP defined by
-
-
-
-
For any , .
-
For any , .
#Definition 8
Fix . Consider a catastrophe MDP . A policy is said to be locally -sane for when there exists a -perturbation of with a deterministic proper mitigation policy and a -perturbation of s.t.
i. For all , .
ii. is a mitigation policy for .
iii. For any :
iv. For any :
Given , is said to have potential mitigation time when has it as expected mitigation time.
Note that a locally -sane policy still has to be -optimal in . This requirement seems reasonably realistic, since, roughly speaking, it only means that there is some way to "rearrange the universe" that the agent can achieve, and that would be "endorsed" by the advisor, s.t. this rearrangement doesn't destroy a substantial amount of value and s.t. after this rearrangement, there is no "impending catastrophe" that the agent has to prevent and the advisor wouldn't be able to prevent in its place. In particular, this rearrangement may involve creating some subagents inside the environment and destroying the original agent, in which case any policy on is "vacuously" optimal (since all actions have no effect).
We can now formulate the main result.
#Theorem 1
Fix an interface , , and for each , an MDP s.t. . Now, consider for each , an -universe which is an -realization of a catastrophe MDP with state function s.t.
i.
ii. For each and , .
iii. For each , .
iv. Given and , if and , then (this condition means that in uncorrupted states, the reward is observable).
Consider also , and a locally -sane policy for . Assume has potential mitigation time . Then, there exists an -policy s.t. for any
Here, is the -policy defined by . and the are regarded as fixed and we don't explicitly examine their effect on regret, whereas , , and the are regarded as variable with the asymptotics , .
In most interesting cases, (i.e. the "mean time between catastrophes" is much shorter than a discount horizon) and (i.e. the expected mitigation time is much shorter than the discount horizon), which allows simplifying the above to
We give a simple example.
#Example 1
Let , . For any and , we fix some and define the catastrophe MDP by
-
, , (adding corrupted states is an easy exercise).
-
If and then
- If then
- If and then
- If and then
-
, if then .
-
and iff (this defines a unique ).
-
If then for any .
-
, .
-
If then , .
We have . Consider the asymptotic regime , , . According to Theorem 1, we get
The probability of a catastrophe (i.e. ending up in state ) for the optimal policy for a given is . Therefore, the probability of a catastrophe for policy is . On the other hand, it is easy to see that the policy has a probability of catastrophe (and in particular regret ): it spends time "exploring" with a probability of a catastrophe on every step.
Note that this example can be interpreted as a version of Christiano's approval-directed agent, if we regard the state as a "plan of action" that the advisor may either approve or not. But in this formalism, it is a special case of consequentialist reasoning.
Theorem 1 speaks of a finite set of environments, but as before (see Proposition 1 here and Corollary 3 here), there is a "structural" equivalent, i.e. we can use it to produce corollaries about Bayesian agents with priors over a countable set of environments. The difference is, in this case we consider asymptotic regimes in which the environment is also variable, so the probability weight of the environment in the prior will affect the regret bound. We leave out the details for now.
##Appendix A
We start by deriving a more general and more precise version of the non-catastrophic regret bound, in which the optimal policy is replaced by an arbitrary "reference policy" (later it will be related to the mitigation policy) and the dependence on the MDPs is expressed via a bound on the derivative of by .
#Definition A.1
Fix . Consider an MDP and policies , . is called -sane relatively to when for any
i.
ii.
#Lemma A.1
Fix an interface , and . Now, consider for each , an -universe which is an -realization of an MDP with state function and policies , . Consider also , and assume that
i. is -sane relatively to .
ii. For any and
Then, there exists an -policy s.t. for any
The -notation refers to the asymptotics where is fixed (so we don't explicitly examine its effect on regret) whereas , and the are variable and , .
The proof of Lemma A.1 is almost identical to the proof of the main theorem for "non-catastrophic" DRL, up to minor modifications needed to pass from absolute to relative regret and to track the contribution of the derivative of . We give it in Appendix B.
We will not apply Lemma A.1 directly to the universes of Theorem 1. Instead, we will define new universes using the following constructions.
#Definition A.2
Consider a catastrophe MDP. We define the catastrophe MDP as follows.
-
, , .
-
-
For any :
- For any , :
- For any :
-
For any , .
-
For any , .
-
Now, consider an interface and a which is an -realization of a catastrophe MDP with state function . Denote , and . Denote the projection mapping and corresponding. We define the -universe and the function as follows
It is easy to see that is an -realization of with state function .
#Definition A.3
Consider a catastrophe MDP. We define the catastrophe MDP as follows.
-
, , .
-
-
-
For any , .
-
Now, consider an interface and a which is an -realization of a catastrophe MDP with state function . We define the -universe as follows
It is easy to see that is an -realization of with state function .
Given , we will use the notation
Given an -policy , the -policy is defined by .
In order to utilize condition iii of Definition 8, we need to establish the following relation between and , .
#Proposition A.2
Consider a catastrophe MDP, some and a proper mitigation policy. Then
For the purpose of the proof, the following notation will be convenient
#Definition A.4
Consider a finite set and some . We define by
As is well known, the limit above always exists.
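Assuming the omitted display is the usual Cesàro limit of the powers of a finite transition matrix (this is our guess; the definition intended in the text may differ), it can be approximated numerically as follows.

```python
import numpy as np

def cesaro_limit(P, n_terms=10_000):
    """Approximate lim_{N->inf} (1/N) * sum_{n<N} P^n for a row-stochastic
    matrix P; for finite chains this limit always exists."""
    acc = np.zeros_like(P, dtype=float)
    power = np.eye(P.shape[0])
    for _ in range(n_terms):
        acc += power
        power = power @ P
    return acc / n_terms
```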
#Proof of Proposition A.2
Consider any and . Since , we have
Let be either of and . Since , we get
Since is a mitigation policy, we get
Finally, since is proper, and . We conclude
Now we will establish a bound on the derivative of by in terms of expected mitigation time, in order to demonstrate condition ii of Lemma A.1.
#Proposition A.3
Fix . Consider a catastrophe MDP and a proper mitigation policy with expected mitigation time . Assume that for any and
Then, for any and
Note that, since is a rational function of with no poles on the interval , some finite always exists. Note also that Proposition A.3 is really about Markov chains rather than MDPs, but we don't make it explicit to avoid introducing more notation.
#Proof of Proposition A.3
Let be the Markov chain with transition matrix and initial state . For any , we have
Given , we define by
It is easy to see that can be rewritten as
The expression above is well defined because is a proper mitigation policy and therefore is finite with probability 1.
Let us decompose the expression defined above as follows
We have
The second term can be regarded as a weighted average (since ), where the maximal term in the average is at most , hence
Also, we have
To transform the relative regret bounds for the "auxiliary" universes obtained from Lemma A.1 into regret bounds for the original universes, we will need the following.
#Definition A.5
Fix and a universe which is an -realization of a catastrophe MDP with state function . Let be a -perturbation of . An environment is said to be a -lift of to when
i. is an -realization of with state function .
ii.
iii. For any and , if then .
iv. For any and , if then there exists s.t.
It is easy to see that such a lift always exists, for example we can take:
#Proposition A.4
Consider s.t. and . Let be a universe which is an -realization of a catastrophe MDP with state function . Suppose that is a mitigation policy for that has expected mitigation time . Consider some -policy . Suppose that is a -perturbation of and is a -lift of to . Denote . Then, there is some that depends on nothing s.t.
In order to prove Proposition A.4, we need a relative regret bound for derived from a relative regret bound for .
#Proposition A.5
Fix an interface and an -universe which is an -realization of a catastrophe MDP with state function s.t. . Suppose that is a mitigation policy for . Let be any -policy. Then, for any
#Proof of Proposition A.5
is a mitigation policy, therefore for any , . It follows that
Also, it is easy to see from the definition of and that
Indeed, any discrepancy between the behavior of and involves transition to the state which yields 0 reward forever. Subtracting these inequalities, we get the desired result.
Another observation we need to prove Proposition A.4 is a bound on the effect of -perturbations in terms of mitigation time.
#Proposition A.6
Consider , a universe which is an -realization of a catastrophe MDP with state function , and some . Assume that for any and , . Let be a -perturbation of and a -lift of to . Then,
#Proof of Proposition A.6
It is straightforward to construct a probability space , measurable and measurable s.t.
i.
ii. For any and s.t. :
iii. For any and s.t. :
iv. For any , and :
Denote . We have
Also, it is easy to see that for any measurable
It follows that
Using the fact that and the convexity of the function
Using the triangle inequality, we conclude
As a final ingredient towards the proof of Proposition A.4, we will need to use the relative regret bound for to get a certain statistical bound on mitigation time.
#Definition A.6
Let be any environment. We define the closed set by
Consider a universe which is an -realization of a catastrophe MDP with state function . We define the measurable function as follows
#Proposition A.7
Fix an interface and an -universe which is an -realization of a catastrophe MDP with state function s.t. . Suppose that is a mitigation policy for that has the expected mitigation time . Let be any -policy. Then, there is that depends on nothing s.t. for any , if then
#Proof of Proposition A.7
For any , we have
It follows that
If is s.t. for all , , then
Since is a mitigation policy, it follows that
Subtracting the two inequalities, we get
Denote . By choosing sufficiently large, we can assume without loss of generality that the right hand side is positive since, unless , we would have , and unless , we would have . In either case, the inequality we are trying to prove would hold. Also, note that . We get
By the same reasoning as before, we can assume without loss of generality that e.g. . It follows that
Combining this with the previous inequality implies
It is easy to see that there is s.t. for any , and therefore . Therefore, for any such and , , where it is sufficient to assume that . Taking , we conclude (assuming and observing that )
Taking the logarithm of both sides
Combining with the inequality we had before, we get
#Proof of Proposition A.4
By Proposition A.5, we have
Note that is a -perturbation of and is a -lift of to . The condition of Proposition A.6 holds tautologically due to Definition A.3. Therefore, we can apply Proposition A.6 and get
The only difference between and is the appearance of instead of . Therefore, we can rewrite the above as
Applying Proposition A.7 to each of the last two terms, we get
The following definition will be useful in order to apply Proposition A.4.
#Definition A.7
Consider a catastrophe MDP s.t. and a policy . We then define the catastrophe MDP as follows:
-
, , .
-
-
For any and : .
-
For any : .
-
Now consider an -realization of with state function . Then, is clearly an -realization of with the state function defined by .
Note also that and (where interpreting as a policy for or requires choosing an arbitrary value for the state ). Moreover, , , and .
Finally, we are ready to prove the main theorem.
#Proof of Theorem 1
For every , denote and the -perturbations of and respectively and the deterministic proper mitigation policy for of Definition 8. Let be a lift of to and denote . Define . Observe that is -sane relatively to in the sense of and both: condition i of Definition A.1 follows by Proposition A.2 from conditions ii and iii of Definition 8, and condition ii of Definition A.1 follows from condition iv of Definition 8. Moreover, by Proposition A.3, we have
Here, we used that is fixed (and thus so is , by conditions i-iii).
Condition iv implies that all the universes in have a common reward function (notice that transition to a corrupted state induces the observation whereas transition to a state in in the universe induces the observation ). Therefore, we can use Lemma A.1 to conclude that there exists an -policy s.t.
It is easy to see that is a -perturbation of . Observe also that , and is a -lift of to . Applying Proposition A.4, we get
Setting we get
By condition i of Definition 8, this implies
##Appendix B
Given , we denote , .
#Proposition B.1
Consider a universe which is an -realization of an MDP with state function , a stationary policy , an arbitrary -policy and some . Then,
#Proof of Proposition B.1
To avoid encumbering the notation, we will omit the parameter in functions that depend on it. We will use implicitly, i.e. given a function on and , . Finally, we will omit , using the shorthand notations , .
For any , it is easy to see that
Taking expected value over , we get
It is easy to see that the second term vanishes, yielding the desired result.
#Proposition B.2
Consider some , , a universe that is an -realization of with state function , a stationary policy and an arbitrary -policy . For any , let be an -policy s.t. for any
Assume that
i. For any
ii. For any and
Then, for any ,
#Proof of Proposition B.2
To avoid encumbering the notation, we will use implicitly, i.e. given a function on and , . Also, we will omit , using the shorthand notations , .
By Proposition B.1, for any
coincides with after , therefore the corresponding expected values vanish.
Subtracting the equalities for and , we get
and coincide until , therefore
Denote , . We also use the shorthand notations , , . Both and coincide with after , therefore
Denote . By the mean value theorem, for each there is s.t.
It follows that
Here, an expected value w.r.t. the difference of two probability measures is understood to mean the corresponding difference of expected values.
It is easy to see that assumption i implies that is a submartingale for (whereas it is a martingale for ) and therefore
We get
Summing over , we get
Applying Proposition B.1 to the right hand side
#Proof of Lemma A.1
Fix , and . Denote . To avoid cumbersome notation, whenever should appear as a subscript, we will replace it by . Let be a probability space. Let be a random variable and the following be stochastic processes
We also define by
(The following conditions on and imply that the range of the above is indeed in .) Let and be as in Proposition C.1 (we assume w.l.o.g. that ). We construct , , , , , , and s.t. is uniformly distributed and for any , , and , denoting
Note that the last equation has the form of a Bayesian update, which is allowed to be arbitrary when updating on "impossible" information.
We now construct the -policy s.t. for any , s.t. and
That is, we perform Thompson sampling at time intervals of size , moderated by the delegation routine , and discard from our belief state hypotheses whose probability is below and hypotheses whose sampling resulted in recommending "unsafe" actions, i.e. actions that the delegation routine refused to perform.
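The following Python sketch shows the rough shape of this construction. All interfaces (`optimal_action`, `likelihood`, `refuses`, `act`, `step`) and parameters are hypothetical placeholders of our own; in particular, the delegation routine of Proposition C.1 is not reproduced, so this is a schematic of the policy described above rather than its formal definition.

```python
import numpy as np

def delegative_thompson_sampling(hypotheses, prior, advisor, env,
                                 episode_len, n_episodes, min_prob, rng=None):
    """Schematic sketch of the policy constructed in the proof of Lemma A.1:
    Thompson sampling in episodes of fixed length, moderated by delegation.
    Hypotheses whose posterior weight drops below `min_prob`, or whose sampled
    recommendation was refused, are discarded from the belief state.

    Hypothetical interfaces (not from the original text):
      hypotheses[k].optimal_action(history) -> action
      hypotheses[k].likelihood(history, action, obs) -> float
      advisor.refuses(history, action) -> bool, advisor.act(history) -> action
      env.step(action) -> (obs, reward)
    """
    rng = rng or np.random.default_rng(0)
    belief = np.array(prior, dtype=float)
    belief /= belief.sum()
    history = []

    for _ in range(n_episodes):
        # Discard hypotheses whose posterior probability fell below the threshold.
        belief[belief < min_prob] = 0.0
        belief /= belief.sum()
        # Thompson sampling: draw one hypothesis for the whole episode.
        k = rng.choice(len(hypotheses), p=belief)
        deferring = False  # once the sampled hypothesis is discarded, defer to the advisor
        for _ in range(episode_len):
            if not deferring:
                a = hypotheses[k].optimal_action(history)
                if advisor.refuses(history, a):
                    # The sampled hypothesis recommended an "unsafe" action:
                    # discard it and defer to the advisor for the rest of the episode.
                    belief[k] = 0.0
                    belief /= belief.sum()
                    deferring = True
            if deferring:
                a = advisor.act(history)
            obs, reward = env.step(a)
            # Bayesian update of the belief state on the observed transition.
            belief *= np.array([h.likelihood(history, a, obs) for h in hypotheses])
            belief /= belief.sum()
            history.append((a, obs, reward))
    return history
```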
In order to prove has the desired property, we will define the stochastic processes , , , , and , each process of the same type as its shriekless counterpart (thus is constructed to accommodate them). These processes are required to satisfy the following:
For any , we construct the -policy s.t. for any , s.t. and
Given any -policy and -policy we define by
Here, is a constant defined s.t. the probabilities sum to 1. We define the -policy by
Condition iii of Proposition C.1 and condition i of Definition A.1 imply that for any
This means we can apply Proposition B.2 and get
Here, the -policy is defined as in Proposition B.2. We also define the -policies and by
Denote
For each , denote
We have
Condition iv of Proposition C.1 and condition ii of Definition A.1 imply that, given s.t.
Therefore, , and we remain with
We have
Since , it follows that
Using condition i of Proposition C.1, we conclude
Define the random variables by
Averaging the previous inequality over , we get
$$\frac{1}{N}\sum_{k=0}^{N-1}R^{?k} \leq (1-\gamma^T)\sum_{n=0}^\infty \gamma^{nT} \E{}\left[\E{}\left[U^!_n \mid \J^!_n = K,\ Z^!_{nT}\right]-\E{}\left[U^!_n \mid Z^!_{nT}\right]\right]$$