Introduction to inaccessible information

post by Ryan Kidd (ryankidd44) · 2021-12-09T01:28:48.154Z · LW · GW · 6 comments

Contents

  What is inaccessible information?
  How might inaccessible information arise in ML models?
    Training incentives that favour the instrumental policy
    Learned deception
    Incompatibility with human world models
  What insight does inaccessible information offer for alignment research?
6 comments

This post was written under Evan Hubinger's direct guidance and mentorship, as a part of the Stanford Existential Risks Institute ML Alignment Theory Scholars (MATS) program [LW · GW].

TL;DR: If we want to understand how AI models make decisions, and thus assess alignment, we generally want to extract their internal latent information. That information might be inaccessible to us by default and defy easy extraction. Resolving this problem might be critical to AI alignment.

Paul Christiano's conception of 'inaccessible information' [AF · GW] seems to be an important and useful lens for interpreting many key concepts in AI alignment. In this post, I will attempt to explain (in an accessible way):

  • What inaccessible information is;
  • How inaccessible information might arise in ML models; and
  • What insight inaccessible information offers for alignment research.

What is inaccessible information?

Roughly, inaccessible information is anything a model knows that an external agent cannot reliably learn. For instance, in the process of training, a sufficiently large model might acquire a complicated internal structure that is opaque to direct inspection. If we want to understand the process by which this 'black box' turns inputs into outputs, perhaps to learn new theories/heuristics [AF · GW], check for undesirable future behaviour [? · GW] or inspect the model's limitations (e.g. myopia [AF · GW]), it seems natural to also train the model to produce explanations for its outputs. However, the model might learn to produce convincing explanations that are not necessarily true representations of its input-output process. If true explanations and false but convincing ones are indistinguishable to the training mechanism (e.g. a human with a reward button), the model may converge to the instrumental policy [AF · GW]: 'give whatever explanation is most convincing'. This scenario is clearly counter to the desired outcome, as the model does not learn to give strictly true explanations and its outputs therefore remain contingent on inaccessible information.
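To make this failure mode concrete, here is a minimal toy sketch (my own illustration, not a real training setup, with made-up convincingness ratings): the reward for an explanation depends only on how convincing the evaluator finds it, so truth never enters the training signal and nothing selects for the honest policy over the instrumental one.

```python
# Toy illustration only: `evaluator_convincingness` stands in for "a human with a
# reward button" who rates plausibility but cannot see the model's actual reasoning.
evaluator_convincingness = {
    "true explanation of the model's reasoning (sounds odd to a human)": 0.4,
    "false but convincing explanation": 0.9,
}

def reward(explanation: str, explanation_is_true: bool) -> float:
    # Note: `explanation_is_true` is never consulted -- the signal is convincingness
    # alone, so training has no reason to prefer the honest policy.
    return evaluator_convincingness[explanation]

print(reward("true explanation of the model's reasoning (sounds odd to a human)", True))   # 0.4
print(reward("false but convincing explanation", False))                                   # 0.9
```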

Generally, we might be unable to distinguish a true explanation of a model's output from a false but convincing one if:

A related property to 'explainability' is 'transparency [AF · GW]', also called 'interpretability'. A model is transparent to an external agent if inspection of the model is sufficient to understand how it operates. From the perspective of such an external agent, more transparent models generally contain less inaccessible information. Model transparency can be increased by:

How might inaccessible information arise in ML models?

The training-induced structure of a machine learning model can be thought of as containing 'latent information' that shapes the model's output. Models that perform well on complicated tasks in diverse environments (e.g. 'What will Alice say next?') might acquire latent information that aids predictions (e.g. a model of Alice's thoughts). Larger models, or models that include highly compressed representations of the environment (i.e. big world, small map), are generally less accessible to direct inspection. A model's latent information might be difficult to render accessible by training it to give explanations or be otherwise transparent if, for example, there are:

  • Training incentives that favour the instrumental policy;
  • Learned deception; or
  • Incompatibility with human world models.

Training incentives that favour the instrumental policy

In some situations, training might bias AI models towards dishonesty. An otherwise non-deceptive model trained to produce explanations for its outputs might converge to the instrumental policy if the model is powerful enough to produce false but convincing explanations and:

  1. Training incentivises a short policy description length; or
  2. True explanations are generally less convincing than false but convincing explanations.

The instrumental policy is short, as it simply asks the model, 'What explanation is most convincing?' and leverages the model's already-established power to produce accessible output. A policy of producing true explanations likely requires a further 'translation mechanism' to explain latent information in an accessible way, and thus has a longer description length.
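As a toy sketch of this simplicity-bias argument (the scoring rule and numbers are invented purely for illustration, not taken from the post): if training effectively scores policies by convincingness minus a penalty on description length, and both policies are equally convincing to the evaluator, the shorter instrumental policy wins.

```python
# Candidate policies: (convincingness achieved, description length in arbitrary units).
candidate_policies = {
    "instrumental: output whatever explanation is most convincing": (1.0, 10),
    "honest: translate latent reasoning into a true explanation":   (1.0, 50),
}

LENGTH_PENALTY = 0.01  # stand-in for an inductive bias towards short programs

def training_score(convincingness: float, description_length: float) -> float:
    return convincingness - LENGTH_PENALTY * description_length

best = max(candidate_policies, key=lambda name: training_score(*candidate_policies[name]))
print(best)  # the instrumental policy is selected under these assumptions
```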

True explanations might sometimes be less convincing than false ones if the latter seem more plausible given incomplete data, or given human biases and mistakes. For example, given the model prediction, 'Bob will refuse ice cream,' the false explanation, 'Bob refuses ice cream on cold days and today is cold,' might be a more convincing narrative about how the model operates than the true explanation, 'Bob refuses to eat ice cream every third day and today is a refusal day.' In the latter case, the truth 'looks wrong'.
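A toy way to put numbers on the Bob example (the priors and fit scores below are entirely made up): if the human evaluator judges an explanation by prior plausibility times how well it fits the little data they have seen, the true rule can lose even when it fits the data better.

```python
candidates = {
    # explanation: (human prior on this kind of rule, fit to the human's few observations)
    "Bob refuses ice cream on cold days and today is cold":             (0.30, 0.7),
    "Bob refuses ice cream every third day and today is a refusal day": (0.02, 0.9),
}

def convincingness(prior: float, fit: float) -> float:
    # The human's prior dominates: a familiar story beats an odd-sounding true rule.
    return prior * fit

scores = {explanation: convincingness(*pf) for explanation, pf in candidates.items()}
print(max(scores, key=scores.get))  # the false 'cold day' story wins
```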

Christiano has proposed [AF · GW] that the instrumental policy might be avoided with the following training process:

  1. Change the learning process so that it does not have a strong inductive bias towards "predict human answers," by allowing the complexity of the honest question-answering to "pay for itself" by constraining the space of possible human-models.
  2. Introduce a bias towards the intended model by using a more complex labeling process to answer questions where a human answers incorrectly.
  3. Be really careful to avoid penalizing honest answers, by only judging comparisons between two answers where we are confident one is better than the other and getting the model to help us.
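As a toy illustration of point 3 above (my own sketch; `judge` is a hypothetical stand-in for an assisted human judgement, given a deliberately trivial rule just so the code runs): comparisons are only turned into training pairs when the judge is confident one answer is better, and the process abstains otherwise rather than risk penalising an honest answer.

```python
from typing import Optional, Tuple

CONFIDENCE_THRESHOLD = 0.9

def judge(answer_a: str, answer_b: str) -> Tuple[Optional[str], float]:
    # Stand-in for an (assisted) human judgement: prefers the longer answer, and
    # reports low confidence when the answers are close in length.
    if abs(len(answer_a) - len(answer_b)) < 5:
        return None, 0.5
    preferred = answer_a if len(answer_a) > len(answer_b) else answer_b
    return preferred, 0.95

def make_training_pair(answer_a: str, answer_b: str) -> Optional[Tuple[str, str]]:
    preferred, confidence = judge(answer_a, answer_b)
    if preferred is None or confidence < CONFIDENCE_THRESHOLD:
        return None  # abstain: do not train on comparisons we are unsure about
    rejected = answer_b if preferred == answer_a else answer_a
    return preferred, rejected

print(make_training_pair("short honest answer", "a much longer, very convincing answer"))
print(make_training_pair("answer one", "answer two"))  # None: the judge is not confident
```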

Learned deception

If an AI model acquires the capabilities of an 'optimiser [? · GW]' and learns to be deceptive, it might try to circumvent attempts to access its latent information. Deceptive alignment [? · GW] occurs when the training objective is at odds with the model's internally optimised 'mesa-objective', causing the model to act deceptively in order to influence the training process and avoid modification. A deceptively aligned model may be instrumentally motivated to obfuscate its internal mechanism and, when elicited, give false but convincing explanations for some outputs, including about its own deception status. If such a model can trick or avoid our tests for deception, the model's alignment status and mesa-objective are inaccessible information.

More generally, a model's alignment status might be inaccessible independent of deception if we don't possess tests that can assess mesa-objective inner alignment [AF · GW] or training mechanisms [AF · GW] that can guarantee a model avoids acquiring pseudo-alignment [AF · GW] or even optimisation. Whether a model is performing optimisation at all might likewise be inaccessible, particularly if the model is already deceptive and powerful.

Incompatibility with human world models

Even if we can reliably circumvent the failure modes associated with model honesty, we might fail to elicit true representations of latent information if an AI's potentially alien world model is not appropriately translated into human terms [AF · GW]. The simplest method for an AI to answer a question (e.g. 'Why is Alice coughing?') could involve generating predictive observations and directly mapping these to states in the human world model. However, some potential states (e.g. 'Sickness caused by exotic nanobots') might not be entirely representable in the human world model (which excludes 'exotic nanobots'). In this case, the explanation might default to, 'What would a human say given this observation?' (e.g. 'Alice is sick') rather than, 'What would a human say if they had the AI's world model and observations?' (e.g. 'Nanobots!').

Ideally, an AI will generate observations, infer states in its world model, map these (crucially) to states in the human world model, and then produce an accessible and true answer (e.g. 'Alice is sick because of [uninterpretable N], which will confound conventional diagnosis and might signify a technological catastrophe!'). It is possible that the mapping between AI and human world models is lossy and some aspect of the AI's decision-making process remains inaccessible (i.e. the [uninterpretable N]). Christiano argues [AF · GW] that such a lossy mapping may not be a further problem for alignment, provided that the AI is competitive and consequences are accessible (e.g. even if exotic nanobot capabilities are not accessible, their impacts on humans are). Thus, an impact aligned [AF · GW] AI could forestall exotic failures using exotic capabilities while only needing to avoid bad accessible consequences.
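A toy sketch of that translation step (my own framing; the state names and mapping below are invented): AI world-model states are mapped into human terms where a counterpart exists, and the lossy remainder is flagged as uninterpretable rather than silently dropped.

```python
AI_TO_HUMAN_ONTOLOGY = {
    "elevated_inflammatory_markers": "Alice is sick",
    "exotic_nanobot_activity": None,  # no counterpart in the human world model
}

def translate(ai_states: list) -> str:
    human_parts, residue = [], []
    for state in ai_states:
        human_state = AI_TO_HUMAN_ONTOLOGY.get(state)
        if human_state is None:
            residue.append(state)        # keep, and flag, rather than silently drop
        else:
            human_parts.append(human_state)
    answer = " and ".join(human_parts) if human_parts else "no accessible explanation"
    if residue:
        answer += f", partly because of [uninterpretable: {', '.join(residue)}]"
    return answer

print(translate(["elevated_inflammatory_markers", "exotic_nanobot_activity"]))
# Alice is sick, partly because of [uninterpretable: exotic_nanobot_activity]
```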

What insight does inaccessible information offer for alignment research?

It seems to me that AI alignment efforts in general should not ignore the problem of inaccessible information. Developing powerful AI capabilities may require models to acquire, through training, large amounts of decision-relevant latent information that is inaccessible by default. Verifying or incentivising acceptable [AF(p) · GW(p)] model behaviour likely requires accessing latent information or preventing the occurrence of certain inaccessible properties, such as deception. Focusing on resolving inaccessible information may yield useful tools and techniques for producing robustly capable and aligned models.

I think that powerful AI systems are likely to be inaccessible by default because:

  1. Acquiring highly complex latent information likely confers a competitive advantage [AF · GW] to an AI system in complex domains such as long-term forecasting or modelling humans. If AI systems with insufficient latent information are deployed in complex domains, they will likely be outcompeted by systems with more latent information, incentivising deployment of the latter. More latent information suggests a higher likelihood of inaccessibility, as there are larger circuits, a greater simplicity bias towards the instrumental policy and more potential for world model mismatches. Additionally, deception probably only occurs in sufficiently large and complex models.
  2. Rendering latent information accessible probably decreases the training competitiveness [AF · GW] of building a powerful AI system. For example, training a model to be transparent or give true explanations for its outputs probably increases training time beyond that necessary for developing capabilities. If training competitiveness is prioritised, less accessible models are likely to be deployed.
  3. Generally powerful AI systems are likely to be mesa-optimisers [? · GW], which may be instrumentally motivated to acquire deceptive behaviour and circumvent accessibility techniques. Even if aligned, mesa-optimisers may be more inscrutable to inspection than simpler models, such as decision trees, if the optimisation process is not hard-coded [AF · GW].
  4. Human values might be irreducibly complex and hard to define robustly. Even if an AI learns human values by default [AF · GW], the value function may be inaccessible, and thus difficult to verify, if human values are not simple to parametrise.

I think that proposed methods [AF · GW] for building safe, advanced prosaic AI likely require resolving instances of inaccessible information because:

  1. Auditing models with transparency tools requires that such tools can extract meaningful information from massive neural networks. If model training does not select for transparency, creating tools to render sufficiently large and obfuscated networks transparent might be infeasible. If transparency tools can reliably check an 'easily measurable' property of models that guarantees nondeception (e.g. possibly myopia), this might be easier than checking for deceptive instances when the model is potentially already attempting to circumvent such tests.
  2. Generating a gradient away from deception with relaxed adversarial training [AF · GW] seems contingent on whether the overseer (e.g. an amplified [? · GW] human) can inspect the model adequately enough to rule out unacceptable behaviour. Relaxed adversarial training might incentivise a model to become more transparent to the overseer, who could employ transparency tools to check an easily checkable property (e.g. myopia) and query the model for explanations (see the sketch after this list). Overseers might be automated [AF · GW], but this seems risky to me if the overseers are not themselves transparent to humans.
  3. Non-agentic microscope AI [AF · GW] and STEM AI [AF · GW], while ideally not capable of deceptive mesa-optimisation, will likely use complex latent information to accomplish tasks. If we desire to understand the mechanism by which these models produce insight, perhaps to error-check or to learn new knowledge-acquisition techniques, we will have to deal with inaccessible information. Microscope and STEM AI are probably not competitive in domains where AI-enhanced humans are outcompeted by powerful agentic AI, incentivising deployment of the latter, which seems likely to be more inaccessible by default.
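As a toy sketch of the acceptability penalty mentioned in item 2 above (my own simplification of the relaxed adversarial training idea; both functions are made-up stand-ins, not a real implementation): the training loss adds a term proportional to the overseer's estimated probability that the model could behave unacceptably, pushing training away from models the overseer cannot rule out.

```python
PENALTY_WEIGHT = 10.0  # assumed weight on the overseer's acceptability judgement

def task_loss(model_params: dict) -> float:
    # Stand-in for the ordinary training loss.
    return model_params.get("task_error", 1.0)

def overseer_unacceptability(model_params: dict) -> float:
    # Stand-in for an amplified overseer's estimate of P(model could act unacceptably).
    return model_params.get("estimated_unacceptability", 0.0)

def relaxed_adversarial_loss(model_params: dict) -> float:
    return task_loss(model_params) + PENALTY_WEIGHT * overseer_unacceptability(model_params)

print(relaxed_adversarial_loss({"task_error": 0.3, "estimated_unacceptability": 0.05}))  # 0.8
```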

Generally solving the problem of inaccessible information seems sufficient to solve a significant part of prosaic AI alignment, provided we can act on this information. If we can elicit true explanations for model output during training and inspect relevant model properties with transparency tools, we can hopefully steer the training process away from deceptive alignment and even catch deception [AF · GW]. If our aligned AI model is competitive against unaligned AIs, the strategy stealing assumption [AF · GW] implies that we can use it to capture flexible influence over the future, even if it has to develop exotic capabilities.

It seems debatable whether generally solving inaccessible information is necessary for prosaic AI alignment if there exists:

However, it seems possible to me that the above safety predicates might in general only provide 'probabilistic' guarantees against unacceptability. For instance, our best transparency tools and training mechanisms might only be able to guarantee a 99% probability that a model possesses a checkable property that almost certainly implies acceptability. As deploying unacceptable AI is a 'high risk' scenario, we would ideally like to have the strongest possible guarantee of acceptability, something like Christiano's ascription universality [AF · GW]: if an overseer understands everything that a model might reasonably know, the overseer is universal with respect to the model and all the model's latent information is accessible. Practically, universality is bounded by the overseer's epistemic model. If no further information could reasonably be found such that a human judge would trust the model over the overseer, the overseer 'epistemically dominates' the model. As yet, there is no formal definition of ascription universality and it is unknown whether universal overseers are possible in practice.
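As a purely illustrative calculation (the 99% figure is from the paragraph above; the number of independently trained models is my own made-up assumption), even a strong per-model probabilistic guarantee can leave a large aggregate chance of deploying at least one unacceptable model:

```python
p_property_holds = 0.99   # per-model guarantee from the hypothetical tools above
n_models = 100            # assumed number of independently trained and checked models

p_at_least_one_failure = 1 - p_property_holds ** n_models
print(round(p_at_least_one_failure, 2))  # ~0.63
```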

MIRI's recently announced Visible Thoughts Project [AF · GW] aims to build a dataset of human explanations for AI dungeon master prompts and train a dungeon master AI to give accessible explanations for its output. Unless steps are taken to address the failure modes detailed in the previous section, it seems unlikely to me that the trained explanations will necessarily converge to true explanations. In particular, a near-term dungeon master AI might converge to the instrumental policy and treat 'humans who read dungeon master explanations' to a convincing fiction in which they roleplay as 'AI inspectors'.

Given the enormous stakes of AI alignment and the apparent ubiquity of inaccessible information in prosaic AI, striving for something close to ascription universality seems important. Improving the self-explanation and transparency of models, and the inspection power of trusted overseers, is critical to universality and will likely benefit acceptability verification even if universality is impractical. I think the feasibility of addressing inaccessible information is a crux as to whether aligning prosaic AI is practical. Therefore, attempting to address inaccessible information will both aid prosaic alignment efforts and offer a possible test to rule out safe prosaic AI.

6 comments

Comments sorted by top scores.

comment by Charlie Steiner · 2021-12-11T21:50:37.137Z · LW(p) · GW(p)

If the information won't fit into human ways of understanding the world, then we can't name what it is we're missing. This always makes examples of "inaccessible information" feel weird to me - like we've cheated by even naming the thing we want as if it's somewhere in the computer, and instead our first step should be to design a system that at all represents the thing we want.

Replies from: ryankidd44
comment by Ryan Kidd (ryankidd44) · 2021-12-13T02:40:40.755Z · LW(p) · GW(p)

I think world model mismatches are possibly unavoidable with prosaic AGI [AF · GW], which might reasonably bias one against this AGI pathway. It seems possible that much of human and AGI world models would be similar by default if 'tasks humans are optimised for' is a similar set to 'tasks AGI is optimised for' and compute is not a performance-limiting factor, but I'm not at all confident that this is likely (e.g. maybe an AGI draws coarser- or finer-grained symbolic Markov blankets). Even if we build systems that represent the things we want and the things we do to get them as distinct symbolic entities in the same way humans do, they might fail to be competitive with systems that build their world models in an alien way (e.g. draw Markov blankets around symbolic entities that humans cannot factor into their world model due to processing or domain-specific constraints).

Depending on how one thinks AGI development will happen (e.g. is the strategy stealing assumption [AF · GW] important?), resolving world model mismatches seems more or less of a priority for alignment. If near-term performance competitiveness heavily influences deployment, I think it's reasonably likely that prosaic AGI is prioritised and world model mismatches occur by default because, for example, compute is likely a performance-limiting factor for humans on tasks we optimise AGI for, or the symbolic entities humans use are otherwise nonuniversal. I think AGI might generally require incorporating alien features into world models to be maximally competitive, but I'm very new to this field.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-12-13T06:25:52.744Z · LW(p) · GW(p)

Yup, this all sounds good to me. I think the trick is not to avoid alien concepts, but to make your alignment scheme also learn ways of representing the world that are close enough to how we want the world to be modeled.

Replies from: ryankidd44
comment by Ryan Kidd (ryankidd44) · 2021-12-13T11:04:23.950Z · LW(p) · GW(p)

I think I agree. To the extent that a 'world model' is an appropriate abstraction, I think the levers to pull for resolving world model mismatches seem to be:

  • Post-facto: train an already capable (prosaic?) AI to explain itself in a way that accounts for world model mismatches via a clever training mechanism [AF · GW] and hope that only accessible consequences matter for preserving human option value; or
  • Ex-ante: build AI systems in an architecturally transparent manner such that properties of their world model can be inspected and tuned, and hope that the training process makes these AI systems competitive.

I think you are advocating for the latter, or have I misrepresented the levers?

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-12-13T13:16:28.838Z · LW(p) · GW(p)

Maybe I don't see a bright line between these things. Adding an "explaining module" to an existing AI and then doing more training is not so different from designing an AI that has an "explaining module" from the start. And training an AI with an "explaining module" isn't so different from training an AI with a "making sure internal states are somewhat interpretable" module.

I'm probably advocating something close to "Ex-ante," but with lots of learning, including learning that informs the AI what features of the world we want it to make interpretable to us.

comment by sage_bergerson · 2022-01-18T01:49:30.017Z · LW(p) · GW(p)

Things I'm confused about:

How can the mechanism by which the model outputs ‘true’ representations of its processing be verified?

Re ‘translation mechanism’: How could a model use language to describe its processing if it includes novel concepts, mechanisms, or objects for which there are no existing examples in human-written text? Can a model fully know what it is doing?

Supposing an AI was capable of at least explaining around or gesturing towards this processing in a meaningful way - would humans be able to interpret these explanations sufficiently such that the descriptions are useful?

Could a model checked for myopia be deceptive in its presentation of being myopic? How do you actually test this?

Things I'm sort of speculating about:

Could you train a model so that it must provide a textual explanation for a policy, which is validated before it proceeds with actually updating its policy, so that essentially every part of its policy is pre-screened and it can only learn to take interpretable actions? Writing this out, though, I see how it devolves into providing explanations that sound great but don't represent what is actually happening.

Could it be fruitful to focus some effort on building quality human-centric-ish world models (maybe even just starting with something like topography) into AI models to improve interpretability (i.e. provide a base we are at least familiar with, off of which they can build in some way, as opposed to having no insight into their world representation)?