Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024)

post by mattmacdermott · 2024-09-01T07:46:26.647Z

This is a link post for https://yoshuabengio.org/2024/08/29/bounding-the-probability-of-harm-from-an-ai-to-create-a-guardrail/

Yoshua Bengio wrote a blogpost about a new AI safety paper by him, various collaborators, and me. I've pasted the text below, but first here are a few comments from me aimed at an AF/LW audience.

The paper is basically maths plus some toy experiments. It assumes access to a Bayesian oracle that can infer a posterior over hypotheses given data, and can also estimate probabilities for some negative outcome ("harm"). It proposes some conservative decision rules one could use to reject actions proposed by an agent, and proves probabilistic bounds on their performance under appropriate assumptions.
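
To make the setup concrete, here's a minimal sketch of what one such conservative decision rule could look like, taking the oracle's outputs as given. The interface (`posterior`, `harm_prob`) and the thresholds are invented for illustration; this is the flavour of rule the paper studies, not its exact construction.

```python
# A minimal sketch of a conservative decision rule, assuming we already have
# the Bayesian oracle. `posterior` and `harm_prob` are hypothetical stand-ins
# for the oracle's outputs; the particular rule and thresholds are
# illustrative, not the paper's exact construction.

def should_reject(hypotheses, posterior, harm_prob, action, context,
                  harm_threshold=0.01, plausibility_cutoff=0.05):
    """Veto `action` if any sufficiently plausible hypothesis predicts too much harm.

    hypotheses          -- iterable of candidate world-models
    posterior           -- dict: hypothesis -> posterior probability given past data
    harm_prob           -- callable: (hypothesis, action, context) -> P(harm)
    harm_threshold      -- maximum tolerated harm probability
    plausibility_cutoff -- posterior mass below which a hypothesis is ignored
    """
    # Cautious bound: the worst-case harm probability over hypotheses that
    # still retain non-negligible posterior mass.
    cautious_bound = max(
        (harm_prob(h, action, context)
         for h in hypotheses if posterior[h] >= plausibility_cutoff),
        default=0.0,
    )
    return cautious_bound > harm_threshold
```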

I expect the median reaction in these parts to be something like: ok, I'm sure there are various conservative decision rules you could apply using a Bayesian oracle, but isn't obtaining a Bayesian oracle the hard part here? Doesn't that involve various advances, e.g. solving ELK to get the harm estimates?

My answer to that is: yes, I think so. And I think Yoshua would probably agree.

Probably the main interest of this paper to people here is to provide an update on Yoshua's research plans. In particular it gives some more context on what the "guaranteed safe AI" part of his approach might look like -- design your system to do explicit Bayesian inference, and make an argument that the system is safe based on probabilistic guarantees about the behaviour of a Bayesian inference machine. This is in contrast to more hardcore approaches that want to do formal verification by model-checking. You should probably think of the ambition here as more like "a safety case involving proofs" than "a formal proof of safety".


Bounding the probability of harm from an AI to create a guardrail

Published 29 August 2024 by yoshuabengio

As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use that knowledge to mitigate them. Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees, i.e., would be provably unlikely to take a harmful action?

Current AI safety evaluations and benchmarks test the AI for cases where it may behave badly, e.g., by providing answers that could yield dangerous misuse. That is useful and should be legally required with flexible regulation, but is not sufficient. These tests only tell us one side of the story: if they detect bad behavior, a flag is raised and we know that something must be done to mitigate the risks. However, if they do not raise such a red flag, we may still have a dangerous AI on our hands, especially since the testing conditions might be different from the deployment setting, and attackers (or an out-of-control AI) may be creative in ways that the tests did not consider. Most concerningly, AI systems could simply recognize they are being tested and have a temporary incentive to behave appropriately while being tested. Part of the problem is that such tests are spot checks. They are trying to evaluate the risk associated with the AI in general by testing it on special cases. Another option would be to evaluate the risk on a case-by-case basis and reject queries or answers that could potentially violate our safety specification.

With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we thus consider in this new paper (see reference and co-authors below) the objective of estimating a context-dependent upper bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at run-time to provide a guardrail against dangerous actions of an AI.

There are in general multiple plausible hypotheses that could explain past data and make different predictions about future events. Because the AI does not know which of these hypotheses is right, we derive bounds on the safety violation probability predicted under the true but unknown hypothesis. Such bounds could be used to reject potentially dangerous actions. Our main results involve searching for cautious but plausible hypotheses, obtained by a maximization involving Bayesian posteriors over hypotheses, assuming a sufficiently broad prior. We consider two forms of this result: the commonly considered iid case (where examples arrive independently from a distribution that does not change with time) and the more ambitious but more realistic non-iid case. We then show experimental simulations with results consistent with the theory, on toy settings where the Bayesian calculations can be carried out exactly, and conclude with open problems towards turning such theoretical results into practical AI guardrails.
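
To give a concrete flavour of such exactly-computable toy settings, here is a small sketch: an exact posterior over a finite hypothesis set, combined with a cautious harm estimate obtained by maximizing over hypotheses that remain plausible under that posterior. The particular cutoff rule and the numbers are illustrative only, not the paper's precise bound.

```python
# Toy illustration: exact Bayesian posterior over a finite hypothesis set,
# plus a cautious harm estimate via a maximization over plausible hypotheses.
# The cutoff rule below is one simple illustrative choice, not the paper's bound.

import numpy as np

def exact_posterior(prior, likelihoods):
    """Bayes' rule over a finite hypothesis set.

    prior       -- shape (H,), prior probability of each hypothesis
    likelihoods -- shape (H,), P(observed data | hypothesis)
    """
    unnormalised = prior * likelihoods
    return unnormalised / unnormalised.sum()

def cautious_harm_estimate(posterior, harm_probs, plausibility=0.05):
    """Worst harm probability among hypotheses keeping at least `plausibility`
    posterior mass. Whenever the true hypothesis keeps that much mass, this
    upper-bounds the harm probability under the true (unknown) hypothesis."""
    plausible = posterior >= plausibility
    candidates = harm_probs[plausible] if plausible.any() else harm_probs
    return float(np.max(candidates))

# Example: three hypotheses, a uniform prior, and some observed-data likelihoods.
prior = np.array([1/3, 1/3, 1/3])
likelihoods = np.array([0.30, 0.10, 0.02])    # P(data | hypothesis)
harm_probs = np.array([0.001, 0.200, 0.900])  # P(harm | hypothesis, action)

post = exact_posterior(prior, likelihoods)        # ~[0.714, 0.238, 0.048]
print(cautious_harm_estimate(post, harm_probs))   # 0.2: the worst hypothesis is too implausible to count
```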

Can a Bayesian Oracle Prevent Harm from an Agent? By Yoshua Bengio, Michael K. Cohen, Nikolay Malkin, Matt MacDermott, Damiano Fornasiere, Pietro Greiner and Younesse Kaddar, in arXiv:2408.05284, 2024.

This paper is part of a larger research program (with initial thoughts already shared in this earlier blog post) that I have undertaken with collaborators and that asks the following question: If we could leverage recent advances in machine learning and amortized probabilistic inference with neural networks to get good Bayesian estimates of conditional probabilities, could we obtain quantitative guarantees regarding the safety of the actions proposed by an AI? The good news is that as the amount of computational resources increases, it is possible to make such estimators converge towards the true Bayesian posteriors. Note how this does not require asymptotic data, but “only” asymptotic compute. In other words, whereas most catastrophic AI scenarios see things getting worse as the AI becomes more powerful, such approaches may leverage increases in computational resources to improve safety (or obtain tighter safety bounds).
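
As a rough illustration of the "asymptotic compute, not asymptotic data" point, here is a toy sketch of amortized inference: a simple predictor is trained only on (hypothesis, data) pairs simulated from the prior and likelihood, and approaches the exact posterior as training compute grows. The toy model and estimator are invented for illustration and are not taken from the paper.

```python
# Sketch of amortized inference on a toy model: only simulation/training
# compute -- not real-world data -- is consumed, and more compute brings the
# predictor closer to the exact posterior. The model (two coin biases) and the
# logistic predictor are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
PRIOR = np.array([0.5, 0.5])     # two hypotheses about a coin's bias
BIASES = np.array([0.3, 0.7])
N_FLIPS = 10

def simulate_pair():
    """Draw a hypothesis from the prior, then data from its likelihood."""
    h = rng.choice(2, p=PRIOR)
    heads = rng.binomial(N_FLIPS, BIASES[h])
    return heads, h

# Amortized estimator: logistic regression mapping the (centred) head count to
# P(h = 1 | data). For this toy model the exact posterior log-odds are linear
# in the head count, so this family can represent the exact posterior.
w, b = 0.0, 0.0
for _ in range(50_000):                      # more steps = more compute = better fit
    heads, h = simulate_pair()
    z = heads - N_FLIPS / 2                  # centred feature for stable SGD
    p = 1.0 / (1.0 + np.exp(-(w * z + b)))
    w -= 0.01 * (p - h) * z                  # SGD on the cross-entropy loss
    b -= 0.01 * (p - h)

# Compare the amortized estimate with the exact posterior for one observation.
heads_obs = 7
lik = BIASES**heads_obs * (1 - BIASES)**(N_FLIPS - heads_obs)
exact = (PRIOR * lik) / (PRIOR * lik).sum()
amortized = 1.0 / (1.0 + np.exp(-(w * (heads_obs - N_FLIPS / 2) + b)))
print(f"exact P(bias=0.7 | data) = {exact[1]:.3f}, amortized = {amortized:.3f}")
```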

The above paper leaves open a lot of challenging questions, and we need more researchers digging into them (more details and references in the paper).
