Seeking Collaborators

post by abramdemski · 2024-11-01T17:13:36.162Z · LW · GW · 15 comments

Contents

  Summary
  The non-summary
    Motivation
    Tiling Overview
    Reflective Oracles
    Logical Uncertainty
    Value Uncertainty
    Value Plurality
    Ontology Plurality
    Cooperation & Coordination

I've been accepted as a mentor for the next AI Safety Camp. You can apply to work with me on the tiling problem. The goal will be to develop reflectively consistent UDT-inspired decision theories, and try to prove tiling theorems for them.

The deadline for applicants is November 17.

The program will run from January 11 to April 27. It asks for a 10 hour/week commitment.

I am not being funded for this.[1] You can support my work on Patreon.

My project description follows:

Summary

The Tiling Agents problem (aka reflective consistency) consists of analysing when one agent (the "predecessor") will choose to deliberately modify another agent (the "successor"). Usually, the predecessor and successor are imagined as the same agent across time, so we are studying self-modification. A set of properties "tiles" if those properties, when present in both predecessor and successor, guarantee that any self-modifications will avoid changing those properties.
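
As a rough formalization for orientation (the notation here is my own shorthand, not anything standard from the tiling literature): write P for the set of properties and Succ(A, A′) for "agent A, given the opportunity, deliberately constructs or self-modifies into agent A′". Then:

```latex
% Rough formalization (my shorthand, not standard notation):
P \text{ tiles} \;\iff\; \forall A, A' :\; P(A) \,\wedge\, \mathrm{Succ}(A, A') \;\Longrightarrow\; P(A')
```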

You can think of this as the question of when agents will preserve certain desirable properties (such as safety-relevant properties) when given the opportunity to self-modify. Another way to think about it is the slightly broader question: when can one intelligence trust another? The bottleneck for avoiding harmful self-modifications is self-trust; so getting tiling results is mainly a matter of finding conditions for trust.

The search for tiling results has four main motivations, unpacked in the rest of this post. These application areas have a large overlap, and all four seem important.

The non-summary

Motivation

In the big picture, tiling seems like perhaps the single most well-motivated approach to theoretical AI safety: it lets us formally and directly address the question of how humans can justifiably trust AIs. However, extremely few people are working on this approach directly. Indeed, of everyone who has worked on tiling in the past, I am currently the only person I know of who is still doing so.

I think part of this is about timelines. Tiling results remain very theoretical. I am optimistic that significant improvements to tiling results are possible with some focused attention, but I think there is a long way to go before tiling results will be realistic enough to offer concrete advice which helps with the real problems we face. (At least, a long way to go to yield “non-obvious” advice.)

However, it still seems to me like more people should be working on this approach overall. There are many unpublished results and ideas, so my aim with this project is to get some people up to speed and in a position to make progress.

Tiling Overview

The basic idea of tiling is that an agent architecture is “no good” in some sense if we can show that an agent designed according to that architecture would self-modify to something which does not match the architecture, given the choice. This is a highly fruitful sort of coherence requirement to impose, in the sense that it rules out a lot of proposals, including most existing decision theories.

One motivation for this criterion is for building robust AI systems: if an AI system would remove its safety constraints given the chance, then we are effectively in a fight with it (we have to avoid giving it a chance). Hence, even if we don’t plan to allow the AI to self-modify, it seems wise to build safety precautions in a way which we can show to be self-preserving. This perspective on tiling focuses on AI-AI tiling.

A broader motivation is to study the structure of trust. AI alignment is, from this perspective, the study of how to build trustworthy systems. Tiling is the study of how one mindlike thing can trust another mindlike thing. If we can make tiling theorems sufficiently realistic, then we can derive principles of trust which can provide guidance about how to create trustworthy systems in the real world. This perspective focuses on human-AI tiling.

A third motivation imposes tiling as a constraint on decision theories, as a way of trying to understand rationality better, in the hopes of subsequently using the resulting decision theory to guide our decision-making (eg, with respect to AI risk). Tiling appears to be a rather severe requirement for decision theories, so (provided one is convinced that tiling is an important consistency requirement) it weeds out a lot of bad answers that might otherwise seem good. Novel decision-theoretic insights might point to crucial considerations which would otherwise have been missed.

Reflective Oracles

The first big obstacle to a theorem showing when Bayesian-style agents can be reflectively stable was the difficulty of getting Bayesian agents to represent themselves at all. Tiling was first studied in the setting of formal logic, because logic has thoroughly researched tools for self-reference. Probability theory lacked such tools.

MIRI research carried out by Jessica Taylor, Paul Christiano, and Benja Fallenstein gave rise to the necessary tools to represent Bayesian agents capable of having hypotheses about themselves and other agents similar to themselves: Reflective Oracles (and very similarly, Paul’s reflective probability distribution). 
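
Roughly, a reflective oracle O answers questions about the output probabilities of probabilistic oracle machines that may themselves call O, including questions about themselves. Paraphrasing the defining property (not the papers' exact formulation):

```latex
% For a probabilistic oracle machine M (which may query O) outputting a bit,
% and a rational p:
O(M, p) = 1 \;\text{ if }\; \Pr\!\left[M^{O}() = 1\right] > p,
\qquad
O(M, p) = 0 \;\text{ if }\; \Pr\!\left[M^{O}() = 1\right] < p,
% with either (possibly randomized) answer allowed in the knife-edge case
% \Pr[M^{O}() = 1] = p.
```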

However, these conceptual tools solve the problem by specifying a computational complexity class in which a sort of “prophecy” is possible: computations can look ahead to the output of any computation, including themselves. We don’t live in a universe in which this flavor of computation appears feasible.

As a consequence, tiling results based on Reflective Oracles would not solve the “Vingean Reflection” problem: they do not show us how it is possible to trust another agent without being able to reason about that agent’s future actions in detail. It is easy to trust someone if you can anticipate everything they will do with precision; unfortunately, that is not at all realistic.

Vingean Reflection is a crucial desideratum to keep in mind when looking for tiling results.

Logical Uncertainty

We call the more realistic domain, where an agent does not have the ability to run arbitrary computations to completion, “logical uncertainty” (although in hindsight, “computational uncertainty” would have been a clearer name).

MIRI’s Logical Induction (primarily due to Scott Garrabrant) gives us a mathematical framework which slightly generalizes Bayesianism to address logical uncertainty in a more suitable way. This gives rise to a “bounded rationality” perspective, where agents are not perfectly rational, but avoid a given class of easily recognizable inconsistencies.
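
Roughly, the logical induction criterion says the following (a paraphrase of Garrabrant et al. 2016, with the definitions of "trader" and "exploits" omitted):

```latex
% A market \overline{\mathbb{P}} = (\mathbb{P}_1, \mathbb{P}_2, \ldots) of prices on logical
% sentences satisfies the logical induction criterion relative to a deductive
% process \overline{D} iff no efficiently computable trader exploits it -- i.e.,
% no polynomial-time trading strategy accumulates unboundedly high worth while
% risking only boundedly much, judged across worlds consistent with \overline{D}.
\overline{\mathbb{P}} \text{ is a logical inductor over } \overline{D}
\;\iff\;
\neg\,\exists\, \text{e.c. trader } T \text{ such that } T \text{ exploits } \overline{\mathbb{P}} \text{ relative to } \overline{D}.
```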

We have a limited tiling result [AF · GW] for logical-induction-based decision theory (due to Sam). I hope to significantly improve upon this by building on research into logically uncertain updateless decision theory which Martin Soto and I collaborated on last summer (none of which has been published yet), and also by building on the idea of open-minded updatelessness [LW · GW].

This area contains the clearest open problems and the lowest-hanging fruit for new tiling theorems.

However, I can imagine this area of research getting entirely solved without yet providing significant insight into the human-AI tiling problem (that is, the AI safety problem). My intuition is that it primarily addresses AI-AI tiling, and specifically in the case where the “values” of the AI are entirely pinned down in a strong sense. Therefore, to derive significant insights about AI risks, it seems important to generalise tiling further, including more of the messiness of the real-world problems we face.

Value Uncertainty

Open-minded updatelessness [LW · GW] allows us to align with an unknown prior, in the same way that regular updatelessness allows us to align with a known prior. 

Specifying agents who are aligned with unknown values is the subject of value learning and of Stuart Russell’s alignment program, which focuses on assistance games (CIRL).
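
As a toy illustration of acting under value uncertainty (a minimal sketch only: the reward hypotheses, the Boltzmann-rational human model, and all names below are my own illustrative assumptions, not the CIRL formalism):

```python
# Minimal value-uncertainty sketch: the AI keeps a belief over candidate reward
# functions, updates it on an observed (approximately rational) human choice,
# then acts to maximize expected reward under that belief.
import math

ACTIONS = ["a", "b", "c"]

# Hypothetical candidate reward functions the AI entertains (made-up numbers).
REWARD_HYPOTHESES = {
    "likes_a": {"a": 1.0, "b": 0.0, "c": 0.0},
    "likes_b": {"a": 0.0, "b": 1.0, "c": 0.2},
    "likes_c": {"a": 0.1, "b": 0.0, "c": 1.0},
}

def boltzmann_likelihood(action, reward, beta=3.0):
    """Probability that a softly-rational human with this reward picks `action`."""
    weights = {a: math.exp(beta * reward[a]) for a in ACTIONS}
    return weights[action] / sum(weights.values())

def update(belief, observed_human_action):
    """Bayesian update of the belief over reward hypotheses."""
    posterior = {
        name: belief[name] * boltzmann_likelihood(observed_human_action, reward)
        for name, reward in REWARD_HYPOTHESES.items()
    }
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

def best_action(belief):
    """Maximize expected reward under the current belief over reward functions."""
    def expected_reward(a):
        return sum(belief[name] * REWARD_HYPOTHESES[name][a] for name in belief)
    return max(ACTIONS, key=expected_reward)

belief = {name: 1 / len(REWARD_HYPOTHESES) for name in REWARD_HYPOTHESES}
belief = update(belief, observed_human_action="b")  # the human was seen choosing "b"
print(belief, best_action(belief))
```

Note that this illustrates ordinary value uncertainty with a known prior; the open-minded version discussed above would additionally let the prior/belief structure itself be up for revision.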

If we combine the two, we get a general notion of aligning with unknown preference structures. This gives us a highly general decision-theoretic concept which I hope to formally articulate and study over the next year. This is effectively a new type of uncertainty. You could call it meta-uncertainty, although that’s probably a poor name choice since it could point to so many other things. Perhaps “open-minded uncertainty”?

In particular, with traditional uncertainty, we can model an AI which is uncertain about human values and trying to learn them from humans; however, the humans themselves have to know their own values (as is assumed in CIRL). With open-minded uncertainty, I think there will be a much better picture of AIs aligning with humans who are themselves uncertain about their own values. My suspicion is that this will offer a clear solution to the corrigibility problem.

More generally, I think of this as a step on a trajectory toward stripping away traditional decision-theoretic assumptions and creating an “empty” tiling structure which can tile a broad variety of belief structures, value structures, decision theories, etc. If the AI just wants to do “what the humans would want in this situation” then it can conform to whatever policy humans “would want”, even if it is irrational by some definition.

Another important research thread here is how to integrate the insights of Quantilization; softly optimizing imperfect proxy values seems like a critical safety tool. Infrabayesianism appears to offer important theoretical insight into making quantilizer-like strategies tile.
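
For reference, a minimal sketch of the quantilizer idea (illustrative only; the action set, base distribution, proxy values, and q below are made-up):

```python
# Quantilizer sketch: rather than taking the argmax of an imperfect proxy
# utility, sample from the top-q fraction (by proxy utility) of a trusted base
# distribution. This softens optimization pressure on the proxy.
import random

def quantilize(actions, base_weights, proxy_utility, q=0.1, rng=random):
    """Sample from the top-q fraction of the base distribution, ranked by the proxy."""
    ranked = sorted(actions, key=proxy_utility, reverse=True)  # best-by-proxy first
    total = sum(base_weights[a] for a in actions)
    kept, mass = [], 0.0
    for a in ranked:  # keep actions until q of the base probability mass is covered
        kept.append(a)
        mass += base_weights[a] / total
        if mass >= q:
            break
    weights = [base_weights[a] for a in kept]  # sample within the kept set by base weight
    return rng.choices(kept, weights=weights, k=1)[0]

# Hypothetical example: the proxy wildly overrates an action the base policy rarely takes.
actions = ["safe_default", "mild_improvement", "weird_exploit"]
base_weights = {"safe_default": 0.7, "mild_improvement": 0.29, "weird_exploit": 0.01}
proxy = {"safe_default": 1.0, "mild_improvement": 2.0, "weird_exploit": 100.0}.get
print(quantilize(actions, base_weights, proxy, q=0.3))  # usually "mild_improvement"
```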

Value Plurality

Here’s where we bring in the concerns of Critch’s negotiable reinforcement learning: an agent aligned to multiple stakeholders whose values may differ from each other. Several directions for moving beyond Critch’s result present themselves.

In terms of general approach: I want to take the formalism of logical induction and try to extend it to recognize the “traders” as potentially having their own values, rather than only beliefs. This resembles some of the ideas of shard theory.
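
As a toy sketch of the flavor here (my own illustration, not Critch's formalism, and not yet the "traders with values" extension): the agent maximizes a weighted blend of the stakeholders' utilities, and each stakeholder's weight is scaled by how well their beliefs predicted what was actually observed, as in a bet.

```python
# Toy sketch: weight-blended utilities, with weights updated by each
# stakeholder's predictive accuracy (all numbers and names are made up).
def reweight(weights, likelihoods):
    """Scale each stakeholder's weight by the likelihood their model gave the observation."""
    new = {s: weights[s] * likelihoods[s] for s in weights}
    total = sum(new.values())
    return {s: w / total for s, w in new.items()}

def choose(actions, weights, utilities):
    """Pick the action maximizing the weight-blended utility."""
    return max(actions, key=lambda a: sum(weights[s] * utilities[s][a] for s in weights))

actions = ["x", "y"]
utilities = {"alice": {"x": 1.0, "y": 0.0}, "bob": {"x": 0.0, "y": 1.0}}
weights = {"alice": 0.5, "bob": 0.5}

# An observation arrives; Alice's model assigned it probability 0.9, Bob's 0.2.
weights = reweight(weights, {"alice": 0.9, "bob": 0.2})
print(weights, choose(actions, weights, utilities))  # the blend now tilts toward Alice
```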

Ontology Plurality

Critch’s negotiable RL formalism not only assumes that the beliefs and values of the stakeholders are known; it also assumes that the stakeholders and the AI agent all share a common ontology in which these beliefs and values can be described. To meaningfully address problems such as ontological crisis, we need to move beyond such assumptions, and model “where the ontology comes from” more deeply.

My ideas here are still rather murky and speculative. One thread involves extending the market metaphor behind logical induction, to model new stocks being introduced to the market. Another idea involves modeling the market structure using linear logic.

Cooperation & Coordination

“Value plurality” addresses the many-to-one alignment problem, IE, many stakeholders with one AI serving their interests. To mitigate risk scenarios associated with multipolar futures (futures containing multiple superhuman intelligences), we want to address the more general many-to-many case. This involves seeking some assurance that powerful AIs with differing values will cooperate with each other rather than engage in value-destroying conflict.

The basic picture of multi-agent rationality that we get from game theory appears very pessimistic in this regard. While cooperative equilibria are possible in some situations, those same situations usually support “folk theorems” which show that “almost any” equilibrium is possible, even really bad equilibria where no one gets what they want.
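
A tiny worked example of the point, using the standard discounted iterated Prisoner's Dilemma with textbook payoff values: both universal defection and grim-trigger cooperation are equilibria once players are patient enough, so equilibrium analysis alone does not select the good outcome.

```python
# Closed-form equilibrium check for the discounted iterated Prisoner's Dilemma.
T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation, reward, punishment, sucker payoffs
delta = 0.9                       # discount factor

def discounted(per_round):
    """Total discounted value of receiving `per_round` every round."""
    return per_round / (1 - delta)

# Grim trigger vs. grim trigger: both cooperate forever.
cooperate_forever = discounted(R)
# Best unilateral deviation: defect once, then face punishment (mutual defection) forever.
deviate_once = T + delta * discounted(P)
print("grim-trigger cooperation is an equilibrium:", cooperate_forever >= deviate_once)

# Always-defect vs. always-defect: cooperating for a round only loses value (S < P).
defect_forever = discounted(P)
deviate_to_cooperate = S + delta * discounted(P)
print("mutual defection is an equilibrium:", defect_forever >= deviate_to_cooperate)
```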

Two research threads which interest me here are Nisan’s higher-order game theory [LW · GW] and Payor’s Lemma [? · GW]. Nisan has some unpublished results about cooperation. My hope is that these results can also apply to more realistic formalisms by reinterpreting his hierarchy in terms of iterated policy selection. Meanwhile, Payor’s Lemma offers new hope for getting phenomena similar to Löbian cooperation to be more robust.
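
For reference, the two modal statements being contrasted (where □A reads "A is provable"):

```latex
\textbf{L\"ob's Theorem:} \quad \text{if } \vdash \Box A \rightarrow A, \text{ then } \vdash A.
\qquad
\textbf{Payor's Lemma:} \quad \text{if } \vdash \Box(\Box A \rightarrow A) \rightarrow A, \text{ then } \vdash A.
```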

  1. ^

    I'm still seeking funding, although I have some options I am considering.

15 comments


comment by Towards_Keeperhood (Simon Skade) · 2024-11-02T13:44:46.259Z · LW(p) · GW(p)

Thanks for writing up some of the theory of change for the tiling agents agenda!

I'd be curious about your take on the importance of the Löbian obstacle: I feel like it's important to do this research for aligning full-blown RSI-to-superintelligence, but at the same time it introduces quite a bit of extra difficulty, and I'd be more excited about research (which ultimately aims for pivotal-act-level alignment) where we're fine assuming some "fixed meta level" in the learning algorithm that is nonetheless general enough that the object-level AI can get very powerful. It seems to me that this might make it easier to prove (or heuristically argue) that the AI will end up with some desirable properties.

Relatedly, I feel like Arbital had the categories "RSI" and "KANSI", but AFAICT no clear third category like "unknown-algorithm non-full-self-improving (UANFSI?) AI". (IMO current deep learning clearly fits into this third category, though there might be a lot more out there which would too.) I'm currently working on KANSI AI, but if I weren't, I'd be a bit more excited about (formal) UANFSI approaches than full RSI theory, especially since the latter seems to have been tried more. (E.g. I guess I'd classify Vanessa Kosoy's work as UANFSI, but I haven't looked at it much yet.) (Also, there can still be some self-improvement for UANFSI AIs, but as said, there would be some meta level that would be fixed.)

But possibly I strongly misunderstand something (e.g. maybe the Löbian obstacle isn't that central?).

(In any case I think there ought to be multiple people continuing this line of work.)

Replies from: abramdemski, abramdemski
comment by abramdemski · 2024-11-03T15:30:23.972Z · LW(p) · GW(p)

I have lost interest in the Löbian approach to tiling, because probabilistic tiling results seem like they can be strong enough, with much less suspicious-looking solutions. Expected value maximization is a better way of looking at agentic behavior anyway. Trying to logically prove some safety predicate for all actions seems like a worse paradigm than trying to prove some safety properties for the system overall (including proving that those properties tile under as-reasonable-as-possible assumptions, plus sanity-checking what happens when those assumptions aren't precisely true).

I do think Löb-ish reasoning still seems potentially important for coordination and cooperation, which I expect to feature in important tiling results (if this research program continues to make progress). However, I am optimistic about replacing Löb's Theorem with Payor's Lemma in this context.

I don't completely discount the pivotal-act approach, but I am currently more optimistic about developing safety criteria & designs which could achieve some degree of consensus amongst researchers, and make their way into commercial AI, perhaps through regulation.

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-11-03T18:03:57.557Z · LW(p) · GW(p)

Thanks!

(I don't fully understand yet what results you're aiming for, but yeah, it makes sense that probabilistic guarantees make some stuff more feasible. Not sure whether there might be more relaxations I'd be fine with at least initially making.)

comment by abramdemski · 2024-11-03T15:31:53.857Z · LW(p) · GW(p)

How would you become confident that a UANFSI approach was NFSI?

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-11-03T18:10:19.160Z · LW(p) · GW(p)

Idk, that could be part of finding heuristic arguments for desirable properties of what a UANFSI converges to. Possibly it's easier to provide probabilistic convergence guarantees for systems that don't do FSI, so this would already give some implicit evidence. But we could also just say that it's fine if FSI happens as long as we have heuristic convergence arguments - like, UANFSI just allows a broader class of algorithms, which might make stuff easier - though I mostly don't expect we'd get FSI alignment through this indirect alignment path from UANFSI, but rather that we'd get an NFSI AI if we get some probabilistic convergence guarantees.

(Also I didn't think much about it at all. As said I'm trying KANSI for now.)

Replies from: abramdemski
comment by abramdemski · 2024-11-04T17:11:59.482Z · LW(p) · GW(p)

I think there are some deeper insights around inner optimization that you are missing that would make you more pessimistic here. "Unknown Algorithm" to me means that we don't know how to rule out the possibility of inner agents which have opinions about recursive self-improvement. Part of it is that we can't just think about what it "converges to" (convergence time will be too long for interesting learning systems).

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-11-04T17:36:43.348Z · LW(p) · GW(p)

Hm interesting. I mean I'd imagine that if we get good heuristic guarantees for a system it would basically mean that all the not-perfectly-aligned subsystems/subsearches are limited and contained enough that they won't be able to engage in RSI. But maybe I misunderstand your point? (Like maybe you have specific reason to believe that it would be very hard to predict reliably that a subsystem is contained enough to not engage in RSI or so?)

(I think inner alignment is very hard and humans are currently not (nearly?) competent enough to figure out how to set up training setups within two decades. Like, to be able to get good heuristic guarantees, I think we'd need to figure out at least something sorta like the steering subsystem which tries to align the human brain, only better, because it's not good enough for smart humans I'd say. (Though Steven Byrnes' agenda is perhaps a UANFSI approach that might sorta have a shot, because it might open up possibilities of studying in more detail how values form in humans. Though it's a central example of what I was imagining when I coined the term.))

comment by Ryan Kidd (ryankidd44) · 2024-11-10T01:33:08.054Z · LW(p) · GW(p)

Can you make a Manifund.org grant application if you need funding?

comment by Johannes C. Mayer (johannes-c-mayer) · 2024-11-04T23:17:20.208Z · LW(p) · GW(p)

The way I would approach this problem (after not much thought): Come up with a concrete system architecture A of a maximizing computer program that has an explicit utility function and is known to behave optimally. E.g. maybe it plays tic-tac-toe or four-in-a-row optimally.

Now mutate the source code of A slightly, such that it is no longer optimal, to get a system B. The objective is not modified. B still "wants" to basically be A, in the sense that if it is a general enough optimizer and has access to self-modification facilities, it would try to make itself into A, because A is better at optimizing the objective.

I predict that by creating a setup where the delta between B and A is small, you can create a tractable problem without sidestepping the core bottlenecks; i.e. solving "correct self-modification" for a small delta between A and B seems like it requires solving some hard part of the problem. Once you've solved it, increase the delta and solve it again.

Unsure about the exact setup for giving the systems the ability to self-modify. I intuit that one can construct a toy setup that generates good insight, such that B doesn't actually need to be that powerful, or that general an optimizer.

comment by Towards_Keeperhood (Simon Skade) · 2024-11-03T18:23:02.650Z · LW(p) · GW(p)

How bottlenecked is your agenda by philosophy skills (like being good at thought experiments for deriving stuff like UDT, or like being good at figuring out the right ontology for thinking about systems or problems) vs math skill vs other stuff?

Replies from: abramdemski
comment by abramdemski · 2024-11-04T17:24:12.388Z · LW(p) · GW(p)

I think I've articulated a number of concrete subgoals that require less philosophical skill (they can be approached as math problems). However, in the big picture, novel tiling theorems require novel ideas. This requires philosophical skill.

comment by Towards_Keeperhood (Simon Skade) · 2024-11-02T13:12:47.982Z · LW(p) · GW(p)

You could also ask the SERI MATS team whether they would still accept you as a mentor for the coming winter cohort (and allow you an extended application phase where people can apply until mid-November or so).

(I'd guess you might get slightly better candidates there, though I think applicant quality has generally degraded somewhat since the first cohorts.) EDIT: I do not think this anymore.

Replies from: ryankidd44
comment by Ryan Kidd (ryankidd44) · 2024-11-10T03:54:02.722Z · LW(p) · GW(p)

@abramdemski [LW · GW] DM me :)

Replies from: Simon Skade
comment by Towards_Keeperhood (Simon Skade) · 2024-11-10T16:29:21.473Z · LW(p) · GW(p)

(haha cool. perhaps you could even PM Abram if he doesn't PM you. I think it would be pretty useful to speed up his agenda through this.)

comment by Kabir Kumar (kabir-kumar) · 2024-12-08T20:44:16.712Z · LW(p) · GW(p)

Thank you for continuing this very important work.