AI Services as a Research Paradigm

post by VojtaKovarik · 2020-04-20T13:00:40.276Z · LW · GW · 12 comments

This is a link post for https://docs.google.com/document/d/1Jk2GgJnF9pLIQqg9hgX0Tvdukq6MR2qmXW3FyvqiiUg/edit?usp=sharing

Contents

  Introduction
  Motivating ideas
  Contents of the document
  Some of our conclusions

Introduction

I would like to use this post to advertise the document Systems of Services as a Paradigm for AI Alignment. I hope it can serve as a starting point for investigating AI alignment through the lens of systems of AI services. An alternative framing is that the text is a collection of pieces I have found particularly helpful for having productive conversations about the topic. In this post, I briefly recap the motivation behind the document and outline its contents. I also argue for why service systems are a useful paradigm and mention my best guesses about promising future work.

Motivating ideas

As part of a recent collaboration, we wanted to look over technical problems in Comprehensive AI Services (CAIS [LW · GW]), pick one of them, and study it in more detail. However, it soon turned out that a lot more conceptual work was needed before we could formalize any of our thoughts in a useful way. We did study Drexler's Reframing Superintelligence prior to embarking on this project, and the text indeed provides a lot of useful insights into the nature of advanced systems of AI services. However, it puts less emphasis on how to use the service framework for technical research. So from our starting position, we weren't sure how to define the basic concepts, how to model problems in CAIS, or what these problems even are.

We also decided to focus on a slightly different framing, mostly in two ways. First, Reframing Superintelligence primarily discusses comprehensive AI services, which are expected to be on par with artificial general intelligence. Since we can reasonably expect AI X-risk to be associated with any AI that is radically transformative, not just AGI-equivalent services, we wanted to deal with a broader class of systems. Second, it is likely that even very advanced AI services will be deeply entangled with services that are "implemented on humans" (especially since we can view human organizations and institutions as clusters of services). For this reason, the document studies hybrid systems of services that consist of both AI services and human services. Even if our interest is primarily in AI services, studying the whole system (including its non-AI parts) should make it easier to fully understand their impact on the world.

Contents of the document

While we did eventually make some progress on more specific problems, I thought it might be useful to write down a separate "introductory" text summarizing the things-that-seem-essential about the framework of service systems. Here is the document's slightly annotated table of contents:

  1. Introduction (motivating the approach; skippable if you have read this post)
  2. Basic Concepts (explaining what we mean by services, tasks, and systems of services, defining several useful concepts, giving many examples)
  3. Modelling Approaches (describing different types of models we might later want to come up with for service systems; introducing a "simple abstract model" for this framework; a minimal illustrative sketch follows this list)
  4. Research Questions ((a) how to think about, and generate lists of, problems and other research tasks for this paradigm; (b) a list of research questions, not exhaustive but reasonably wide)
  5. Related Fields of Study (a list of existing research fields that seem particularly relevant)
  6. Research Suggestions (references for further reading, our guesses for what might be valuable to investigate)
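
To give a rough flavour of the "simple abstract model" mentioned in item 3 above, here is a minimal sketch in Python, assuming the view (also reflected in Rohin's summary below) that each service takes some inputs, produces some outputs, is evaluated by some loss, and services - whether AI or human - are wired together into a larger system. All names, fields, and the toy wiring here are my own illustrative assumptions, not notation from the document.

```python
# A minimal sketch of a "system of services", under the assumption that a
# service maps inputs to outputs and is evaluated by some loss, and that
# services (AI-implemented or human-implemented) are wired into a system.
# All names here are illustrative, not the document's notation.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class Service:
    name: str
    implemented_by: str  # "AI" or "human" -- hybrid systems contain both
    perform: Callable[[dict], dict]  # maps task inputs to outputs
    loss: Callable[[dict, dict], float]  # evaluates outputs against a target


@dataclass
class ServiceSystem:
    services: Dict[str, Service] = field(default_factory=dict)
    # Directed edges: which service consumes which other service's outputs.
    wiring: List[Tuple[str, str]] = field(default_factory=list)

    def add(self, service: Service) -> None:
        self.services[service.name] = service

    def connect(self, producer: str, consumer: str) -> None:
        self.wiring.append((producer, consumer))

    def run(self, entry: str, inputs: dict) -> dict:
        # Naive forward pass: call the entry service, then any direct consumers.
        outputs = self.services[entry].perform(inputs)
        for producer, consumer in self.wiring:
            if producer == entry:
                outputs = self.services[consumer].perform(outputs)
        return outputs


# Toy usage: a "house-building" task split across a human and an AI service.
ask_client = Service(
    name="requirements",
    implemented_by="human",
    perform=lambda x: {"spec": f"house with {x.get('rooms', 3)} rooms"},
    loss=lambda out, target: 0.0,
)
build_house = Service(
    name="construction",
    implemented_by="AI",
    perform=lambda x: {"result": f"built: {x['spec']}"},
    loss=lambda out, target: 0.0 if out == target else 1.0,
)

system = ServiceSystem()
system.add(ask_client)
system.add(build_house)
system.connect("requirements", "construction")
print(system.run("requirements", {"rooms": 4}))
```

This is only meant to show the shape of the formalism (services as input-output nodes with losses, composed into a graph); the document's own abstract model is more general than this toy version.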

Some of our conclusions

We think the paradigm of systems of services deserves further attention. In terms of technical alignment research, it might expose some new problems and provide a new point of view on existing ones. Moreover, AI currently takes the form of services rather than agents. As a result, the service-system paradigm might be more suitable for communicating with a wider audience (e.g., the public, less technical fields like AI policy, and AI researchers from outside of the long-termist community).

We have no definitive answers for what work needs to be done in this area, but some of the useful directions seem to be:


Acknowledgement: This document was written by me (Vojta), based on ideas I collected from a collaboration with Cara Selvarajah, Chris van Merwijk, Francisco Carvalho, Jan Kulveit, and Tushant Jha during the 2019/2020 AI Safety Research Program. While they gave a lot of feedback on the text and many of the ideas are originally theirs, they might not necessarily agree with all arguments and framings presented here. All mistakes are mine.

12 comments

Comments sorted by top scores.

comment by Rohin Shah (rohinmshah) · 2020-04-20T22:26:39.874Z · LW(p) · GW(p)

Thoughts while reading:

To reduce potential duplication of effort, we highlight several existing research fields that can be utilized to address these problems (Section 6). We conclude with suggestions for future research (Section 7).

Sections 5 and 6, not 6 and 7

Informally, we can assume that some description of the world is given by context and view a task as something specified by an initial state and an end state (or states) - accomplishing the task amounts to causing a transformation from the starting state to one of the desired end states.

I feel like this definition is not capturing what I mean by a "task". Many "agent-like" things, such as "become supreme ruler of the world", seem like tasks according to this definition; many useless things like "twitching randomly" can be thought of as completing a "task" as defined here and so would be counted as "services".

(I don't have a much better definition, but I prefer to just use the intuitive notion of "task" rather than the definition you have.)

A related challenge is the frame problem - identifying which consequences of accomplishing a task are important, and which can be ignored.

I'm pretty sure that's not what the frame problem is. The frame problem is that by default you ignore consequences of your actions, and so you have to arduously specify all of the things that shouldn't change.

Finally, this generates some utilities for the humans.

I'm surprised this matters in this model, continuing to read... ah okay, this model isn't being used anywhere.

List of Research Questions

Looking at these, I feel like they are subquestions of "how do you design a good society that can handle technological development" -- most of it is not AI-specific or CAIS-specific.

Replies from: MichaelDennis, VojtaKovarik, VojtaKovarik
comment by MichaelDennis · 2020-04-21T22:22:07.858Z · LW(p) · GW(p)
Looking at these, I feel like they are subquestions of "how do you design a good society that can handle technological development" -- most of it is not AI-specific or CAIS-specific.

For me this is the main point of CAIS. It reframes many AI Safety problems in terms of "make a good society" problems, but now you can consider scenarios involving only AI. We can start to answer the question of "how do we make a good society of AIs?" with the question "How did we do it with humans?". It seems like human society did not have great outcomes for everyone by default. Making human society function took a lot of work, and failed a lot of times. Can we learn from that and make AI Society fail less often or less catastrophically?

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2020-04-22T20:33:14.748Z · LW(p) · GW(p)

Yeah to be clear I do think it is worth it for people to be thinking about these problems from this perspective; I just don't think they need to be AI researchers.

Replies from: MichaelDennis
comment by MichaelDennis · 2020-04-23T18:43:11.153Z · LW(p) · GW(p)

Yeah, I understand that. My point is that just as human society didn't work by default, systems of AI won't work by default, and the interventions that will be needed will require AI researchers. That is, it's not just about setting up laws, norms, contracts, and standards for managing these systems. It is about figuring out how to make AI systems which interact with each other the way that humans do in the presence of laws, norms, standards, and contracts. Someone who is not an AI researcher would have no hope of solving this, since they cannot understand how AI systems will interact and cannot offer appropriate interventions.

Replies from: VojtaKovarik
comment by VojtaKovarik · 2020-04-24T10:27:34.184Z · LW(p) · GW(p)

It seems to me like you might each be imagining a slightly different situation.

Not quite certain what the difference is. But it seems like Michael is talking about setting up well the parts of the system that are mostly/only AI. In my opinion, this requires AI researchers, in collaboration with experts from whatever-area-is-getting-automated. (So while it might not fall only under the umbrella of AI research, it critically requires it.) Whereas Rohin, it seems to me, is talking more about ensuring that the (mostly) human parts of society do their job in the presence of automation. For example, how to deal with unemployment when parts of the industry get automated. (And I agree that I wouldn't go looking for AI researchers when tackling this.)

comment by VojtaKovarik · 2020-04-21T02:24:09.472Z · LW(p) · GW(p)

Fixed the wrong section numbers and frame problem description.

Informally, we can assume that some description of the world is given by context and view a task as something specified by an initial state and an end state (or states) - accomplishing the task amounts to causing a transformation from the starting state to one of the desired end states.

I feel like this definition is not capturing what I mean by a "task". Many "agent-like" things, such as "become supreme ruler of the world", seem like tasks according to this definition; many useless things like "twitching randomly" can be thought of as completing a "task" as defined here and so would be counted as "services".

Could it be that the problem is not in the "task" part but in the definition of "service"? If I consider the task of building me a house that I will like, I can envision a very service-like way of doing that (asking me a bunch of routine questions, selecting a house model accordingly, then proceeding to build it in a cook-book manner by calling on other services). But I can also imagine going about this in a very agent-like manner.

(Also, "twitching randomly" seems like a perfectly valid task, and a twitch-bot as a perfectly valid service. Just a very stupid one that nobody would want to build or pay for. Uhm, probably. Hopefully.)

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2020-04-22T20:31:40.811Z · LW(p) · GW(p)

It seems like what you're trying to get at is some notion of a difference between a service and an agent. My objection is primarily that the specific definitions you chose don't seem to point at the essential differences between a service and an agent. I don't have a strong opinion as to whether the problem is with the definition of "task" or of "service"; just that together they don't seem to point at the right thing.

comment by VojtaKovarik · 2020-04-21T11:26:40.620Z · LW(p) · GW(p)

List of Research Questions

Looking at these, I feel like they are subquestions of "how do you design a good society that can handle technological development" -- most of it is not AI-specific or CAIS-specific.

It is intentional that not all the problems are technical problems - for example, I expect that not tackling unemployment due to AI might indirectly make you a lot less safe (it seems prudent not to be in a civil war, or any war, while you are attempting to finish building AGI). However, you are right that the list might nevertheless be too broad (and too loosely tied to AI).

Anyway: As a smaller point, I feel that most of the listed problems will get magnified as you introduce more AI services, or they might gain important twists. As a larger point: Am I correct to understand you as implying that "technical AI alignment researchers should primarily focus on other problems" (modulo qualifications)? My intuition is that this doesn't follow, or at least that we might disagree on the degree to which this needs to be qualified to be true. However, I have not yet thought about this enough to be able to elaborate more right now :(. A bookmark that seems relevant is the following prompt:

Conditional on your AI system never turning into an agent-like AGI, how is "not dying and not losing some % of your potential utility because of AI" different from "how do you design a good society that can handle the process of more and more things getting automated"?

(This should go with many disclaimers, first among those the fact that this is a prompt, not an implicit statement that I fully endorse.)

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2020-04-22T20:25:13.758Z · LW(p) · GW(p)
Am I correct to understand you as implying that "technical AI alignment researchers should primarily focus on other problems" (modulo qualifications)?

Kind of? I think it's more like "these are indeed problems, and someone should focus on them, but I wouldn't call it technical AI alignment" (and as a result, I wouldn't call people working on them "technical AI alignment researchers"). For many of these problems, if I wanted to find people to work on them, I would not look for AI researchers (and instead look for economists, political theorists, etc).

Like, I kind of wish this document had been written without AI / AI safety researchers in mind.

comment by Rohin Shah (rohinmshah) · 2020-04-20T22:27:29.159Z · LW(p) · GW(p)

Planned summary for the Alignment Newsletter:

The <@CAIS report@>(@Reframing Superintelligence: Comprehensive AI Services as General Intelligence@) suggests that future technological development will be driven by _systems of AI services_, rather than a single monolithic AGI agent. However, there has not been much followup research since the publication of the report. This document posits that this is because the concepts of tasks and services introduced in the report are not amenable to formalization, and so it is hard to do research with them. So, it provides a simple abstract model of a system of services, in which each service takes some inputs, produces some outputs, and is trained with some loss function, and these services are interconnected with each other. (Some of the services could be provided by humans instead of AI systems.) It then lists several research problems that could be tackled under this generic view.

Planned opinion:

I was expecting a list of questions that were specific to AI somehow, but the actual list is very broad and feels to me like subquestions of "how do you design a good society in the face of technological development". For example, it includes unemployment, system maintenance, the potential of blackmail, side-channel attacks, etc.
Replies from: VojtaKovarik
comment by VojtaKovarik · 2020-04-21T01:52:11.754Z · LW(p) · GW(p)

I agree with your points in the suggested summary. However, I feel like they are not fully representative of the text. But, as the author, I might be imagining the version of the document in my head rather than the one I actually wrote :-).

  • My estimate is that after reading it, I would gain the impression that the text revolves around the abstract model, which I don't think is the case (and definitely wasn't the intention).
  • Also, I am not sure if it is intended that your summary doesn't mention the examples and the "classifying research questions" subsection (which seems equally important to me as the list it generates).
  • Finally, from your planned opinion, I might get the impression that the text suggests no technical problems at all. I think that some of them either are technical problems (e.g., undesired appearance of agency, preventing error propagation and correlated failures, "Tools vs Agents" in Section 6) or have important technical components (all the problems listed as related to changes in the environment, system, or users). Although whether these are AI-specific is arguable.

Side-note 1: I also think that most of the classical AI safety problems also appear in systems of AI services (either in individual services, or in "system-wide variants"). But this is only mentioned in the text briefly, since I am not yet fully clear on how to do the translation between agent-like AIs and systems of AI services. (Also, on the extent to which such translation even makes sense.)

Side-note 2: I imagine that many "non-AI problems" might become "somewhat-AI problems" or even "problems that AI researchers need to deal with" once we get enough progress in AI to automate the corresponding domains.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2020-04-22T20:43:22.314Z · LW(p) · GW(p)

Sorry for not checking here before the newsletter went out :/

My estimate is that after reading it, I would gain the impression that the text revolves around the abstract model. Which I thought wasn't the case; definitely wasn't the intention.

Hmm, I didn't mean to imply this.

Also, I am not sure if it is intended that your summary doesn't mention the examples and the "classifying research questions" subsection (which seems equally important to me as the list it generates).

That was somewhat intended -- words are at a premium in the newsletter, so I have to make decisions about what to include. However, given that you find the classification subsection equally important, I'll at least add that in.

Finally, from your planned opinion, I might get the impression that the text suggests no technical problems at all.

That's a fair point, I hadn't realized that.

I've made the following changes to the LW version of the newsletter:

AI Services as a Research Paradigm [AF · GW] (Vojta Kovarik) (summarized by Rohin): The CAIS report (AN #40) suggests that future technological development will be driven by systems of AI services, rather than a single monolithic AGI agent. However, there has not been much followup research since the publication of the report. This document posits that this is because the concepts of tasks and services introduced in the report are not amenable to formalization, and so it is hard to do research with them. So, it provides a classification of the types of research that could be done (e.g. do we consider the presence of one human, or many humans?), a list of several research problems that could be tackled now, and a simple abstract model of a system of services that could be built on in future work.
Rohin's opinion: I was expecting a research paradigm that was more specific to AI, but in reality it is very broad and feels to me like an agenda around "how do you design a good society in the face of technological development". For example, it includes unemployment, system maintenance, the potential of blackmail, side-channel attacks, prevention of correlated errors, etc. None of this is to say that the problems aren't important -- just that given how broad they are, I would expect that they could be best tackled using many different fields, rather than being important for AI researchers in particular to focus on.