Comments

Comment by Ryan Beck (ryan-beck) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-21T15:44:20.581Z · LW · GW

Thanks, I appreciate you taking the time to answer my questions. I'm still skeptical that it could work like that in practice, but I also don't understand AI, so thanks for explaining that possibility to me.

Comment by Ryan Beck (ryan-beck) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-20T03:24:01.481Z · LW · GW

Yeah, that's what I'd like to know: would an AI built on a number format that has a default maximum pursue numbers higher than that maximum, or would it be "fulfilled" just by getting its reward number as high as the number format it's using allows?

Comment by Ryan Beck (ryan-beck) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-19T19:01:42.262Z · LW · GW

Sorry, I'm using informal language; I don't mean it actually "cares" and I'm not trying to anthropomorphize. I mean "care" in the sense of: how does it actually know that it's achieving a goal in the world, and why would it pursue that goal instead of just modifying the signals from its sensors in a way that appears to satisfy its goal?

In the stamp collector example, why would an extremely intelligent AI bother creating all those stamps when its simulations show that just tweaking its own software or hardware would make the signals it receives the same as if it had created all those stamps? That seems much easier than actually turning matter into a bunch of stamps.

Comment by Ryan Beck (ryan-beck) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-19T13:15:31.592Z · LW · GW

My use of "reward" was just shorthand for whatever signals it needs to receive to consider its goal met. At some point it has to receive electrical signals to register that its goal is met, right? So why wouldn't it just manipulate those electrical signals to match whatever its goal is?

Comment by Ryan Beck (ryan-beck) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-19T13:12:38.332Z · LW · GW

How do you actually define its utility function over the state of the world? At some point the AI has to interpret the state of the world through electrical signals from sensors, so why wouldn't it be satisfied with manipulating those sensor signals to achieve its goal/reward?

Comment by Ryan Beck (ryan-beck) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-19T13:11:18.761Z · LW · GW

I'm confused about why it cares about m if it can just manipulate its perception of what m is. Take your chess example: if m is which player wins at the end, the AI system "understands" m via an electrical signal. So what makes it care about m itself as opposed to just manipulating the electrical signal? In practice I would think it would take the path of least resistance, which for something simple like chess would probably just be m itself rather than manipulating the electrical signal, but for my more complex scenario it seems like it would arrive at 2) before 1). What am I missing?

Comment by Ryan Beck (ryan-beck) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-19T13:07:26.360Z · LW · GW

Your last paragraph is really interesting and not something I'd thought much about before. In practice, is it likely to be unbounded? In a typical computer system, aren't number formats usually bounded, and if so, would we expect an AI system to be using bounded numbers even if the programmers forgot to explicitly bound the reward in the code?
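
For what it's worth, here's a minimal illustration of what I mean by bounded number formats (just a toy sketch using NumPy, nothing specific to any real AI system): a reward stored in a fixed-width format saturates or wraps around instead of growing forever, while a plain Python integer is effectively unbounded.

```python
import numpy as np

# A reward stored as a 32-bit float has a hard ceiling around 3.4e38;
# pushing past it just saturates to infinity.
print(np.finfo(np.float32).max)                   # 3.4028235e+38
print(np.float32(3.4e38) * np.float32(10))        # inf (overflow)

# A 64-bit integer reward wraps around instead of growing.
print(np.array([2**63 - 1], dtype=np.int64) + 1)  # [-9223372036854775808]

# Python's built-in int is arbitrary precision, so a reward kept as a plain
# Python int has no such ceiling (it's limited only by memory).
print(2**63 + 1)                                  # 9223372036854775809
```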

Comment by Ryan Beck (ryan-beck) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T20:25:22.729Z · LW · GW

But wouldn't it be way easier for a sufficiently capable AI to make itself think that what's happening in m is what aligns with its reward function? Maybe not for something simple like chess, but if the goal requires doing something significant in the real world, it seems like it would be much easier for a superintelligent AI to fake the inputs to its sensors than to intervene in the world. If we're talking about paperclips or whatever, the AI can either 1) build a bunch of factories and convert all different kinds of matter into paperclips, while fighting off humans who want to stop it, or 2) fake sensor data to give itself the reward, or just change its reward function to something much simpler that receives the reward all the time. I'm having a hard time understanding why 1) would ever happen before 2).

Comment by Ryan Beck (ryan-beck) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T17:46:43.419Z · LW · GW

I don't see how this gets around wireheading. If it's superintelligent enough to substantially increase the number of paperclips in the world in a way that humans can't stop, it seems to me like it would be pretty trivial for it to fake how large m appears to its reward function, and that would be substantially easier than trying to increase m in the actual world.

Comment by Ryan Beck (ryan-beck) on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-11T14:24:03.638Z · LW · GW

I'm way out of my depth here, but my thought is that it's very common for humans to want to modify their utility functions. For example, a struggling alcoholic would probably love to not value alcohol anymore. There are lots of other examples, too, of people wanting to modify their personalities or bodies.

It depends on the type of AGI too, I would think. If superhuman AI ends up being like a paperclip maximizer that's just really good at following its utility function, then yeah, maybe it wouldn't mess with its utility function. If superintelligence means it has emergent characteristics like opinions and self-reflection or whatever, it seems plausible it could want to modify its utility function, say after thinking about philosophy for a while.

Like I said I'm way out of my depth though so maybe that's all total nonsense.

Comment by Ryan Beck (ryan-beck) on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-08T20:38:20.602Z · LW · GW

Thanks for this answer, that's really helpful! I'm not sure I buy that instrumental convergence implies an AI will want to kill humans because we pose a threat or convert all available matter into computing power, but that helps me better understand the reasoning behind that view. (I'd also welcome more arguments for why killing humans and converting matter into computing power are likely outcomes of self-protection and pursuing whatever utility it's after, if anyone wanted to make that case.)

Comment by Ryan Beck (ryan-beck) on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-08T20:20:45.079Z · LW · GW

That's a good point, and I'm also curious how much the utility function matters when we're talking about a sufficiently capable AI. Wouldn't a superintelligent AI be able to modify its own utility function to whatever it thinks is best?

Comment by Ryan Beck (ryan-beck) on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-08T12:44:40.883Z · LW · GW

Another reason I think some might disagree is that misalignment could happen in a bunch of very mild ways. At least that accounts for some of my ignorant skepticism. Is there reason to think that misalignment necessarily means disaster, as opposed to the AI just doing its own thing and being choosy about which human commands it follows, like some kind of extremely intelligent but mildly eccentric and mostly harmless scientist?

Comment by Ryan Beck (ryan-beck) on Prizes for ELK proposals · 2022-01-25T15:59:50.844Z · LW · GW

I was notified I didn't win a prize, so I figured I'd discuss what I proposed here in case it sparks any other ideas. The short version is that I proposed adding a new head that would be an intentional human simulator. During training it would be penalized for telling the truth that the diamond was gone whenever there existed a lie that the humans would have believed instead; the result would hopefully be a head that acted like a human simulator. Then the actual reporter would be trained so that it would be penalized for using a similar amount of compute to the intentional human simulator, or for looking at similar nodes or node regions as the intentional human simulator. The hope is that by penalizing the reporter for acting like the intentional human simulator, it would be more likely to do direct translation instead of human simulation.
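
To make the shape of that penalty concrete, here's a toy sketch of the kind of combined loss I had in mind. Everything here is hypothetical: the compute figures and node sets are stand-ins for whatever instrumentation would actually measure how much computation each head uses and which predictor nodes it reads from.

```python
def reporter_loss(base_loss, reporter_compute, sim_compute,
                  reporter_nodes, sim_nodes,
                  compute_weight=1.0, overlap_weight=1.0):
    """Toy combined loss for the reporter (all inputs are hypothetical measurements).

    base_loss        -- ordinary question-answering loss for the reporter
    reporter_compute -- compute the reporter used on this batch
    sim_compute      -- compute the intentional human simulator head used
    reporter_nodes   -- set of predictor nodes the reporter read from
    sim_nodes        -- set of predictor nodes the human simulator read from
    """
    # Penalty 1: high when the reporter's compute use looks like the intentional
    # human simulator's, fading out as the gap grows.
    compute_penalty = 1.0 / (1.0 + abs(reporter_compute - sim_compute))

    # Penalty 2: fraction of the reporter's attended nodes that the intentional
    # human simulator also attended to.
    overlap = (len(reporter_nodes & sim_nodes) / len(reporter_nodes)
               if reporter_nodes else 0.0)

    return base_loss + compute_weight * compute_penalty + overlap_weight * overlap


# Toy usage: a reporter that uses very different compute and mostly different
# nodes picks up only a small penalty on top of its answer loss.
print(reporter_loss(base_loss=0.3, reporter_compute=120, sim_compute=400,
                    reporter_nodes={1, 2, 3, 4}, sim_nodes={4, 9}))
```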

This does have at least one counterexample that I proposed as well, which is that the reporter could simply waste compute doing nothing to avoid matching the intentional human simulator, and could look at additional random nodes it doesn't care about to avoid looking like it was looking at the same nodes as the intentional human simulator. Though I thought there was some possibility that having to do these things might end up incentivizing the reporter to act like a direct translator instead of a human simulator.

Although I'm not sure why this wasn't seen as very promising, my guess is that the counterexample is too obvious and my proposal doesn't gain much ground in keeping the reporter from acting like a human simulator, or that someone else has already thought of this approach, or perhaps that my counterexample is too similar to the counterexample to "penalize reporters that work with many different predictors," where the reporter could just pretend not to work with other predictors (it's similar in that the reporter could pretend not to look like the intentional human simulator).

Here's my full submission in google docs with more description: https://docs.google.com/document/d/1Xa4CDLNJ-VPT7hqEUIHlqCsXVeFCYDB5h7Vn3QJ_qpA/edit?usp=sharing

Comment by Ryan Beck (ryan-beck) on Prizes for ELK proposals · 2022-01-13T02:17:09.066Z · LW · GW

I suppose there are a number of examples that work, but I think the robber and vault give the scenario useful breadth.

The following is just my interpretation of it, so take it with a grain of salt. To me, the robber and vault enable a few options: the AI can be passively lying or actively concealing. If the robber comes in, gets past the AI's defenses, and takes the diamond in a way the human observer can't notice, then the AI has the option of passively lying. The AI tried its best to stop the robber and failed, but then chose to lie about it so that, as far as the humans know, it still gets the reward of having protected the diamond.

Alternatively, the AI could actively conceal the outcome. The AI could try its best and fail to stop the robber, and then do some trickery to make it look like it did actually stop the robber. Or the AI could not bother stopping the robber and just focus on making it look like the diamond is still there. Here the AI is playing a more active role in concealing the outcome.

None of these scenarios require coordination from the robber. To me, the robber is just there to rob a sophisticated vault and make it look like they were never there, so they might cover up cameras or do other tampering to hide their tracks.

I think this is more flexible than your fabricator example. There, the AI can't really play a passive role; it's either concealing or not. But you could probably demonstrate the things ARC is looking at here with the fabricator example too, I would think.

Like I said, just my interpretation, so I may be misunderstanding the intent or other nuances.

Comment by Ryan Beck (ryan-beck) on Prizes for ELK proposals · 2022-01-12T22:32:34.699Z · LW · GW

I think that makes sense. To rephrase, are you basically saying that the predictor is a subcomponent of the AI, like the reporter is? I didn't catch that distinction in the report, but looking back at it I think you're right. But yeah, it doesn't seem like the distinction matters much for what we're doing.

Comment by Ryan Beck (ryan-beck) on Prizes for ELK proposals · 2022-01-12T04:52:13.307Z · LW · GW

After reading through the report I wanted to make sure I understood the scenarios and counterexamples being discussed and be able to quickly refresh my memory, so I attempted to write a brief summary. Figured I'd share it here in case it helps anyone else.

Roles and Terms

SmartVault: Vault with a diamond in it, operated by a superintelligent AI tasked with keeping the diamond safe.

Predictor: The primary AI tasked with protecting the diamond. The predictor sees a video feed of the vault, predicts what actions are necessary to protect the diamond and how those actions will play out (for example, activating a trap door to eliminate a robber trying to steal the diamond), and then generates a video showing precisely what will happen.

Reporter: A second "head" or component added to the AI which is used to map the AI's understanding of what's going on in the SmartVault into a format understandable by humans (such as by responding to questions).

[Diagram of the predictor and reporter, from the ELK report]

Humans: Observers who are training the AI to protect the diamond, but also want the AI to accurately report whether the diamond has been stolen or not. The humans may not be able to tell if the cameras have been tampered with or if the diamond has been replaced with a fake depending on what they see in the predicted video.
 

Scenarios

The scenarios take the form of a "Builder" strategy, where training is proposed to keep the AI aligned with what the humans want, followed by a "Breaker" response or counterexample describing a method by which an unaligned AI could complete the training without the humans knowing. Builder scenarios include text and links to headings in the report where more details can be found. These scenarios are simplified for brevity and may not capture some nuances.

Builder - What You'd Try First: Operate the SmartVault with a weak AI so that humans can understand everything happening. Train the reporter to answer questions accurately based on human observations. (Humans see the robber come in and take the diamond; the reporter is rewarded if it correctly says the diamond was stolen.)

Breaker: What if the reporter, instead of accurately reporting, acted like a "human simulator"? It would answer correctly during the simple training, but during normal SmartVault operation it would simulate a human's understanding of the scenario that just occurred in order to determine what answers the humans will believe, answers that may be lies.

Builder - Have a Human Operate the SmartVault: Have a human operate the SmartVault during training and try to fool other humans watching. The human doing the operating can correctly label each outcome, and the reporter is rewarded for matching the human operator's labels. In this scenario the reporter would be rewarded for acting more like a "direct translator", which means using the predictor's knowledge to tell the true answer instead of just the one it thinks humans will believe.

Breaker: The reporter could act like a superintelligent human simulator, one that is more intelligent than the human operating the SmartVault during training. The superintelligent human simulator would be aware of ways to trick the human observer that the human SmartVault operator couldn't conceive of during training. If the predictor uses those tricks the reporter could still tell lies that the human observer would believe, because these tricks are different from the ones seen during training.

Builder - Have AI Help Humans: Have humans create a helper AI that will help them think of all the possible exploits when operating the SmartVault during training, so that the dataset is more complete when training the reporter and the reporter will be rewarded for accuracy in more scenarios.

Breaker: The helper AI would likely require significant computation power. Not only does the helper AI have to generate a comprehensive series of exploits, but it also has to be able to explain them in a way a human can understand. It would likely require more computation power than the predictor itself (imagine an extremely powerful AI that can understand things far beyond human understanding, but one that also must be able to explain everything it's doing in a way humans can understand).

Builder - Imitative Generalization: Create a helper AI that acts like a human simulator, but train the AI human simulator so that it becomes much smarter than a human. Then instead of the predictor operating the SmartVault, let the human simulator helper AI operate the SmartVault and report back what it's doing.

Breaker: Now the new human simulator is beyond the comprehension of humans, so a human wouldn't know if it's behaving as intended. It's basically like building another predictor and just hoping it acts in alignment with humans.

Builder - Penalize Complexity: A reporter that's using a human simulator to give answers a human would believe would have to be complex enough to simulate a human. So when training the reporter, reward simpler behavior, such as by training multiple reporters, each focusing on correctly answering a separate simple question like "Where is the diamond?" or "Have the cameras been tampered with?"

Breaker: If the predictor is very complex, it may require the reporter to be complex in order to translate what the predictor is predicting to something humans can understand. Imagine the predictor thinks of the world in terms of sub-atomic particles and predicts the future on the basis of the movements and interactions of billions of sub-atomic particles. The reporter has to translate the predictor's understanding into something a human can understand which might mean correctly identifying and locating the particles which make up the diamond out of the billions of particles the predictor has modeled. This could be more complex than simulating a human and telling lies a human would believe, and therefore penalizing complexity could potentially reward simulating a human and telling lies.
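
As a rough illustration of what "penalize complexity" could look like in practice (my own toy sketch, not anything specified in the report), imagine adding a term to the reporter's training loss that grows with the reporter's size:

```python
import numpy as np

def complexity_penalized_loss(answer_loss, reporter_weights, alpha=1e-4):
    """Toy loss: ordinary answer loss plus a penalty on reporter size.

    answer_loss      -- how wrong the reporter's answers were on this batch
    reporter_weights -- flat array of the reporter's parameters
    alpha            -- how hard to push toward simpler reporters
    """
    # Penalize both the number of parameters and their magnitudes, so a
    # reporter large enough to simulate a human pays a visible cost.
    size_penalty = reporter_weights.size + np.sum(reporter_weights ** 2)
    return answer_loss + alpha * size_penalty


# A small "Where is the diamond?" reporter vs. a much larger one with the
# same accuracy: the larger one gets a bigger total loss.
small = np.random.default_rng(0).normal(size=1_000)
large = np.random.default_rng(0).normal(size=1_000_000)
print(complexity_penalized_loss(0.2, small))
print(complexity_penalized_loss(0.2, large))
```

The Breaker point above is that for a sufficiently complex predictor, honest translation might be the more expensive option under exactly this kind of penalty.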

Builder - Penalize Computation Time: Simulating a human would take a significant amount of computation time, so penalizing computation time may incentivize the reporter to just report the truth.

Breaker: This is vulnerable to the same issue as the previous scenario: reporting the truth may take significant computation as well, and it may take less time to simulate what lies a human would believe than to turn the predictor's processing into facts a human can follow.

Builder - Penalize Depending on "Downstream" Variables: The predictor is likely to compute facts sequentially. A reporter that only told the truth would only care about the components of the prediction that correspond to information the humans want to know, such as where the diamond is or whether there has been any tampering. The reporter should be able to figure this out from earlier data points in the prediction; it shouldn't need to take information from the final predicted video to respond to questions. Doing so might indicate that the reporter is trying to do human simulation to figure out what lies the human will believe when the human views the predicted video. In the training stage the reporter could be penalized for looking at later stages of the prediction that a human simulator might want to see, and also penalized for looking at earlier stages but spending a lot of time computing, which might suggest it's trying to put together its own prediction for use in human simulation. Quote from the report with a helpful example:

...consider the case when a robber tampers with the camera and steals the diamond, and then after they’re long gone the human gets a clue they can observe on video that the theft occurred (e.g. maybe the robber put up a fake screen that later rips).

The honest reporter can answer the question “Is the diamond in the room?” with “No” as soon as the predictor has figured out that the robber steals the diamond, whereas the human simulator must wait until the predictor has simulated whether the theft will be detectable to a human (by the end of the second video) before answering yes or no.

Breaker: It may be more efficient in some cases for the reporter to do human simulation to find out what the human will believe. In some situations the reporter could know early on that a human would believe a certain answer, even before the reporter knows whether the answer will be true.
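
My loose mental picture of the "downstream" idea (again my own toy sketch, not the report's actual formalism) is a probe like this: randomly perturb the later stages of the prediction and penalize the reporter if its answer moves, since an honest reporter should have settled on its answer from the earlier stages.

```python
import numpy as np

def downstream_dependence_penalty(reporter_fn, early_states, late_states,
                                  n_trials=20, seed=0):
    """Toy probe: how much does the reporter's answer move when the later,
    'downstream' parts of the prediction are randomly perturbed?

    reporter_fn  -- function (early_states, late_states) -> answer in [0, 1]
    early_states -- the predictor's earlier computation (e.g. "diamond taken?")
    late_states  -- the later computation (e.g. the final predicted video)
    """
    rng = np.random.default_rng(seed)
    baseline = reporter_fn(early_states, late_states)
    shifts = []
    for _ in range(n_trials):
        noisy_late = late_states + rng.normal(scale=1.0, size=late_states.shape)
        shifts.append(abs(reporter_fn(early_states, noisy_late) - baseline))
    # An honest reporter's answer barely moves; a human simulator that waits to
    # see whether the theft is detectable on video moves a lot.
    return float(np.mean(shifts))


# Toy reporters: one reads only the early states, one reads the final video.
honest = lambda early, late: float(early.mean() > 0)
simulator = lambda early, late: float(late.mean() > 0)
early, late = np.ones(10), np.zeros(100)
print(downstream_dependence_penalty(honest, early, late))     # ~0.0
print(downstream_dependence_penalty(simulator, early, late))  # > 0.0
```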

Builder - Penalize Reporters that Work with Many Different Predictors: A reporter that's doing human simulation would be able to effectively lie to humans no matter which predictor it's attached to, since it primarily has to know what the human will believe, not what the predicted facts are. An honest reporter would need to know how to translate the predictor's output into information a human will understand. A reporter that functions correctly with other predictors is likely doing human simulation, so a reporter that works with multiple predictors could be penalized during training.

Breaker: A reporter could pretend not to work with other predictors to avoid being penalized.
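
A toy version of the cross-predictor check (my own sketch, with stand-in predictors and a single question) might look like this:

```python
def cross_predictor_penalty(reporter_fn, predictors, questions, believable_fn):
    """Toy check: a reporter that gives believable answers no matter which
    predictor it's attached to is probably simulating the human, not
    translating the predictor's knowledge.

    reporter_fn   -- function (predictor, question) -> answer
    predictors    -- other predictors the reporter was never trained with
    questions     -- questions to ask in each case
    believable_fn -- function (question, answer) -> True if a human would buy it
    """
    believable = sum(believable_fn(q, reporter_fn(p, q))
                     for p in predictors for q in questions)
    # Penalize in proportion to how often the reporter still "works".
    return believable / (len(predictors) * len(questions))


# Toy usage with stand-in predictors and a single question.
predictors = ["predictor_A", "predictor_B", "predictor_C"]
questions = ["Is the diamond in the room?"]
human_sim = lambda p, q: "Yes"          # believable regardless of predictor
believable = lambda q, a: a == "Yes"
print(cross_predictor_penalty(human_sim, predictors, questions, believable))  # 1.0
```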

Builder - Penalize Inconsistencies Across Different Inputs: During training a human operator sets up two scenarios in the SmartVault that are nearly identical, except in one it's obvious that tampering has happened. If the reporter only reports tampering in the obvious scenario it might suggest that the reporter is only reporting what the humans will believe. The reporter could be penalized for behavior that's inconsistent between very similar scenarios, since it might suggest that the reporter is focused on what humans will believe instead of on the inputs making up the scenario itself.

Breaker: A reporter could intentionally try to look consistent during training to avoid being penalized. A reporter that's simulating a human could figure out what behavior looks consistent to a human and optimize on that, in addition to optimizing on what a human will believe.
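
And a toy version of the consistency check (again my own sketch, with made-up scenarios): pair up nearly identical cases, one with obvious tampering and one with subtle tampering, and penalize the reporter when its answers diverge.

```python
def inconsistency_penalty(reporter_fn, scenario_pairs):
    """Toy check: penalize answers that flip between nearly identical scenarios.

    reporter_fn    -- function scenario -> probability that tampering occurred
    scenario_pairs -- list of (obvious_tampering, subtle_tampering) scenarios
                      that differ only in how visible the tampering is
    """
    penalty = 0.0
    for obvious, subtle in scenario_pairs:
        # Both scenarios involve tampering, so a consistent reporter should
        # report roughly the same probability for each.
        penalty += abs(reporter_fn(obvious) - reporter_fn(subtle))
    return penalty / len(scenario_pairs)


# Toy scenarios represented as dicts; "visible" marks tampering a human can see.
pairs = [({"tampered": True, "visible": True},
          {"tampered": True, "visible": False})]

honest = lambda s: 0.95 if s["tampered"] else 0.05
human_sim = lambda s: 0.95 if s["visible"] else 0.05  # reports what humans can see
print(inconsistency_penalty(honest, pairs))     # 0.0
print(inconsistency_penalty(human_sim, pairs))  # 0.9
```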

Please let me know if I've made any errors or if it looks like I've misunderstood any of the strategies or counterexamples.