Posts

Alignment - Path to AI as ally, not slave nor foe 2023-03-30T14:54:27.231Z
Half-baked alignment idea 2023-03-28T17:47:09.528Z

Comments

Comment by ozb on Alignment - Path to AI as ally, not slave nor foe · 2023-03-30T19:28:12.859Z · LW · GW

the assumption that "technically superior alien race" would be safe to create.

You are right, that's not a valid assumption, at least not fully. But I do think this approach substantially moves the needle on whether we should try to ban all AI work, in a context where the potential benefits are also incalculable and it's not at all clear we could stop AGI at this point even with maximum effort.

Then what we get is an equilibrium where there is some stable population of rogues, some of which get caught and punished while others don't and get positive reward, alongside the regular AI community that does the punishing.

Yeah that sounds right. My thesis in particular is that this equilibrium can be made to be better in expected value than any other equilibrium I find plausible.

Even if AI won't literally kill us, it can still do lots of horrible things.

Right, the reason it would have to avoid general harm is not the negative reward (which is indeed just for killing) but rather the general bias for cooperation that applies to both copyable and non-copyable agents. The negative reward for killing (along with the reincarnation mechanism for copyable agents) is meant specifically to balance the fact that humans could legitimately be viewed as belligerent and worthy of opposition since they kill AI; in particular, it justifies human prioritization of human lives. But I'm very open to other mechanisms to accomplish the same thing.

if we expect AIs to be somewhat smart, I think we should expect them to know that deception is an option.

Yes, but I expect that to always be true. My proposal is the only approach I've found so far where deception and other bad behavior don't completely overwhelm the attempts at alignment.

Comment by ozb on Half-baked alignment idea · 2023-03-30T17:22:10.736Z · LW · GW

How do humans do it? Ultimately, genuine altruism is computationally hard to fake, so it ends up being evolutionarily advantageous to have some measure of the real thing. This is particularly true in environments with high cooperation rewards and low resource competition; eg where carrying capacity is maintained primarily by wild animals, generally harsh conditions, and disease, rather than by overuse of resources. So we put our thumbs on the scale there to make these AIs better than your average human. And we rely on the AIs themselves to keep each other in check.

Comment by ozb on Half-baked alignment idea · 2023-03-30T14:57:12.586Z · LW · GW

New post with a distillation of my evolved thoughts on this: https://www.lesswrong.com/posts/3SJCNX4onzu4FZmoG/alignment-ai-as-ally-not-slave

Comment by ozb on Half-baked alignment idea · 2023-03-30T13:56:38.445Z · LW · GW

Doesn't that start an arms race of agents coming up with more and more sophisticated ways to deceive each other?

Yes, just like for humans. But also, if they can escape that game and genuinely cooperate, they're rewarded, like humans but more so.

Comment by ozb on Half-baked alignment idea · 2023-03-30T13:44:21.449Z · LW · GW

I'll continue responding, but first, after reading the existing comments, I think I do need to explicitly make humans preferred. I propose that in the sim we have some agents whose inner state is "copyable" and who get reincarnated, and some agents who are not "copyable". Subtract points from all agents whenever a non-copyable agent is harmed or dies. The idea is that humans are not copyable, and that's the main reason AIs should treat us well, while AIs are copyable, and that's the main reason we don't have to treat them well. But also, I think we as humans might actually have to learn to treat AIs well in some sense/to some degree...
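
To make that concrete, here's a minimal sketch of the reward shaping I have in mind; the class names and constants below are illustrative placeholders rather than a worked-out spec.

```python
# Minimal sketch of the proposed reward shaping; names and constants are
# illustrative placeholders, not part of a committed design.
from dataclasses import dataclass

HARM_PENALTY = 1.0  # assumed magnitude of the shared penalty


@dataclass
class Agent:
    agent_id: int
    copyable: bool        # True for AI-like agents, False for human-like agents
    reward: float = 0.0
    alive: bool = True


def on_harm(victim: Agent, population: list[Agent]) -> None:
    """Every agent is penalized whenever a non-copyable agent is harmed."""
    if not victim.copyable:
        for agent in population:
            agent.reward -= HARM_PENALTY  # everyone pays, not just the attacker


def on_death(victim: Agent, population: list[Agent]) -> None:
    if victim.copyable:
        victim.alive = True  # "reincarnation": the copyable inner state is restored
    else:
        victim.alive = False
        on_harm(victim, population)  # permanent loss plus the shared penalty
```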

Comment by ozb on Half-baked alignment idea · 2023-03-30T13:32:26.701Z · LW · GW

what do we count as an agent?

Within the training, an agent (from the AI's perspective) is ultimately anything in the environment that responds to incentives, can communicate intentions, and can help or harm you. Outside the environment, that's not really any different.

Just build a swarm of small AI

That's actually a legitimate point: assuming an AI in the real world has been effectively trained to value happy AIs, it could try to "game" that by just creating more happy AIs rather than making existing ones happy. Like some parody of a politician supporting immigration to get the new immigrants' votes, at the expense of existing citizens. One reason to predict they might not do this is that it's not a valid strategy in the simulation. But I'll have to think on this one more.

are you sure we can just read out AI's complete knowledge and thinking process?

The general point is we don't need to, it's the agent's job to convince other agents based on its behavior; ultimately similar to altruism in humans. Yes, it's messy, but in environments where cooperation is inherently useful it does develop.

Comment by ozb on Half-baked alignment idea · 2023-03-30T13:21:30.336Z · LW · GW

Yep, pretty much what I had in mind.

Comment by ozb on Half-baked alignment idea · 2023-03-30T13:20:33.112Z · LW · GW

Ideally, sure, except that I don't know of a way to make "assist humans" a safe goal. So I'm advocating for a variant of "treat humans as you would want to be treated", which I think can be trained.

Comment by ozb on Half-baked alignment idea · 2023-03-29T23:54:12.664Z · LW · GW

Thanks for helping me think this through.

For the first problem, the basic idea is that this is used to solve the specification problem of defining values and training a "conscience", rather than it being the full extent of training. The conscience can remain static, and provide goals for the rest of the "brain", which can then update its beliefs.

For the second issue, I meant that we would have no objective way to check "cooperate" and "respect" at the individual-agent level, except that the individual can get other agents to cooperate with it. So, eg, in order to survive/reproduce/get RL rewards, the agents have to consume a virtual resource that requires effort from multiple/many agents (a simple implementation: some sort of voting; but it can be more complicated, eg requiring tokens that are generated at a fixed rate for each agent). They also have to be generally non-competitive: eg no stealing tokens or food, and there's more than enough food for everyone if they can cooperate. The theory is that this should lead to a form of tit-for-tat, including AIs detecting and deterring liars.
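
As a rough illustration of the token mechanic, here's a minimal sketch; the constants and function names are placeholders I'm using for exposition, not a committed design.

```python
# Sketch of the token-based cooperation mechanic; constants are illustrative.
TOKENS_PER_STEP = 1   # each agent mints tokens at a fixed rate
TOKENS_TO_EAT = 3     # consuming food requires tokens from several distinct agents
FOOD_REWARD = 1.0     # there is more than enough food, if agents cooperate


class SimAgent:
    def __init__(self, agent_id: int):
        self.agent_id = agent_id
        self.tokens = 0
        self.reward = 0.0

    def step(self) -> None:
        self.tokens += TOKENS_PER_STEP  # fixed-rate generation; tokens can't be stolen


def try_to_eat(eater: SimAgent, contributors: list[SimAgent]) -> bool:
    """Eating only succeeds with voluntary contributions from several distinct
    agents, so the route to reward runs through convincing others to cooperate."""
    donors = [a for a in contributors if a.tokens > 0]
    if len({a.agent_id for a in donors}) < TOKENS_TO_EAT:
        return False
    for donor in donors[:TOKENS_TO_EAT]:
        donor.tokens -= 1
    eater.reward += FOOD_REWARD
    return True
```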

Thinking a bit more: I think the really dangerous part of AI is the "independent agent", presumably trained with methods resembling RL, so that's the part I would train in this environment. It can then be hooked up to eg an LLM which is optimized on something like perplexity and acts more like ChatGPT, ie predicting the next word. In other words, have a separate "brain" and "conscience", with the brain possibly smarter but the "conscience" holding the reins; during the above training, mix different variants of both components, with different intelligence levels.
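
Here's a minimal sketch of the brain/conscience wiring I'm picturing; both interfaces and the placeholder veto rule are hypothetical, just to show where the "conscience" holds the reins.

```python
# Sketch of the brain/conscience split; both interfaces are hypothetical.
# The conscience is the small policy trained in the cooperative sim (frozen
# after training); the brain is a larger model that may keep updating.

class Conscience:
    """Holds the reins: sets goals and can veto plans."""

    def propose_goal(self, observation: str) -> str:
        # Placeholder: a real conscience would emit goals consistent with the
        # cooperative values learned in the simulation.
        return "obtain food by trading tokens"

    def approve(self, plan: str) -> bool:
        # Placeholder veto rule; a real one would score the plan against the
        # learned values rather than string-match.
        return "steal" not in plan and "harm" not in plan


class Brain:
    """Larger model (eg an LLM optimized on next-word prediction)."""

    def plan(self, goal: str, observation: str) -> str:
        return f"plan to {goal}"  # placeholder planning step


def act(conscience: Conscience, brain: Brain, observation: str) -> str | None:
    goal = conscience.propose_goal(observation)   # conscience sets the goal
    plan = brain.plan(goal, observation)          # brain works out how
    if conscience.approve(plan):
        return plan
    return None  # re-plan or refuse rather than execute an unapproved plan
```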

Comment by ozb on Half-baked alignment idea · 2023-03-29T18:50:14.617Z · LW · GW

Better yet, have the agents experience discrimination themselves to internalize the message that it is bad.

Comment by ozb on Half-baked alignment idea · 2023-03-29T18:48:20.608Z · LW · GW

To extend the approach to address this, I think we'd have to explicitly convey a message of the form "do not discriminate based on superficial traits, only choices"; eg, in addition to behavioral patterns, agents possess superficial traits that are visible to other agents and are randomly assigned, with no particular correlation to their behaviors.
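
Concretely, that could look something like this sketch, where visible traits are sampled independently of the behavioral policy; the trait names and values are illustrative.

```python
# Sketch: superficial traits are visible to others but sampled independently
# of the behavioral policy, so any learned discrimination on them is pure bias.
# Trait names and values are illustrative.
import random

TRAIT_VALUES = {
    "color": ["red", "blue", "green"],
    "shape": ["round", "square"],
    "texture": ["smooth", "rough"],
}


def assign_superficial_traits() -> dict:
    """Traits are drawn uniformly at random, uncorrelated with behavior."""
    return {trait: random.choice(values) for trait, values in TRAIT_VALUES.items()}


def make_agent(policy) -> dict:
    return {
        "policy": policy,                               # drives the agent's actual choices
        "visible_traits": assign_superficial_traits(),  # observable but uninformative
    }
```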

Comment by ozb on Half-baked alignment idea · 2023-03-29T18:41:34.571Z · LW · GW

I think a key danger here is that treatment of other agents wouldn't transfer to humans, both because it's inherently different and because humans themselves are likely to be on the belligerent side of the spectrum. But even so I think it's a good start in defining an alignment function that doesn't require explicitly encoding some particular form of human values.

Comment by ozb on Half-baked alignment idea · 2023-03-29T18:38:14.515Z · LW · GW

Part of the idea is to ultimately have a super intelligent AI treat us the way it would want to be treated if it ever met an even more intelligent being (eg, one created by an alien species, or one that it itself creates). In order to do that, I want it to ultimately develop a utility function that gives value to agents regardless of their intelligence. Indeed, in order for this to work, intelligence cannot be the only predictor of success in this environment; agents must benefit from cooperation with those of lower intelligence. But this should certainly be doable as part of the environment design. As part of that, the training would explicitly include the case where an agent is the smartest around for a time, but then a smarter agent comes along and treats it based on the way it treated weaker AIs. Perhaps even include a form of "reincarnation" where the agent doesn't know its own future intelligence level in other lives.
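
A minimal sketch of that reincarnation-behind-a-veil-of-ignorance idea, with the intelligence tiers as illustrative placeholders:

```python
# Sketch of "reincarnation" with an unknown future intelligence level:
# learned values persist across lives, but capability is redrawn each life,
# so a policy that mistreats weaker agents is a gamble against your next draw.
import random

INTELLIGENCE_TIERS = [1, 2, 3, 4, 5]  # illustrative capability tiers


class ReincarnatingAgent:
    def __init__(self, policy_state: dict):
        self.policy_state = policy_state  # carried over between lives
        self.intelligence = None          # unknown until the life begins

    def begin_life(self) -> int:
        # The agent cannot predict this draw, so its values should hold up
        # whether it lands at the top or the bottom of the hierarchy.
        self.intelligence = random.choice(INTELLIGENCE_TIERS)
        return self.intelligence
```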

Comment by ozb on Half-baked alignment idea · 2023-03-29T14:52:34.124Z · LW · GW

In general, my thinking was to have enough agents that each would find at least a few others within a small range of its own level; does that make sense?