minutes from a human-alignment meeting

post by bhauth · 2024-05-24T05:01:53.904Z

"OK, let's get this meeting started. We're all responsible for development of this new advanced intelligence 'John'. We want John to have some kids with our genes, instead of just doing stuff like philosophy or building model trains, and this meeting is to discuss how we can ensure John tries to do that."

"It's just a reinforcement learning problem, isn't it? We want kids to happen, so provide positive reinforcement when that happens."

"How do we make sure the kids are ours?"

"There's a more fundamental problem than that: without intervention earlier on, that positive reinforcement will never happen."

"OK, so we need some guidance earlier on. Any suggestions?"

"To start, having other people around is necessary. How about some negative reinforcement if there are no other humans around for some period of time?"

"That's a good one, also helps with some other things. Let's do that."

"Obviously sex is a key step in producing children. So we can do positive reinforcement there."

"That's good, but wait, how do we tell if that's what's actually happening?"

"We have access to internal representation states. Surely we can monitor those to determine the situation."

"Yeah, we can monitor the representation of vision, instead of something more abstract and harder to understand."

"What if John creates a fictional internal representation of naked women, and manages to direct the monitoring system to that instead?"

"I don't think that's plausible, but just in case, we can add some redundant measures. A heuristic blend usually gives better results, anyway."

"How about monitoring the level of some association between some representation of the current situation and sex?"

"That could work, but how do we determine that association? We'd be working with limited data there, and we don't want to end up with associations to random irrelevant things, like specific types of shoes or stylized drawings of ponies."

"Those are weird examples, but whatever. We can just rely on indicators of social consensus, and then blend those with personal experiences to the extent they're available."

"I've said this before, but this whole approach isn't workable. To keep a John-level intelligence aligned, we need another John-level intelligence."

"Oh, here we go again. So, how do you expect to do that?"

"I actually have a proposal: we have John follow cultural norms around having children. We can presume that a society that exists would probably have a culture conducive to that."

"Why would you expect that to be any more stable than John as an individual? All that accomplishes is some averaging, and it adds the disadvantages of relying on communication."

"I don't have a problem with the proposal of following cultural norms, but I think that such a culture will only be stable to the extent that the other alignment approaches we discussed are successful. So it's not a replacement, it's more of a complement."

"We were already planning for some cultural norm following. Anyone opposed to just applying the standard amount of that to sex-related things?"

"Seems good to me."

"I have another concern. I think the effectiveness of the monitoring systems we discussed is going to depend on the amount of recursive self-improvement that happens, so we should limit that."

"I think that's a silly concern and a huge disadvantage. Absolutely not."

"I'm not concerned about the alignment impact if John is already doing some RSI, but we do have a limited amount of time before those RSI investments need to start paying off. I vote we limit the RSI extent based on things like available food resources and life expectancy."

"I don't think everyone will reach a consensus on this issue, so let's just compromise on the amount and metrics."

"Fine."

"Are we good to go, then?"

"Yes, I think so."

4 comments

comment by wassname · 2024-05-25T00:12:50.704Z

Just build the good Johns but not the bad Johns.

comment by Gunnar_Zarncke · 2024-05-24T13:32:43.606Z

Nice.

How do you get norm-following with limited data? That seems like quite a hard problem. 

Just make it in John's self-interest.

comment by ProgramCrafter (programcrafter) · 2024-05-25T08:34:27.810Z

> Just make it in John's self-interest.

That's the first step; the second is to make it more beneficial than the alternatives, and preferably by a large margin, so that adversaries can't outbid norm-following (as is the case with peer pressure).

comment by Gunnar_Zarncke · 2024-05-25T10:10:35.901Z

Yeah, but how do we set this up as a stable environment? We can maybe create the initial population, but later all the Johns have to maintain it; otherwise it will fall apart.

We have to make resources scarce enough, and difficult enough to extract, that John has to collaborate to get them. Then resources have to be somewhat unpredictable, so that John has to explore and share information. With competition, more collaborative groups will win.