Posts

How can we prevent AGI value drift? 2024-11-20T18:19:24.375Z

Comments

Comment by Dakara (chess-ice) on How can we prevent AGI value drift? · 2024-11-21T13:41:29.800Z · LW · GW

I can't think of any way to make a group of people who control AGI voluntarily release it from their control. But if I am missing something, or if there is some plan for this scenario, I'd be really happy to learn about it.

Comment by Dakara (chess-ice) on How can we prevent AGI value drift? · 2024-11-21T09:21:47.310Z · LW · GW

My biggest concern with intent alignment of AGI is that we might run into the issue of AGI being used for something like totalitarian control over everyone who doesn't control it. It becomes a source of nearly unlimited power. The first company to create intent-aligned AGI (probably ASI at that point) can use it to stop all other attempts at building AGI. At that point, we'd have a handful of people wielding incredible power, and it seems unlikely that they'd just decide to give it up. I think your "big if" is a really, really big if.

But other than that, your plan definitely seems workable. It avoids the problem of value drift, but unfortunately it incurs the cost of dealing with power-hungry humans.

Comment by Dakara (chess-ice) on Intrinsic Power-Seeking: AI Might Seek Power for Power’s Sake · 2024-11-21T07:24:18.051Z · LW · GW

That's a very thoughtful response from TurnTrout. I wonder if @Gwern agrees with its main points. If not, it would be good to know where he thinks it fails.

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-20T22:41:06.638Z · LW · GW

P.S. Here is the link to the question that I posted.

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-20T17:43:36.157Z · LW · GW

Noosphere, I am really, really thankful for your responses. You have completely answered almost all of the concerns I had about alignment (I am still not convinced by that strategy for avoiding value drift, so I will probably post it as a question to see whether other people have different strategies for preventing value drift).

This discussion significantly increased my knowledge. If I could triple-upvote your answers, I would. Thank you so much!

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-20T17:22:18.505Z · LW · GW

Fair enough. Would you expect that AI would also try to move its values toward the moral reality? (Something that would probably be good for us, since I wouldn't expect human extinction to be a morally good thing.)

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-20T16:09:35.995Z · LW · GW

Ah, so you are basically saying that preserving current values is a kind of meta-instrumental value for AGIs, similar to self-preservation, that is just always there? I am not sure I would agree with that (if I am interpreting you correctly), since some philosophers seem quite open to changing their current values.

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-20T15:56:23.236Z · LW · GW

The LessWrong wiki says: "Instrumental convergence or convergent instrumental values is the theorized tendency for most sufficiently intelligent agents to pursue potentially unbounded instrumental goals such as self-preservation and resource acquisition".

It seems like instrumental convergence preserves exactly the goals we wouldn't really need it to preserve (like resource acquisition). I am not sure how it would help us preserve goals like ensuring humanity's prosperity, which seem to be non-fundamental.

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-20T15:29:48.572Z · LW · GW

What would instrumental convergence mean in this case? I am not sure what it implies here.

Comment by Dakara (chess-ice) on Thoughts on “AI is easy to control” by Pope & Belrose · 2024-11-20T14:51:13.297Z · LW · GW

Have you had any p(doom) updates since then, or is it still around 5%?

Comment by Dakara (chess-ice) on By Default, GPTs Think In Plain Sight · 2024-11-20T14:02:15.420Z · LW · GW

Now that two years have passed, I am quite interested in hearing @Fabien Roger's thoughts on this comment, especially this part: "But how useful could gpt-n be if used in such a way? On the other extreme, gpt-n is producing internal reasoning text at a terabyte/minute. All you can do with it is grep for some suspicious words, or pass it to another AI model. You can't even store it for later unless you have a lot of hard drives. Potentially much more useful. And less safe."

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-20T12:53:45.730Z · LW · GW

That does indeed answer my three concerns (and Seth's answer does as well). Overnight, I came up with one more concern.

What if AGI undergoes value drift somewhere down the line? After all, looking at evolution, it seems like our evolutionary goal was supposed to be "produce as many offspring as possible". In recent years, we have strayed from this goal (and are currently much worse at it than our ancestors). Now, humans seem to have goals like "design a video game" or "settle in France" or "climb Everest". What if AGI similarly changes its goals and values over time? Is there a way to prevent that, or at least safeguard against it?

I am afraid that if that happens, humans would, metaphorically speaking, stand in AGI's way of climbing Everest.

Comment by Dakara (chess-ice) on 5 ways to improve CoT faithfulness · 2024-11-19T23:05:39.602Z · LW · GW

"Well, I'm not sure. As you mention, it depends on the step size. It also depends on how vulnerable to adversarial inputs LLMs are and how easy they are to find. I haven't looked into the research on this, but it sounds empirically checkable. If there are lots of adversarial inputs which have a wide array of impacts on LLM behavior, then it would seem very plausible that the optimized planner could find useful ones without being specifically steered in that direction."

I am really interested in hearing CBiddulph's thoughts on this. Do you agree with Abram?

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-19T22:14:42.073Z · LW · GW

I have three other concrete concerns about this strategy. If I understand it correctly, the plan is for humans to align an AGI and then for that AGI to align the next AGI, and so forth (until ASI).

  1. What if the strategy breaks on the first step? What if the first AGI turns out to be deceptive (scheming) and only pretends to be aligned with humans? It seems like if we task such a deceptive AGI with aligning other AGIs, then we will end up with a pyramid of misaligned AGIs.

  2. What if the strategy breaks later down the line? What if AGI #21 accidentally makes AGI #22 deceptive (scheming)? Would there be any fallback mechanisms we could rely on?

  3. What is the end goal? Do we stop once we achieve ASI? Can we stop once we achieve ASI? What if ASI doesn't agree and instead opts to continue self-improving? Are we going to be able to get to the point where the acceleration of ASI's intelligence plateaus and we can recuperate and plan for the future?

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-19T19:26:15.145Z · LW · GW

Sure, I might as well ask my question directly about scalable oversight, since it seems like a leading strategy for iterative alignment anyway. I do have one preliminary question (which probably isn't worthy of being included in that post, given that it doesn't ask about a specific issue or threat model, but rather about people's expectations).

I take it that this strategy relies on evaluating research being easier than coming up with it? Do you expect this to be the case?

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-19T18:48:11.168Z · LW · GW

That's pretty interesting. I do think that if the iterative alignment strategy ends up working, then this will probably work too (if nothing else, because it seems much easier).

I have some concerns left about the iterative alignment strategy in general, so I will try to write them down below.

EDIT: On second thought, I might create a separate question for it (and link it here), for the benefit of all the people who are concerned about the same things (or similar things) that I am concerned about.

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-19T18:36:19.192Z · LW · GW

This topic is quite interesting to me from the perspective of human survival, so if you do decide to make a post specifically about preventing jailbreaking, then please tag me (somewhere) so that I can read it.

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-19T18:33:13.403Z · LW · GW

I agree with comments both by you and Seth. I guess that isn't really part of alignment as usually understood. However, I think it is part of a broader "preventing AI from killing humans" strategy, so it's still pretty important for our main goal.

I am not exactly sure I understand your proposal. Are you proposing that we radically gut our leading future model by restricting it severely? I don't think any AI lab will agree to do so, because such a future AI would be much less useful than probably even current AIs.

Or are you proposing that we use AI monitors to watch our leading future AI models, and then heavily restrict only the monitors?

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-19T13:06:26.326Z · LW · GW

I also have a more meta-level layman concern (sorry if it sounds unusual). There seem to be a large number of jailbreaking strategies that all succeed against current models. To mitigate them, I can conceptually see two paths: 1) trying to come up with a different niche technical solution to each and every one of them individually, or 2) trying to come up with a fundamentally new framework that happens to avoid all of them collectively.

Strategy 1 seems logistically impossible, as developers at leading labs (which are the most likely to produce AGI) would have to be aware of all of them (and they are often reported in relatively unknown papers). Furthermore, even if they somehow managed to monitor all reported jailbreaks, they would have to come up with so many different solutions that success seems very unlikely.

Strategy 2 seems conceptually correct, but there seems to be no sign of it, as even newer models are getting jailbroken.

What do you think?

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-19T12:01:48.405Z · LW · GW

Looking more generally, there seem to be a ton of papers that develop sophisticated jailbreak attacks (that succeed against current models), probably more than I can even list here. Are there any fundamentally new defense techniques that can protect LLMs against these attacks (since the existing ones seem to be insufficient)?

EDIT: The concern behind this comment is better detailed in the next comment.

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-19T11:29:58.188Z · LW · GW

"Perhaps the missing piece is that I think alignment is already solved for LLM agents."

Another concern that I might have is that maybe it only seems like alignment is solved for LLMs. For example, this, this, this, and this short paper argue that seemingly secure LLMs may not be as safe as we initially believe. And it appears that they test even the models that are considered to be more secure and still find this issue.

Comment by Dakara (chess-ice) on The Plan - 2023 Version · 2024-11-19T08:22:21.463Z · LW · GW

That's a really good point. I would like to see John address it, because it seems quite crucial for the overall alignment plan.

Comment by Dakara (chess-ice) on AI Control: Improving Safety Despite Intentional Subversion · 2024-11-18T22:48:40.809Z · LW · GW

Thank you for both of your responses; they address all of my worries. Good luck on all of your future projects!

Comment by Dakara (chess-ice) on How to prevent collusion when using untrusted models to monitor each other · 2024-11-18T21:25:42.641Z · LW · GW

Søren Elverlin has recently released a video critique of this article on YouTube, claiming that it strongly fails to deliver. He goes over the entire piece and criticizes most of it.

I'd be interested in hearing your thoughts. The video is very detailed (and high-quality) and engages directly with your article. It also raises valid points about its shortcomings. How would you address those?

Comment by Dakara (chess-ice) on Noosphere89's Shortform · 2024-11-18T18:52:35.550Z · LW · GW

Yep, Seth has really clearly outlined the strategy, and now I can see what I missed on the first reading. Thanks to both of you!

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-18T18:38:16.484Z · LW · GW

"If you have an agent that's aligned and smarter than you, you can trust it to work on further alignment schemes. It's wiser to spot-check it, but the humans' job becomes making sure the existing AGI is truly aligned, and letting it do the work to align its successor, or keep itself aligned as it learns."

Ah, that's the link that I was missing. Now it makes sense. You can use AGI as a reviewer for other AGIs once it is better than humans at reviewing AGIs. Thank you so much for clarifying!

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-18T17:37:22.254Z · LW · GW

I've been reading a lot of the stuff that you have written and I agree with most of it (like 90%). However, one thing which you mentioned (somewhere else, but I can't seem to find the link, so I am commenting here) and which I don't really understand is iterative alignment.

I think that the iterative alignment strategy has an ordering error – we first need to achieve alignment to safely and effectively leverage AIs.

Consider a situation where AI systems go off and “do research on alignment” for a while, simulating tens of years of human research work. The problem then becomes: how do we check that the research is indeed correct, and not wrong, misguided, or even deceptive? We can’t just assume this is the case, because the only way to fully trust an AI system is if we’d already solved alignment, and knew that it was acting in our best interest at the deepest level.

Thus we need to have humans validate the research. That is, even automated research runs into a bottleneck of human comprehension and supervision.

The appropriate analogy is not one researcher reviewing another, but rather a group of preschoolers reviewing the work of a million Einsteins. It might be easier and faster than doing the research itself, but it will still take years and years of effort and verification to check any single breakthrough.

Fundamentally, the problem with iterative alignment is that it never pays the cost of alignment. Somewhere along the way, alignment is implicitly assumed to have already been solved.

Comment by Dakara (chess-ice) on Noosphere89's Shortform · 2024-11-18T12:00:40.038Z · LW · GW

I've been reading a lot of the stuff that you have written and I agree with most of it (like 90%). However, one thing which you mentioned (somewhere else, but I can't seem to find the link, so I am commenting here) and which I don't really understand is iterative alignment.

I think that the iterative alignment strategy has an ordering error – we first need to achieve alignment to safely and effectively leverage AIs.

Consider a situation where AI systems go off and “do research on alignment” for a while, simulating tens of years of human research work. The problem then becomes: how do we check that the research is indeed correct, and not wrong, misguided, or even deceptive? We can’t just assume this is the case, because the only way to fully trust an AI system is if we’d already solved alignment, and knew that it was acting in our best interest at the deepest level.

Thus we need to have humans validate the research. That is, even automated research runs into a bottleneck of human comprehension and supervision.

The appropriate analogy is not one researcher reviewing another, but rather a group of preschoolers reviewing the work of a million Einsteins. It might be easier and faster than doing the research itself, but it will still take years and years of effort and verification to check any single breakthrough.

Fundamentally, the problem with iterative alignment is that it never pays the cost of alignment. Somewhere along the way, alignment is implicitly assumed to have already been solved.

Comment by Dakara (chess-ice) on Simple probes can catch sleeper agents · 2024-11-18T10:08:56.755Z · LW · GW

I am also interested in knowing whether the probing method is a solution to the undetectable backdoor problem.

Comment by Dakara (chess-ice) on Simple probes can catch sleeper agents · 2024-11-18T10:07:31.112Z · LW · GW

This paper argues that unintended deceptive behavior is not susceptible to detection by the probing method: its authors find that probing fares no better than random guessing at detecting unintended deceptive behavior.

I would really appreciate any input, especially from Monte or his co-authors. This seems like a very important issue to address.

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-11-17T22:19:41.104Z · LW · GW

James, thank you for a well-written comment. It was a pleasure to read. Looking forward to Seth's response. Genuinely interested in hearing his thoughts.

Comment by Dakara (chess-ice) on AI Control: Improving Safety Despite Intentional Subversion · 2024-11-17T21:58:03.241Z · LW · GW

I do believe that Shubhorup's second edit provides another cause for concern. If you combine his proposal with steganography, the collusion team has another way of successfully circumventing Ryan's safety strategy.

This collusion strategy should also ideally be addressed.

Comment by Dakara (chess-ice) on AI Control: Improving Safety Despite Intentional Subversion · 2024-11-17T21:48:54.016Z · LW · GW

This paper develops a new 1-bit collusion strategy for circumventing Ryan and Buck's safety strategy, with the collusion team achieving a 90%+ success rate.

I would really appreciate any input, especially from Ryan or his co-authors. This seems like a very important issue to address.

Comment by Dakara (chess-ice) on If we solve alignment, do we die anyway? · 2024-08-28T15:50:17.603Z · LW · GW

I wonder, have the comments managed to alleviate your concerns at all? Are there any promising ideas for multipolar AGI scenarios? Were there any suggestions that could work?