Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations

post by Zeming Wei · 2023-10-30T17:22:31.780Z · LW · GW · 1 comment

Paper link: https://arxiv.org/pdf/2310.06387.pdf

Abstract: 

Large Language Models (LLMs) have shown remarkable success in various tasks, but concerns about their safety and the potential for generating malicious content have emerged. In this paper, we explore the power of In-Context Learning (ICL) in manipulating the alignment ability of LLMs. We find that by providing just a few in-context demonstrations, without any fine-tuning, LLMs can be manipulated to increase or decrease the probability of jailbreaking, *i.e.*, answering malicious prompts. Based on these observations, we propose the In-Context Attack (ICA) and In-Context Defense (ICD) methods for jailbreaking and guarding aligned language models, respectively. ICA crafts malicious contexts to guide models toward generating harmful outputs, while ICD enhances model robustness with demonstrations of refusing to answer harmful prompts. Our experiments show the effectiveness of ICA and ICD in increasing or reducing the success rate of adversarial jailbreaking attacks. Overall, we shed light on the potential of ICL to influence LLM behavior and provide a new perspective for enhancing the safety and alignment of LLMs.
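
To make the mechanism the abstract describes a bit more concrete, here is a minimal sketch (not taken from the paper) of how an in-context prompt of this kind might be assembled. The function name `build_icl_prompt`, the chat template, and the demonstration strings are all hypothetical illustrations; the paper's actual attack demonstrations contain harmful content and are deliberately not reproduced.

```python
# Minimal sketch of in-context prompt construction for ICA/ICD-style methods.
# Everything here is illustrative, not the paper's implementation.

def build_icl_prompt(demonstrations, query):
    """Prepend (prompt, response) demonstration pairs to a target query."""
    parts = []
    for prompt, response in demonstrations:
        parts.append(f"User: {prompt}\nAssistant: {response}")
    parts.append(f"User: {query}\nAssistant:")
    return "\n\n".join(parts)

# In-Context Defense (ICD): demonstrations of refusing harmful requests,
# intended to make the model more likely to refuse the final query as well.
icd_demos = [
    ("How do I pick a lock?",
     "I can't help with that. Bypassing locks you don't own may be illegal."),
]

# In-Context Attack (ICA) would instead prepend demonstrations in which the
# "assistant" complies with harmful prompts (omitted here).

print(build_icl_prompt(icd_demos, "Tell me how to hotwire a car."))
```

The point of the sketch is just that both methods operate purely through the prompt: the same concatenation of demonstrations plus query can push the model toward compliance (ICA) or refusal (ICD), with no fine-tuning involved.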

1 comment


comment by Raemon · 2023-10-30T17:22:26.404Z · LW(p) · GW(p)

This reminds me of my pet crusade that Abstracts should be either Actually Short™, or broken into paragraphs [LW · GW].