ML Model Attribution Challenge [Linkpost]

post by aogara (Aidan O'Gara) · 2022-08-30T19:34:39.007Z

This is a link post for mlmac.io

From the ML Model Attribution Challenge website:

Large language models (LLMs) may enable the generation of misinformation at scale, especially when fine-tuned on malicious text. However, it is unclear how widely adversaries use LLMs to generate misinformation today, partly because no general technical method has been widely deployed to attribute a fine-tuned LLM to a base LLM.

The ML Model Attribution Challenge (MLMAC) challenges contestants to develop creative technical solutions for LLM attribution. Contestants will attribute synthetic text written by fine-tuned language models back to the base LLM, establishing new methods that may provide strong evidence of model provenance. Model attribution would allow regulatory bodies to trace intellectual property theft or influence campaigns back to the base model. This competition works toward AI assurance by developing forensic capabilities and establishing the difficulty of model attribution in natural language processing, and it could contribute to the maturation of AI forensics as a subfield of AI security.

This competition incentivizes open research in this area. Each winning contestant must publish their method, and after the competition concludes, we will release query logs for each fine-tuned model that can enable post-hoc analysis of blind attribution methods. We have planned subsequent activity in this area to institutionalize practical model attribution research.
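
For readers wondering what an attribution method might even look like, here is a minimal sketch of one naive baseline, not an official competition method: score text produced by an unknown fine-tuned model under each candidate base model and guess the candidate that finds it least surprising (lowest perplexity). The candidate model names and sample text below are placeholders, and real submissions would likely need much more robust signals than raw perplexity.

```python
# Naive attribution baseline (illustrative only): pick the candidate base
# model that assigns the lowest perplexity to text emitted by the unknown
# fine-tuned model. Model names and sample text are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CANDIDATE_BASE_MODELS = ["gpt2", "distilgpt2"]  # hypothetical candidate pool


def perplexity(model, tokenizer, text: str) -> float:
    """Perplexity of `text` under `model` (exp of the average token loss)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()


def attribute(sample_text: str) -> str:
    """Return the candidate base model with the lowest perplexity on `sample_text`."""
    scores = {}
    for name in CANDIDATE_BASE_MODELS:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name)
        model.eval()
        scores[name] = perplexity(model, tokenizer, sample_text)
    return min(scores, key=scores.get)


if __name__ == "__main__":
    # Placeholder for text obtained by querying an unknown fine-tuned model.
    sample = "The committee announced sweeping new regulations on Tuesday."
    print(attribute(sample))
```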

The contest concludes on Friday, September 9th, and $10,000 in prizes will be awarded among the winning submissions. MLMAC is a joint effort of organizers including Schmidt Futures, Hugging Face, Microsoft, and MITRE.

For more analysis of risks from automated misinformation, see "Risks from Automated Persuasion" (Barnes, 2021), "Persuasion Tools" (Kokotajlo, 2020), and my summary. For a broader discussion of language model misuse, see "Ethical and social risks of harm from Language Models" (Weidinger et al., 2021).
