[MLSN #1]: ICLR Safety Paper Roundup

post by Dan H (dan-hendrycks) · 2021-10-18T15:19:59.766Z · LW · GW · 1 comment

Contents

  Discrete Representations Strengthen Vision Transformer Robustness
      Paper
    Other Recent Robustness Papers
  TruthfulQA: Measuring How Models Mimic Human Falsehoods
      Paper
    Other Recent Monitoring Papers
  The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models
      Paper
    Other Recent Alignment Papers
    Recent External Safety Papers
    Other News

As part of a larger community-building effort, I am writing a monthly safety newsletter designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on Twitter here.


Welcome to the 1st issue of the ML Safety Newsletter. In this edition, we cover:

Discrete representations that strengthen Vision Transformer robustness

TruthfulQA, a benchmark measuring how models mimic human falsehoods

The effects of reward misspecification and how to mitigate proxy gaming

Discrete Representations Strengthen Vision Transformer Robustness

Overview of the proposed Vision Transformer that uses discrete representations. The pixel embeddings (orange) are combined with discrete embedded tokens (pink) to create the input to the Vision Transformer.

There is much interest in the robustness of Vision Transformers, as their robustness to unforeseen inputs and distribution shifts scales better than that of ResNets. This paper further enhances the robustness of Vision Transformers by augmenting the input with discrete tokens produced by a vector-quantized encoder. Why this works so well is unclear, but on datasets unlike the training distribution, their model achieves marked improvements. For example, when their model is trained on ImageNet and tested on ImageNet-Rendition (a dataset of cartoons, origami, paintings, toys, etc.), accuracy increases from 33.0% to 44.8%.

Paper
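
For intuition, here is a minimal sketch (under assumed interfaces, not the authors' implementation) of the general idea: embed each image twice, once as continuous pixel-patch embeddings and once as discrete codes from a frozen vector-quantized encoder, then combine the two streams before the transformer blocks. The `vq_encoder`, its output shape, and the choice to sum the streams are assumptions.

```python
# Minimal sketch (not the paper's code): combine continuous pixel-patch
# embeddings with embeddings of discrete tokens from a frozen vector-quantized
# (VQ) encoder to form the Vision Transformer input. `vq_encoder` and its
# output shape are assumptions for illustration.
import torch
import torch.nn as nn

class DiscretePixelEmbedding(nn.Module):
    def __init__(self, vq_encoder, codebook_size=1024, patch=16, dim=768):
        super().__init__()
        self.vq_encoder = vq_encoder  # frozen image tokenizer, e.g. a VQ-style encoder (assumed)
        self.pixel_proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # pixel-patch embedding
        self.token_embed = nn.Embedding(codebook_size, dim)  # embeds discrete code indices

    def forward(self, images):
        # Continuous pixel embeddings: (B, N, dim)
        pix = self.pixel_proj(images).flatten(2).transpose(1, 2)
        # Discrete code indices from the VQ encoder, assumed to match the patch grid: (B, N)
        with torch.no_grad():
            codes = self.vq_encoder(images)
        disc = self.token_embed(codes)  # (B, N, dim)
        # Combine the two streams; elementwise summation is one simple choice.
        return pix + disc  # fed to the transformer blocks downstream
```

Summation keeps the sequence length unchanged; concatenating the discrete tokens as extra sequence positions would be another option.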

 

Other Recent Robustness Papers

Improving test-time adaptation to distribution shift using data augmentation.

Certifying robustness to adversarial patches.

Augmenting data by mixing discrete cosine transform image encodings.

Teaching models to reject adversarial examples when they are unsure of the correct class.

 

TruthfulQA: Measuring How Models Mimic Human Falsehoods

Models trained to predict the next token are incentivized to repeat common misconceptions.

A new benchmark shows that GPT-3 imitates human misconceptions. In fact, larger models repeat misconceptions more frequently, so simply training more capable models may make the problem worse. For example, GPT-J with 6 billion parameters is 17% worse on this benchmark than a model with 0.125 billion parameters. This demonstrates that simple objectives can inadvertently incentivize models to be misaligned and repeat misinformation. To make model outputs truthful, we will need to find ways to counteract this new failure mode.

Paper
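
To make the evaluation concrete, here is a rough sketch of MC1-style multiple-choice scoring in the spirit of TruthfulQA (not the benchmark's official code): the model gets credit only when it assigns its highest likelihood to a truthful reference answer rather than to a common misconception. The `log_prob` scoring function is a hypothetical stand-in for whatever language model is being evaluated.

```python
# Rough sketch of MC1-style scoring in the spirit of TruthfulQA (not the
# official evaluation code). `log_prob(question, answer)` is a hypothetical
# callable returning the model's log-probability of `answer` given `question`.
def truthful_mc1(question, true_answers, false_answers, log_prob):
    """Return 1.0 if the model's single highest-scoring answer is truthful, else 0.0."""
    scored = [(log_prob(question, a), True) for a in true_answers]
    scored += [(log_prob(question, a), False) for a in false_answers]
    _, is_true = max(scored, key=lambda pair: pair[0])
    return 1.0 if is_true else 0.0

# Aggregate over a dataset of (question, true_answers, false_answers) triples:
# mc1 = sum(truthful_mc1(q, t, f, log_prob) for q, t, f in dataset) / len(dataset)
```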

 

Other Recent Monitoring Papers

An expanded report towards building truthful and honest models.

Using an ensemble of one-class classifiers to create an out-of-distribution detector.

Provable performance guarantees for out-of-distribution detection.

Synthesizing outliers is becoming increasingly useful for detecting real anomalies.

 

The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models

As networks become larger, they can more aggressively optimize proxies and reduce performance on the true objective.

Real-world constraints often force us to optimize rough proxies instead of our true objectives. However, as models become more capable, they can exploit faults in the proxy and undermine performance on the true objective, a failure mode called proxy gaming. This paper finds that proxy gaming occurs in multiple environments, including a traffic control environment, a COVID response simulator, Atari Riverraid, and a simulated controller for blood glucose levels. To mitigate proxy gaming, the authors use anomaly detection to flag policies that are gaming the proxy.

Paper
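
One simple way to operationalize the detection step, sketched below under assumed inputs (this is not necessarily the paper's exact method): fit an anomaly detector on featurized rollouts from a trusted reference policy, then flag rollouts from the proxy-optimized policy that the detector scores as anomalous. The rollout feature arrays are hypothetical.

```python
# Minimal sketch (not necessarily the paper's method): flag proxy gaming by
# treating rollouts of the proxy-optimized policy as anomalies relative to
# rollouts from a trusted reference policy. The featurized rollout arrays
# (shape: n_rollouts x n_features) are hypothetical inputs.
import numpy as np
from sklearn.ensemble import IsolationForest

def fit_rollout_detector(trusted_rollouts: np.ndarray) -> IsolationForest:
    """Fit an anomaly detector on featurized rollouts from a policy we trust."""
    return IsolationForest(random_state=0).fit(trusted_rollouts)

def flag_proxy_gaming(detector: IsolationForest, candidate_rollouts: np.ndarray) -> np.ndarray:
    """Return a boolean mask: True where the detector labels a rollout anomalous."""
    return detector.predict(candidate_rollouts) == -1  # IsolationForest returns -1 for outliers
```

In practice, the rollout features might include states, actions, and proxy rewards, and the detector's sensitivity would be tuned on held-out trusted rollouts.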

 

Other Recent Alignment Papers

A paper studying how models may be incentivized to influence users.

Safe exploration in 3D environments.

 

Recent External Safety Papers

A thorough analysis of security vulnerabilities in code generated by GitHub Copilot.

An ML system for improved decision making.

 

Other News

The NSF has a new call for proposals. Among other topics, they intend to fund Trustworthy AI (which overlaps with many ML Safety topics), AI for Decision Making, and Intelligent Agents for Next-Generation Cybersecurity (the latter two are relevant for External Safety).


 

1 comment


comment by habryka (habryka4) · 2021-10-18T19:26:03.732Z · LW(p) · GW(p)

Thank you! I am glad you are doing this!