Comments

Comment by tdietterich on Proveably Safe Self Driving Cars · 2024-09-18T13:46:25.346Z · LW · GW

I enthusiastically support doing everything we can to reduce vulnerabilities. As I indicated, a foundation model approach to representation learning would greatly improve the ability of systems to detect novelty. Perhaps we should rename the "provable safety" area the "provable safety modulo assumptions" area and be very explicit about our assumptions. We can then measure progress by the extent to which we can shrink those assumptions.

Comment by tdietterich on Proveably Safe Self Driving Cars · 2024-09-15T15:21:29.623Z · LW · GW

Thank you for this post. You are following the standard safety engineering playbook. It is good at dealing with known hazards. However, it has traditionally been based entirely on failure probabilities of components (brakes, engines, landing gear, etc.). Extending it to handle perception is challenging. In particular, the perceptual system must accurately quantify its uncertainty so that the controller can act cautiously under uncertainty. For aleatoric uncertainty (i.e., in-distribution), this is a solved problem. But for epistemic uncertainty, there are still fundamental challenges. As you are well aware, epistemic uncertainty encompasses out-of-distribution / anomaly / novelty detection. These methods rely on having a representation that separates the unknowns from the knowns -- the anomalies from the nominals. Given such separation, flexible density estimators can do the job of detecting novelty. However, I believe it is impossible to provide guarantees for such representations. The best we can do is ensure that the representations capture all known environmental variability, and for this, vision foundation models are a practical way forward. But there may be other technical advances that can strengthen the learned representations.
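As a concrete (and deliberately simplified) sketch of what I mean by flexible density estimators over a learned representation: assume the embeddings come from a frozen foundation-model encoder, fit a density model to embeddings of known (nominal) data, and flag low-density inputs as novel. The particular density model, threshold rule, and numbers below are illustrative assumptions, not a recommendation:

```python
# Minimal sketch of density-based novelty detection on learned representations.
# Embeddings are assumed to come from a frozen vision foundation-model encoder;
# the density model and threshold choice here are purely illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_nominal_density(nominal_embeddings: np.ndarray, n_components: int = 16):
    """Fit a flexible density estimator to embeddings of known (nominal) data."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(nominal_embeddings)
    return gmm

def calibrate_threshold(gmm, held_out_nominal: np.ndarray, quantile: float = 0.001):
    """Pick a log-density threshold so ~0.1% of held-out nominal data is flagged."""
    log_density = gmm.score_samples(held_out_nominal)
    return float(np.quantile(log_density, quantile))

def is_novel(gmm, threshold: float, test_embeddings: np.ndarray) -> np.ndarray:
    """Flag inputs whose embeddings fall in low-density regions as novel."""
    return gmm.score_samples(test_embeddings) < threshold
```

The catch, of course, is the representation itself: if the encoder maps an unknown onto the same region as the knowns, no downstream density estimator can recover the distinction, which is why I don't believe guarantees are available at this level.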

Another challenge goes beyond perception to prediction. You discussed predicting the behavior of other cars, but what about pedestrians, dogs, cats, toddlers, deer, kangaroos, etc.? New modes of personal transportation are marketed all the time, and the future trajectories of people using those modes are highly variable. This is another place where epistemic uncertainty must be quantified.
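To make the prediction point concrete, here is one common proxy (not a complete solution): run an ensemble of trajectory predictors and treat their disagreement as an epistemic-uncertainty signal that the controller can use to act cautiously. The predictor interface, threshold, and speeds below are made-up placeholders:

```python
# A sketch of ensemble disagreement as a proxy for epistemic uncertainty in
# behavior prediction. The interface, threshold, and speeds are assumptions.
import numpy as np

def ensemble_disagreement(predictions: list[np.ndarray]) -> float:
    """predictions: one (horizon, 2) array of future (x, y) positions per
    ensemble member. Returns the mean per-step spread across members."""
    stacked = np.stack(predictions)        # (members, horizon, 2)
    per_step_std = stacked.std(axis=0)     # spread at each future time step
    return float(per_step_std.mean())

def select_speed(disagreement: float, v_nominal: float = 13.0,
                 v_cautious: float = 4.0, threshold: float = 1.5) -> float:
    """Fall back to a cautious speed (m/s) when the predictors disagree strongly."""
    return v_cautious if disagreement > threshold else v_nominal
```

Note that ensemble disagreement can badly underestimate epistemic uncertainty when every member shares the same blind spot (e.g., a mode of transportation none of them was trained on), which is exactly the open-world problem.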

Bottom line: Safety in an open world cannot be proved, because any such proof would rely on perfect epistemic uncertainty quantification, which is impossible.

Finally, I highly recommend Nancy Leveson's book, Engineering a Safer World. She approaches safety as a socio-technical control problem. It is not sufficient to design a system and prove that it is safe. Rather, safety is a property that must be actively maintained against disturbances. This requires post-deployment monitoring, detection of anomalies and near misses, diagnosis of the causes of those anomalies and near misses, and updates to the system design to accommodate them. One form of disturbance is novelty, as described above. But the biggest threat is often budget cuts, staff layoffs, and staff turnover in the organization responsible for post-deployment maintenance. She documents a recurring phenomenon -- that complex systems tend to migrate toward increased risk because of these disturbances. A challenge for all of us is to figure out how to maintain robust socio-technical systems that in turn can maintain the safety of deployed systems.

--Tom Dietterich