Systems Engineering and the META Program
post by ryan_b · 2018-12-20T20:19:25.819Z · 3 comments
I periodically look for information on systems engineering. This time I came across a PowerPoint presentation from the MIT OpenCourseWare course Fundamentals of Systems Engineering. Professor de Weck, who taught the course, had done some research on state-of-the-art methods developed as part of DARPA's META Program.
A few years ago DARPA wrapped up the program, which was designed to speed up delivery of cyber-electro-mechanical systems (war machines) by 5x. Since the parent program, Adaptive Vehicle Make, seems to have concluded without producing a vehicle, I infer the META Program lost its funding at the same time.
The work it produced appears to be adjacent to our interests along several dimensions, though, so I thought I would bring it to the community's attention. The pitch for the program, taken from the abstract of de Weck's paper:
The method claims to achieve this speedup by a combination of three main mechanisms:
1. The deliberate use of layers of abstraction. High-level functional requirements are used to explore architectures immediately rather than waiting for downstream level 2,3,4 ... requirements to be defined.
2. The development and use of an extensive and trusted component (C2M2L) model library. Rather than designing all components from scratch, the META process allows importing component models directly from a library in order to quickly compose functional designs.
3. The ability to find emergent behaviors and problems ahead of time during virtual Verification and Validation (V&V) and generating designs that are correct-by-construction allows a more streamlined design process and avoids costly design iterations that often lead to expensive design changes.
Which is to say: they very carefully architect the system, use known-to-be-good components, and employ formal verification to catch problems early. In the paper, a simulation of the META workflow achieved a 4.4x development speedup compared to the same project's actual development using traditional methods.
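To make the shape of that workflow concrete, here is a minimal toy sketch of my own (in Python, not anything from the actual META toolchain): pull component models from a trusted library, compose a candidate design, and run a cheap "virtual V&V" pass against the top-level requirements before any detailed design work happens. Every component name, attribute, and budget below is invented for illustration.

```python
# Toy sketch of the workflow described above: compose a design from a
# trusted component-model library, then check it against high-level
# requirements early, before detailed design begins.
# All names and numbers are invented; the real C2M2L library and META
# design tools are far richer than this.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    mass_kg: float        # parametric attributes from the component model
    power_draw_w: float
    verified: bool        # came from the trusted library with test evidence

# Stand-in for the trusted component model library (C2M2L).
LIBRARY = {
    "engine_a":  Component("engine_a", 900.0, 0.0, True),
    "gearbox_b": Component("gearbox_b", 250.0, 0.0, True),
    "radio_c":   Component("radio_c", 12.0, 150.0, True),
}

# High-level functional requirements, used immediately to prune candidate
# architectures instead of waiting for downstream requirements.
REQUIREMENTS = {"max_mass_kg": 1500.0, "max_power_w": 2000.0}

def virtual_vv(design):
    """Cheap 'virtual V&V': return a list of requirement violations."""
    problems = []
    if any(not c.verified for c in design):
        problems.append("design uses an unverified component")
    if sum(c.mass_kg for c in design) > REQUIREMENTS["max_mass_kg"]:
        problems.append("mass budget exceeded")
    if sum(c.power_draw_w for c in design) > REQUIREMENTS["max_power_w"]:
        problems.append("power budget exceeded")
    return problems

candidate = [LIBRARY["engine_a"], LIBRARY["gearbox_b"], LIBRARY["radio_c"]]
print(virtual_vv(candidate) or "design passes early virtual V&V")
```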
There are a bunch of individual directions explored which are of interest. Some that struck me were:
- A metric for complexity (a rough sketch follows this list).
- A metric for adaptability.
- Work on quantitative verification methods.
- Work on complexity reduction and verification.
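For concreteness, here is a rough sketch of what a structural complexity metric of this kind can look like: component complexity plus interface complexity, weighted by a topological term on the connectivity graph. This is my reading of the general form, written in Python with made-up weights; treat it as illustrative, not as the program's actual metric.

```python
# Rough sketch of a structural complexity metric: C = C1 + C2 * C3, where
# C1 sums per-component complexity, C2 sums per-interface complexity, and
# C3 is a normalized graph-energy term for the connectivity topology.
# The weights and numbers below are invented for illustration.
import numpy as np

def structural_complexity(alpha, beta, adjacency):
    """
    alpha:     per-component complexity estimates, shape (n,)
    beta:      per-interface complexity estimates, shape (n, n)
    adjacency: binary component connectivity matrix, shape (n, n)
    """
    n = len(alpha)
    c1 = float(np.sum(alpha))                 # component complexity
    c2 = float(np.sum(beta * adjacency))      # interface complexity
    graph_energy = float(np.sum(np.linalg.svd(adjacency, compute_uv=False)))
    c3 = graph_energy / n                     # topological complexity
    return c1 + c2 * c3

# Toy 3-component system connected in a chain: A - B - C.
alpha = np.array([1.0, 2.0, 1.5])
beta = np.full((3, 3), 0.1)
adjacency = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]])
print(structural_complexity(alpha, beta, adjacency))
```

The intuition is that a design gets penalized both for how complicated its parts are and for how densely and intricately they are wired together.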
It looks like the repository for the program's outputs is here. A missile engineer trying to apply the method to healthcare is here. Reducing healthcare costs is rocket science!
In a nutshell, the program packages together a whole bunch of things we have long discussed, and now there are a few people out and about trying to get parts of it into practice.
3 comments
comment by Davidmanheim · 2018-12-23T06:59:38.695Z
Good finds!
I think they are headed in the right direction, but I'm skeptical of the usefulness of their work on complexity. The metrics ignore the computational complexity of the model, and assume all the variance is modeled based on sources like historical data and expert opinion. It's also not at all useful unless we can fully characterize the components of the system, which isn't usually viable.
It also seems to ignore the (in my mind critical) difference between "we know this is evenly distributed in the range 0-1" and "we have no idea what the distribution of this is over the space 0-1." But I may be asking for too much in a complexity metric.
↑ comment by ryan_b · 2018-12-28T22:04:30.736Z
My default assumption is that the metrics themselves are useless for AI purposes, but I think the intuitions behind their development might be fruitful.
I also observe that the software component of this process is stuff like complicated avionics software, which is accustomed to being tested under adversarial conditions. It seems likely to me that if a dangerous AI were to be built using modern techniques like machine learning, it would probably be assembled in a process broadly similar to this.