Building a civilisation-scale OODA loop for the problem of AGI
post by whpearson · 2017-10-15T17:56:55.582Z · LW · GW · 2 comments
You can break our civilization's reaction to the problem of AGI down into a massive decentralized OODA loop. Each part of the loop is not one person but an aggregate of many people and organisations.
My current major worry is that we do not have a robust loop.
Observe: A few major observations inform our current work. AGI systems might be more powerful than humans, and able to make themselves more powerful still via recursive self-improvement (RSI). We can't control our current reinforcement learning systems.
Orient: This is the philosophy and AI strategy work of FHI and others.
Decide: This is primarily done inside the big foundations and, soon, governments.
Act: This is OpenAI's work on AI safety for RL, or the instituting of AI policy.
What I think we are not doing is investing much money in the observation phase. There is a fair amount of observation of RL work, but we do not have much observation going into how much more powerful AGI systems will be, and how much more powerful they can be made via RSI.
One example of an observation we could make would be to try to estimate how much speeding up human cognitive work would speed up science. We could look at science from a systemic perspective and see how long various steps take. The steps might be:
Gathering data
Analysing data
Thinking about the analysis
Each of these will have a human and a non-human component (the non-human part being either automated data collection or computational analysis of the data).
If we could get better observations of how much time each component takes, we could estimate how quickly things could be sped up.
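To make the shape of this estimate concrete, here is a minimal sketch in Python, in the spirit of Amdahl's law. The breakdown into steps follows the list above, but every fraction and speedup factor is a made-up placeholder, not an observation.

```python
# A minimal sketch of the estimate described above (Amdahl's-law style).
# All fractions and speedup factors are illustrative placeholders, not data.

def overall_speedup(steps):
    """steps: list of (fraction_of_total_time, speedup_factor) pairs.
    Fractions should sum to 1. Returns the overall speedup if each
    step is accelerated by its own factor."""
    remaining_time = sum(fraction / speedup for fraction, speedup in steps)
    return 1.0 / remaining_time

# Hypothetical breakdown of a scientific workflow:
#   gathering data:             60% of time, barely sped up by faster cognition (1.2x)
#   analysing data:             25% of time, largely automatable (10x)
#   thinking about the analysis: 15% of time, directly sped up (100x)
steps = [(0.60, 1.2), (0.25, 10.0), (0.15, 100.0)]
print(f"Overall speedup: {overall_speedup(steps):.2f}x")  # ~1.9x with these toy numbers
```

With these toy numbers the slow, hard-to-automate data-gathering step dominates and the overall speedup stays modest; better observations of the real fractions are exactly what would confirm or overturn a conclusion like that.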
Similar observations might be made for programming, especially programming of machine learning systems.
I will try to write a longer post at some point, fleshing things out more. But I would be interested in people's other ideas on how we could improve the OODA loop for AGI.
2 comments
comment by Gordon Seidoh Worley (gworley) · 2017-10-15T20:00:36.130Z · LW(p) · GW(p)
I like where you're going with this, but having worked a lot with OODA and other feedback mechanisms applied to organizations, I'd like to share something I've learned, because it reads to me a bit like you may be missing or not fully grasping it.
Again, this may not be how you are thinking, but it sounds to me a bit like you are treating OODA as something that can be reified rather than as purely a process. Certainly, like any process, it must be embodied in an organization, but there's a big difference between carrying out a process and having it exist. OODA loops are not created or set up so much as they emerge from individual actions guided by what OODA suggests you should do next. As an implementer I've made the mistake of thinking I could just "set up OODA" (or, in the organizational context, what I typically think of as kaizen in the sense used in the Toyota Production System) by telling people about it, scheduling meetings to carry out the steps, etc. Yes, you have to do those things, but it's far more important to be cranking the gears than it is to put the gears in place.
So again, I basically agree with you, but want to make clear, in case it isn't already, that very little of the challenge is noticing and wanting to see OODA used; most of it is actually doing the things to work through it.
↑ comment by whpearson · 2017-10-15T20:46:53.654Z · LW(p) · GW(p)
Agreed. I was thinking that an OODA loop (or something similar) is an interesting lens with which to view the process of trying to solve the problems of AGI. Mainly as a guide for what might be lacking and where you might put your efforts to get new information or improve the flows of current information.