AI Risk & Opportunity: Strategic Analysis Via Probability Tree
post by lukeprog
Part of the series AI Risk and Opportunity: A Strategic Analysis.
(You can leave anonymous feedback on posts in this series here. I alone will read the comments, and may use them to improve past and forthcoming posts in this series.)
There are many approaches to strategic analysis (Bishop et al. 2007). Though a morphological analysis (Ritchey 2006) could model our situation in more detail, the present analysis uses a simple probability tree (Harshbarger & Reynolds 2008, sec. 7.4) to model potential events and interventions.
A very simple tree
In our initial attempt, the first disjunction concerns which of several (mutually exclusive and exhaustive) transformative events comes first:
- "FAI" = Friendly AI.
- "uFAI" = UnFriendly AI, not including uFAI developed with insights from WBE.
- "WBE" = Whole brain emulation.
- "Doom" = Human extinction, including simulation shutdown and extinction due to uFAI striking us from beyond our solar system.
- "Other" = None of the above four events occur in our solar system, perhaps due to stable global totalitarianism or for unforeseen reasons.
Our probability tree begins simply:
Each circle is a chance node, which represents a random variable. The leftmost chance node above represents the variable of whether FAI, uFAI, WBE, Doom, or Other will come first. The rightmost chance nodes are open to further disjunctions: the random variables they represent will be revealed as we continue to develop the probability tree.
Each left-facing triangle is a terminal node, which for us serves the same function as a utility node in a Bayesian decision network. The only utility node in the tree above assigns a utility of 0 (bad!) to the Doom outcome.
Each branch in the tree is assigned a probability. For the purposes of illustration, the above tree assigns .01 probability to FAI coming first, .52 probability to uFAI coming first, .07 probability to WBE coming first, .35 to Doom coming first, and .05 to Other coming first.
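As a toy illustration, the one-level tree so far can be written down and evaluated in a few lines. The branch probabilities are the illustrative ones above; every utility except Doom's 0 is a made-up placeholder, since the post only fixes the Doom terminal:

```python
# One-level probability tree as a dict of branches. Probabilities are the
# illustrative ones from the post; non-Doom utilities are placeholders.
tree = {
    "FAI":   {"p": 0.01, "utility": 100},  # placeholder utility
    "uFAI":  {"p": 0.52, "utility": 1},    # placeholder utility
    "WBE":   {"p": 0.07, "utility": 50},   # placeholder utility
    "Doom":  {"p": 0.35, "utility": 0},    # from the post: 0 (bad!)
    "Other": {"p": 0.05, "utility": 20},   # placeholder utility
}

# The branches are mutually exclusive and exhaustive, so they must sum to 1.
assert abs(sum(b["p"] for b in tree.values()) - 1.0) < 1e-9

# Expected utility of the tree: probability-weighted sum over terminals.
expected_utility = sum(b["p"] * b["utility"] for b in tree.values())
```

With these placeholder utilities, almost all of the expected utility comes from the small-probability FAI and WBE branches, which is the kind of observation such a model is meant to surface.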
How the tree could be expanded
The simple tree above could be expanded "downstream" by adding additional branches:
We could also make the probability tree more actionable by trying to estimate the probability of desirable and undesirable outcomes given that certain shorter-term goals are met. In the example below, "private push" means that a non-state actor passionate about safety invests $30 billion or more into developing WBE technology within 30 years from today. Perhaps there's a small chance this safety-conscious actor could get to WBE before state actors, upload FAI researchers, and have them figure out FAI before uFAI is created.
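A sketch of how the "private push" split would be marginalized out. All numbers here are hypothetical, including the probability of the push itself; note that each conditional distribution sums to 1 on its own:

```python
p_push = 0.1  # hypothetical probability of a $30B "private push"

# Hypothetical conditional outcome distributions; each sums to 1.
p_given_push    = {"FAI": 0.10, "uFAI": 0.45, "WBE": 0.15, "Doom": 0.25, "Other": 0.05}
p_given_no_push = {"FAI": 0.01, "uFAI": 0.53, "WBE": 0.06, "Doom": 0.35, "Other": 0.05}

# Law of total probability:
# P(outcome) = P(push) P(outcome|push) + P(no push) P(outcome|no push)
p_outcome = {
    k: p_push * p_given_push[k] + (1 - p_push) * p_given_no_push[k]
    for k in p_given_push
}
```

Comparing `p_given_push` against `p_given_no_push` is then a direct way to express the claim that the push changes the odds.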
We could also expand the tree "upstream" by making the first disjunction be not concerned with our five options for what comes first but instead with a series of disjunctions that feed into which option will come first.
We could add hundreds or thousands of nodes to our probability tree, then use software to test how much the outcomes change when particular inputs are changed, and learn which things we can do now to most increase our chances of a desirable outcome, given our current model.
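A minimal sketch of that kind of sensitivity test on the one-level tree, reusing the illustrative branch probabilities (utilities other than Doom's 0 are placeholders): sweep one input, renormalize the remaining branches proportionally so the distribution still sums to 1, and watch the expected utility move:

```python
def expected_utility(probs, utils):
    """Expected utility of a one-level tree: probability-weighted sum."""
    return sum(probs[k] * utils[k] for k in probs)

base  = {"FAI": 0.01, "uFAI": 0.52, "WBE": 0.07, "Doom": 0.35, "Other": 0.05}
utils = {"FAI": 100, "uFAI": 1, "WBE": 50, "Doom": 0, "Other": 20}  # placeholders

# Sweep P(FAI comes first); rescale the other branches proportionally.
results = []
for p_fai in (0.01, 0.05, 0.10):
    scale = (1 - p_fai) / (1 - base["FAI"])
    probs = {k: (p_fai if k == "FAI" else p * scale) for k, p in base.items()}
    results.append((p_fai, expected_utility(probs, utils)))
```

With these placeholder utilities the expected utility rises steeply as P(FAI first) rises, which is the sort of "which input matters most" answer the sweep is meant to produce.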
We would also need to decide which "endgame scenarios" we want to include as possible terminals, and the utility of each. These choices may be complicated by our beliefs about multiverses and simulations.
However, decision trees become enormously large and complex very quickly as you add more variables. If we had the resources for a more complicated model, we'd probably want to use influence diagrams instead (Howard & Matheson 2005), e.g. one built in Analytica, like the ICAM climate change model. Of course, one must always worry that one's model is internally consistent but disconnected from the real world (Kay 2012).
Comments sorted by top scores.
comment by othercriteria ·
2012-04-07T12:15:41.288Z
I'm not well-versed in these probability trees but something weird seems to be happening in the third one. The conditional probabilities under "private push" and those under no "private push" should each separately sum to 1. Changing them to joint probabilities would resolve the problem. Alternatively, you could put a probability on "private push" and then put FAI, uFAI, and Other on each branch.
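For concreteness, here is the consistency check being described, with hypothetical numbers: each conditional distribution must sum to 1 on its own, whereas joint probabilities must sum to 1 across both halves of the split taken together:

```python
def sums_to_one(dist, tol=1e-9):
    """Check that a probability distribution is normalized."""
    return abs(sum(dist.values()) - 1.0) < tol

# Conditional form: each branch's distribution is normalized separately.
cond_push    = {"FAI": 0.10, "uFAI": 0.65, "Other": 0.25}  # hypothetical
cond_no_push = {"FAI": 0.01, "uFAI": 0.65, "Other": 0.34}  # hypothetical

# Joint form: P(push) = 0.1 folded in, so ALL entries together sum to 1.
joint = {("push", k): 0.1 * v for k, v in cond_push.items()}
joint.update({("no push", k): 0.9 * v for k, v in cond_no_push.items()})
```

A tree whose "push" and "no push" branches each fail `sums_to_one` is, as the comment notes, mixing up these two conventions.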
comment by Bugmaster ·
2012-04-07T17:38:20.999Z
This is kind of a weird article... It explains how to use decision trees, but then it just stops, without telling me what to expect, why I should care, or, assuming I did care, how to assign probabilities to the nodes. So, the only feeling I'm left with at the end is, "all righty then, time for tea".
In addition, instead of saying "X | private push" and "X | no private push", it might be clearer to add the nodes "private push" and "no private push" explicitly, and then connect them to "FAI", "uFAI", etc. An even better transformation would be to convert the tree into a graph; this way, you won't need to duplicate the terminal nodes all the time.
↑ comment by othercriteria ·
2012-04-07T18:38:38.174Z
Moving to a graph makes elicitation of the parameters a lot more difficult (to the extent that you have to start specifying clique potentials instead of conditional probabilities). Global tasks like marginalization or conditioning also become a lot harder.
↑ comment by Bugmaster ·
2012-04-07T19:05:26.132Z
> Moving to a graph makes elicitation of the parameters a lot more difficult (to the extent that you have to start specifying clique potentials instead of conditional probabilities).
I think you can still get away with using conditional probabilities if you make the graph directed and acyclic, as I should've specified (my bad). The graph is still more complex than the tree, as you said, but if we're using software for the tree, we might as well use software for the graph...
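A sketch of what this looks like: a directed acyclic graph where each node stores a conditional probability table (CPT) over its parents, so the joint probability is just a product of node-wise conditionals and no clique potentials are needed. Node names and all numbers below are hypothetical:

```python
# CPTs keyed by node, then by tuple of parent values. Hypothetical numbers.
cpts = {
    "push": {(): {"yes": 0.1, "no": 0.9}},
    "outcome": {
        ("yes",): {"FAI": 0.10, "uFAI": 0.65, "Other": 0.25},
        ("no",):  {"FAI": 0.01, "uFAI": 0.65, "Other": 0.34},
    },
}
parents = {"push": (), "outcome": ("push",)}

def joint(assignment):
    """P(full assignment) as the product of each node's conditional."""
    p = 1.0
    for node, cpt in cpts.items():
        parent_vals = tuple(assignment[q] for q in parents[node])
        p *= cpt[parent_vals][assignment[node]]
    return p
```

Because the graph is acyclic, each node's CPT is elicited independently, which is exactly the property that makes this easier to parameterize than a general graph.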
comment by XiXiDu ·
2012-04-07T10:09:47.943Z
> But this is much too large a project for me to undertake now.
Too bad. I was excited about this post, took it as a good sign that you went down this path, and thought it would be highly promising to pursue further.
↑ comment by lukeprog ·
2012-04-07T11:09:13.270Z
Another worry is that putting so many made-up probabilities into a probability tree like this is not actually that helpful. I'm not sure if that's true, but I'm worried about it.
↑ comment by Dmytry ·
2012-04-07T17:26:08.180Z
I'm rather pessimistic about that. I think the tree branches into a very large number of possibilities; one can sort the possibilities into categories, but has no way of finding the total probabilities within those categories.
Furthermore, the categories themselves do not correspond to technological effort. There are FAIs that result from a regular AI effort via some rather simple insight by the scientist who came up with the AI, or via insights that may only be visible up close, when one is going over an AI and figuring out why the previous version repeatedly killed half of itself instead of self-improving, or other cases whose probability we can't guess at without knowing how the AI is implemented and what sort of great filters it has to pass before it fooms. And there are uFAIs that result from the FAI effort; those are uFAIs of an entirely different kind, with their own entirely different probabilities that can't be guessed at.
Intuitions are often very wrong, for example intuitions about what 'random' designs do: firstly, our designs are not random, and secondly, random code predominantly crashes, and the non-crashing space is utterly dominated by one or two of the simplest non-crash behaviours. The same may be true of goal systems that pass the filters of not crashing and of recursively self-improving over an enormous range. The filters are specific to the AI architecture.
The existence of filters, in my opinion, entirely thwarts any generic intuitions and generic arguments. The unknown, highly complex filters are an enormous sea between the logic and the probability estimates in the land of inferences. The illogic, sadly, does not need to cross the sea, and rapidly suggests numbers that are not in any way linked to the relevant issues.
comment by keefe ·
2012-04-11T04:11:11.077Z
I think you are on a solid research path here. I think you have reached the bounds of business-oriented software, and it's time to look into something like Apache Mahout or RDF. Decision tree implementations are available all over; just find a data structure, share the trees, and run inference engines like OWLIM or Pellet to see what you can see.
RDF is a good interim solution because you can start encoding things as structured data. I have some JSON->RDF stuff for inference if you get to that point.
Here is one way to represent these graphs as RDF.
Each edge becomes an edge to a blank node; that blank node carries the label and arrival probability, and could link to supporting evidence. Representing weighted graphs in RDF is fairly well studied.
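For illustration, this encoding can be mimicked with plain (subject, predicate, object) tuples rather than an actual RDF library; the namespace and predicate names below are hypothetical, not from any published vocabulary:

```python
EX = "http://example.org/tree#"  # hypothetical namespace

def edge_triples(parent, child, label, prob, bnode_id):
    """Reify one weighted tree edge as triples through a blank node."""
    b = f"_:{bnode_id}"  # blank node standing in for the edge itself
    return [
        (EX + parent, EX + "hasEdge", b),
        (b, EX + "label", label),
        (b, EX + "arrivalProbability", prob),
        (b, EX + "target", EX + child),
    ]

# One edge of the post's tree: root --0.52--> uFAI.
graph = edge_triples("root", "uFAI", "uFAI comes first", 0.52, "e1")
```

The same triples could be emitted through a real RDF library's blank-node and literal types, at which point evidence links are just more triples hanging off the edge node.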
The question is: what is your end goal for this, from a computational-artifact point of view?
comment by lessdazed ·
2012-06-07T17:38:57.909Z
Teaching tree thinking through touch.
These experiments were done with video game trees showing evolutionary divergence, and this method of teaching outperformed traditional paper exercises. Perhaps a simple computer program would make teaching probability trees easier, or the principles behind the experiments could be applied in another way to teach how to use these trees.