Prediction as coordination

post by jacobjacob · 2019-07-23T06:19:40.038Z · LW · GW · 4 comments

Contents

  The standard model of forecasting
  Example 1: coordination in mathematics via formalism
  Example 2: futures markets as using predictions for coordination
  Example 3: predicting community consensus 
  Example 4: avoiding info-cascades
  Example 5: building fire alarms
  Who is going to make the forecasts?
  Can't we just use blog posts?

I want to introduce a model of why forecasting might be useful which I think is underappreciated: it might help us solve coordination problems.

This is currently only a rough idea, and I will proceed by examples, pushing this post out early rather than not at all.


The standard model of forecasting

This looks something like:

We have our big, confusing, philosophical, long-term uncertainties. We then need to 1) find the right short-term questions which capture these uncertainties, 2) make those questions understandable to traditional Superforecasters, who lack very deep inside knowledge and whose expertise has only been demonstrated on short-term questions in better-understood domains, and 3) have those forecasters answer them using tools like outside views and guesstimates.

When I hear people say they're not excited about forecasting, it's almost always because they think this standard model won't work for AI safety. I'm very sympathetic to that view.


Example 1: coordination in mathematics via formalism

When quantifying our beliefs, we lose a large amount of nuance and interpretability. This is similar to how, when formalising things mathematically, we sacrifice much of the richness of human understanding.

What we gain, instead, is the ability to express and communicate thoughts...

This is a trade-off that allows a community of mathematicians to make intellectual progress together, and to effectively make results common knowledge [LW · GW] in a way which allows them to coordinate on what to solve next.


Example 2: futures markets as using predictions for coordination

Getting enough food for everyone is a big coordination problem. We want some people to stockpile things like rice and wheat so that we're prepared for a drought, but we also don't want people to waste resources storing stock that has to be thrown away if the next harvest goes well. These kinds of problems are solved by futures markets, which effectively predict the future price of rice, and thereby provide an incentive to arbitrage away abrupt price fluctuations (i.e. to strategically stockpile or sell off rice so as to match future supply and demand). Robert Shiller has suggested futures markets as a candidate for the greatest financial innovation.
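
To make the arbitrage mechanism concrete, here is a toy sketch of the cost-of-carry decision a stockpiler faces (all numbers are invented for illustration, and this is not a real market model): if the futures price exceeds the spot price plus storage costs, buying and storing rice is profitable, which moves supply from the present into the future and pulls the two prices together.

# Toy cost-of-carry arbitrage. Illustrative numbers only.

def carry_profit(spot, futures, storage_cost):
    """Profit per unit from buying rice now, storing it, and selling
    at the futures price when the contract matures."""
    return futures - (spot + storage_cost)

spot = 100.0     # price of rice today
futures = 115.0  # the market's prediction of the post-harvest price
storage = 5.0    # cost of warehousing one unit until then

profit = carry_profit(spot, futures, storage)
if profit > 0:
    # Stockpiling shifts supply into the future: spot prices rise now,
    # future prices fall, and the gap (and the profit) shrinks.
    print(f"Store rice: {profit:.2f} profit per unit")
else:
    # Selling out of storage does the opposite, smoothing prices
    # in the other direction.
    print(f"Sell now: storing would lose {-profit:.2f} per unit")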


Example 3: predicting community consensus

One particularly interesting use case is trying to predict what the x-risk community will believe at some time t in the future. Assuming the community is truth-seeking, anyone who spots the direction in which opinions will converge in advance of their convergence has 1) performed an important epistemic service, and 2) provided important evidence of their own epistemic trustworthiness.

For example, the CAIS model has attracted a fair amount of attention. (I personally don't have a strong inside view on it.) If someone had predicted this shift more than a year ago, we would want to trust them a bit more the next time they predicted a shift in community attention.

It was mentioned to me that one researcher considered this model important more than 1.5 years ago; but he thought so because of inside knowledge, not because of superior reasoning.

This is an inefficiency. The frontiers of our collective attention allocation do not line up with the frontiers of our intellectual progress, and hence shift abruptly as papers are released and the advantage of inside info is dispelled.

One implementation of this might look like sending a survey about important questions to important organisations at ~yearly intervals, and then having people try to predict the outcomes of that survey.

This has one important advantage over the standard uses of forecasting: we don't have to resolve the questions "all the way down". If we simply ask what people will think take-off speeds are likely to be, rather than what take-off speeds will actually be, and further assume that people move closer to the truth in expectation, we get a much cheaper signal to evaluate.
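
As a minimal sketch of how this cheaper resolution could work (the question, names, and numbers below are all hypothetical): forecasters predict next year's survey result, and are scored against the survey itself rather than against the hard-to-resolve underlying question.

# Hypothetical sketch: score forecasters against a yearly expert survey,
# not against the underlying long-term question.

def brier_score(forecast, outcome):
    """Standard Brier score for a probabilistic forecast; lower is better."""
    return (forecast - outcome) ** 2

# Invented question: "What fraction of surveyed researchers will assign
# >50% probability to fast take-off in next year's survey?"
forecasts = {"alice": 0.40, "bob": 0.65, "carol": 0.55}

survey_result = 0.58  # fraction observed once the survey actually runs

for name, p in sorted(forecasts.items(),
                      key=lambda kv: brier_score(kv[1], survey_result)):
    print(f"{name}: forecast={p:.2f}, Brier={brier_score(p, survey_result):.4f}")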


Example 4: avoiding info-cascades

Info-cascades occur when people update off of each other's beliefs without appropriately sharing the evidence for those beliefs, so that the same pieces of evidence end up "double-counted". Having better systems for tracking who believes what, and why, could help solve this, and prediction systems could be one way of doing so.
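
Here is a toy Bayesian illustration of the double-counting failure (the numbers are invented): two people observe the same piece of evidence with likelihood ratio 3:1. If the second person updates both on the evidence and on the first person's announced belief, which was produced by that very same evidence, it gets counted twice.

# Toy illustration of evidence double-counting in an info-cascade.

def update_odds(prior_odds, likelihood_ratio):
    """One Bayesian update, in odds form."""
    return prior_odds * likelihood_ratio

def to_prob(odds):
    return odds / (1 + odds)

prior = 1.0  # 1:1 odds, i.e. 50%
lr = 3.0     # a single piece of evidence with likelihood ratio 3:1

# Evidence counted once: 3:1 odds, i.e. 75%.
correct = update_odds(prior, lr)

# The second person treats the first person's belief as *independent*
# evidence, although it came from the same observation: 9:1, i.e. 90%.
cascaded = update_odds(update_odds(prior, lr), lr)

print(f"counted once:   {to_prob(correct):.0%}")
print(f"double-counted: {to_prob(cascaded):.0%}")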


Example 5: building fire alarms

Eliezer notes [LW · GW] that rather than being evidence of a fire, fire alarms make it common knowledge that it's socially acceptable to act as if there's a fire. They're the cue on which everyone jumps from one social equilibrium to another.

Eliezer claims there's no fire alarm for AGI in society more broadly. I suspect there are also areas within the x-risk space where we don't have fire alarms. Prediction systems are one way of building them.


Who is going to make the forecasts?

An important clarification: I'm not saying that we should "outsource" the intellectual work of solving hard x-risk research problems to forecasters without domain-expertise. (That is another interesting and controversial proposal one might discuss.)

Rather, I'm saying that we should use predictions as a vehicle to capture the changing beliefs of current domain-experts, and to allocate their attention going forwards (smoothing out attentional discontinuities in expectation).

I'm not saying we should replace Eric Drexler with a swarm of hobby forecasters. I'm saying that a few full-time x-risk researchers might realise before the rest of the community that Eric's work deserves marginally more attention, and be right about that, and that community-internal forecasting systems can allow us to more effectively use their insights.


Can't we just use blog posts?

Compared to a numerical prediction, a blog post...

Blog posts have the crucial property of being "essay complete" in their expressiveness, but that comes at the cost of idiosyncrasy and poor scalability.

A better model is probably to treat blog posts as part of the ground truth over which predictions operate, just as the rice and wheat markets provide the ground truth for their respective futures markets.

I'd rather have only blog posts than only prediction systems, but I'd rather have both than only blog posts.


4 comments


comment by Raemon · 2019-07-24T23:53:43.666Z · LW(p) · GW(p)

This feels adjacent to Critch's post on Coordination Surveys [LW · GW]; maybe it's worth thinking about the two concepts in conjunction.

A concern I had about the coordination surveys post is that it didn't do much to distinguish between surveys-as-information-gathering and surveys-as-coordination-tool (which felt superficially dark-artsy at first glance), but I think a survey that explicitly disclaimed that it was for coordination rather than information gathering seems fine.

comment by jacobjacob · 2019-07-25T02:44:18.423Z · LW(p) · GW(p)

Good pointer! The idea here would be to predict the outcomes of that survey, to solve the problem of coordinating people over time, in addition to the problem of coordinating people at one point in time.

Also, Critch pointed to this in the context of surveys of the broader AI research community (in which case it might appear more dark-artsy), whereas I was pointing to use within the narrower x-risk/AI safety communities.

comment by romeostevensit · 2019-07-24T22:54:58.357Z · LW(p) · GW(p)

Cruxing on implicit predictions as revealed by life strategies is a good trust building exercise. Including examining the discomfort that comes up.

comment by Matt Goldenberg (mr-hire) · 2019-07-24T20:36:34.121Z · LW(p) · GW(p)

Great post, I broadly agree that prediction is an underused tool for coordination.

Example 4: avoiding info-cascades

Info-cascades occur when people update off of each other's beliefs without appropriately sharing the evidence for those beliefs, so that the same pieces of evidence end up "double-counted". Having better systems for tracking who believes what, and why, could help solve this, and prediction systems could be one way of doing so.

One danger of using prediction as a coordination mechanism is that it can create feedback loops which act very similarly to info-cascades. For instance, let's say a group of experts says that a particular research direction is likely to yield insights. The community updates towards those avenues of research being more likely to pay off, and therefore allocates more grant funds to those areas. The increased funding then makes the experts more sure that that particular domain will yield insights, which causes more grant funds to be allocated.

That's bad, but it gets worse. Now that the experts know that the community updates on their predictions, they make their predictions even stronger next time. Even if in the counterfactual world where they didn't make their prediction, they'd be only 50% sure that there would be insights in the next 5 years from a particular world, in the world where they predict 50%, they think the actual chance will be something like 60%, so this sort of cascade, if they want to be accurate, causes them to make the prediction start at 75% due to their knowledge about the prediction's impact on the world.