How to treat problems of unknown difficulty

post by owencb · 2014-07-30T11:27:33.014Z · LW · GW · Legacy · 10 comments

Contents

  Introduction
    Definition
  What kind of problems are we looking at?
  Model from extreme ignorance
  Reasoning about broader classes of model
    Approximating as locally log-uniform
    Places this might fail
  Implications for expected returns
    Marginal returns
    Comparison with returns from a Pareto distribution
  Further work

Crossposted from the Global Priorities Project

This is the first in a series of posts which take aim at the question: how should we prioritise work on problems where we have very little idea of our chances of success? In this post we’ll see some simple models-from-ignorance which allow us to produce some estimates of the chances of success from extra work. In later posts we’ll examine the counterfactuals to estimate the value of the work. For those who prefer a different medium, I gave a talk on this topic at the Good Done Right conference in Oxford this July.

Introduction

How hard is it to build an economically efficient fusion reactor? How hard is it to prove or disprove the Goldbach conjecture? How hard is it to produce a machine superintelligence? How hard is it to write down a concrete description of our values?

These are all hard problems, but we don’t even have a good idea of just how hard they are, even to an order of magnitude. This is in contrast to a problem like giving a laptop to every child, where we know that it’s hard but we could produce a fairly good estimate of how many resources it would take.

Since we need to make choices about how to prioritise between work on different problems, this is clearly an important issue. We can prioritise using benefit-cost analysis, choosing the projects with the highest ratio of future benefits to present costs. When we don’t know how hard a problem is, though, our ignorance makes the size of the costs unclear, and so the analysis is harder to perform. Since we make decisions anyway, we are implicitly making some judgements about when work on these projects is worthwhile, but we may be making mistakes.

In this article, we’ll explore practical epistemology for dealing with these problems of unknown difficulty.

Definition

We will use a simplifying model for problems: that they have a critical threshold D such that the problem will be completely solved when D resources are expended, and not at all before that. We refer to this as the difficulty of the problem. After the fact the graph of success with resources will look something like this:

Of course the assumption is that we don’t know D. So our uncertainty about where the threshold is will smooth out the curve in expectation. Our expectation beforehand for success with resources will end up looking something like this:

Assuming a fixed difficulty is a simplification, since of course resources are not all homogeneous, and we may get lucky or unlucky. I believe that this is a reasonable simplification, and that taking these considerations into account would not change our expectations by much, but I plan to explore this more carefully in a future post.
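
As a minimal sketch of the threshold model (in Python; the log-normal prior over D is purely an illustrative assumption, since the post does not specify a prior), the expected success curve is just the prior CDF of D:

```python
import numpy as np
from scipy.stats import lognorm

# Threshold model: the problem is solved exactly when resources >= D.
def solved(resources, D):
    return resources >= D

# With D unknown, the expected chance of success is the prior CDF of D.
# Illustrative prior: log-normal, i.e. broad uncertainty over orders of magnitude.
prior = lognorm(s=2.0, scale=1000.0)  # hypothetical parameters

for r in np.logspace(0, 6, 7):  # 1, 10, ..., 1e6 units of resources
    print(f"resources = {r:>9.0f}: P(solved) = {prior.cdf(r):.2f}")
```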

What kind of problems are we looking at?

We’re interested in one-off problems where we have a lot of uncertainty about the difficulty. That is, the kind of problem we only need to solve once (answering a question for the first time can be Herculean; answering it a second time is trivial), and which may not easily be placed in a reference class with other tasks of similar difficulty. Knowledge problems, as in research, are a central example: they boil down to finding the answer to a question. The category might also include trying to effect some systemic change (for example by political lobbying).

This is in contrast to engineering problems, which can be reduced, roughly, to performing a known task many times. Then we get a fairly good picture of how the problem scales. Note that this includes some knowledge work: the “known task” may actually be different each time. For example, proofreading two pages of text is not quite the same task, but we have a good enough reference class that we can estimate moderately well the difficulty of proofreading a page of text, and quite well the difficulty of proofreading a 100,000-word book (where the length helps to smooth out the variance in estimates of individual pages).

Some knowledge questions can naturally be broken up into smaller sub-questions. However, these typically won’t form a tight enough class that we can use them to estimate the difficulty of the overall problem from the difficulty of the first few sub-questions. It may well be that one of the sub-questions carries essentially all of the difficulty, so making progress on the others is only a very small help.

Model from extreme ignorance

One approach to estimating the difficulty of a problem is to assume that we understand essentially nothing about it. If we are completely ignorant, we have no information about the scale of the difficulty, so we want a scale-free prior. This determines that the prior obeys a power law. Then, we update on the amount of resources we have already expended on the problem without success. Our posterior probability distribution for how many resources are required to solve the problem will then be a Pareto distribution. (Fallenstein and Mennen proposed this model for the difficulty of the problem of making a general-purpose artificial intelligence.)

There is still a question about the shape parameter of the Pareto distribution, which governs how thick the tail is. It is hard to see how to infer this from a priori reasons, but we might hope to estimate it by generalising from a very broad class of problems people have successfully solved in the past.
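
As a sketch of the resulting update rule (the shape parameter and numbers here are illustrative assumptions, not values from the post): if D is Pareto-distributed and we have spent z resources without success, the chance of success by a total of t resources is 1 - (z/t)^α.

```python
def chance_of_success_by(t, z, alpha):
    """Chance the problem is solved by t total resources, given that
    z resources have already been spent without success (Pareto model):
    P(D <= t | D > z) = 1 - (z / t) ** alpha."""
    assert t >= z > 0
    return 1.0 - (z / t) ** alpha

alpha = 0.5   # illustrative shape parameter; smaller alpha = thicker tail
z = 1_000     # resources already spent without success
for t in [2_000, 10_000, 100_000]:
    print(f"P(solved by {t:>7,}) = {chance_of_success_by(t, z, alpha):.2f}")
```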

This idealised case is a good starting point, but in actual cases our estimate may be narrower or broader than this. It may be narrower if we have some idea of a reasonable (if very approximate) reference class for the problem, or some idea of the rate of progress made towards the solution. For example, a Pareto distribution implies that there’s always a nontrivial chance of solving the problem in the next minute, and we may be confident that we are not that close to solving it. It may be broader because a Pareto distribution implies that the problem is certainly solvable, whereas some problems will turn out to be impossible.

This might lead people to criticise the idea of using a Pareto distribution. If they have enough extra information that their beliefs are not well represented by a Pareto distribution, can we still say anything sensible?

Reasoning about broader classes of model

In the previous section, we looked at a very specific and explicit model. Now we take a step back. We assume that people will have complicated enough priors and enough minor sources of evidence that it will in practice be impossible to write down a true distribution for their beliefs. Instead we will reason about some properties that this true distribution should have.

The cases we are interested in are cases where we do not have a good idea of the order of magnitude of the difficulty of a task. This is an imprecise condition, but we might think of it as meaning something like:

There is no difficulty X such that we believe the probability of D lying between X and 10X is more than 30%.

Here the “30%” figure can be adjusted up for a less stringent requirement of uncertainty, or down for a more stringent one.
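
This condition is easy to check numerically for any candidate distribution over D. Here is a sketch (the log-normal parameters are illustrative assumptions):

```python
import numpy as np
from scipy.stats import lognorm

def order_of_magnitude_uncertain(dist, threshold=0.3):
    """True if there is no X with P(X <= D <= 10X) above the threshold."""
    X = np.logspace(-3, 9, 2000)  # candidate values of X
    mass_in_decade = dist.cdf(10 * X) - dist.cdf(X)
    return mass_in_decade.max() <= threshold

# Very spread out on the log scale: satisfies the condition.
print(order_of_magnitude_uncertain(lognorm(s=4.0, scale=1000.0)))  # True
# Tighter: some decade holds most of the mass, so the condition fails.
print(order_of_magnitude_uncertain(lognorm(s=1.0, scale=1000.0)))  # False
```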

Now consider what our subjective probability distribution might look like, where difficulty lies on a logarithmic scale. Our high level of uncertainty will smooth things out, so it is likely to be a reasonably smooth curve. Unless we have specific distinct ideas for how the task is likely to be completed, this curve will probably be unimodal. Finally, since we are unsure even of the order of magnitude, the curve cannot be too tight on the log scale.

Note that this should be our prior subjective probability distribution: we are gauging how hard we would have thought it was before embarking on the project. We’ll discuss below how to update this in the light of information gained by working on it.

The distribution might look something like this:

In some cases it is probably worth trying to construct an explicit approximation of this curve. However, this could be quite labour-intensive, and we usually have uncertainty even about our uncertainty, so we will not be entirely confident in what we end up with.

Instead, we could ask what properties tend to hold for this kind of probability distribution. For example, one well-known phenomenon which is roughly true of these distributions but not all probability distributions is Benford’s law.

Approximating as locally log-uniform

It would sometimes be useful to have a simple, analytically tractable approximation to the curve. This could be faster to produce, and more easily used in a wider range of further analyses, than an explicit attempt to model the curve exactly.

As a candidate for this role, we propose working with the assumption that the distribution is locally flat. This corresponds to being log-uniform. The smoothness assumptions we made should mean that our curve is nowhere too far from flat. Moreover, it is a very easy assumption to work with, since it means that the expected returns scale logarithmically with the resources put in: in expectation, a doubling of the resources is equally good regardless of the starting point.

It is, unfortunately, never exactly true. Although our curves may be approximately flat, they cannot be everywhere flat: a distribution that is flat across the entire (infinite) log scale cannot integrate to 1, so it isn’t even a probability distribution. But it may work reasonably as a model of local behaviour. If we want to turn it into a probability distribution, we can do this by estimating the plausible range of D and assuming the distribution is uniform (on the log scale) across this range. In our example we would be approximating the blue curve by something like this red box:

Obviously in the example the red box is not a fantastic approximation. But nor is it a terrible one: over the central range, it is never off from the true value by much more than a factor of 2. While crude, this could still represent a substantial improvement on the current state of some of our estimates. A big advantage is that it is analytically tractable, so it will be quick to work with. In the rest of this post we’ll explore the consequences of this assumption.

Places this might fail

In some circumstances, we might expect high uncertainty over difficulty without everywhere having local log-returns. A key example is if we have bounds on the difficulty at one or both ends.

For example, if we are interested in X, which comprises a task of radically unknown difficulty plus a repetitive and predictable part of difficulty 1000, then our distribution of beliefs about the difficulty of X will only include values above 1000, and may be quite clustered there (so we will not have even approximately logarithmic returns). The behaviour in the positive tail might still be roughly logarithmic.

In the other direction, we may know that there is a slow and repetitive way to achieve X, with difficulty 100,000. We are unsure whether there could be a quicker way. In this case our distribution will be uncertain over difficulties up to around 100,000, then have a spike. This will give the reverse behaviour, with roughly logarithmic expected returns in the negative tail, and a different behaviour around the spike at the upper end of the distribution.

In some sense each of these is diverging from the idea that we are very ignorant about the difficulty of the problem, but it may be useful to see how the conclusions vary with the assumptions.

Implications for expected returns

What does this model tell us about the expected returns from putting resources into trying to solve the problem?

Under the assumption that the prior is locally log-uniform, the full probability mass is spread over the width of the box in the diagram. This width is w = log(y) - log(x), where x is the value at the start of the box (where the problem could first plausibly be solved), y is the value at the end of the box, and our logarithms are natural. Since it’s a probability distribution, the height of the box is 1/w.

For any z between x and y, the modelled chance of success from investing z resources is equal to the fraction of the box which has been covered by that point. That is:

(1) Chance of success before reaching z resources = log(z/x)/log(y/x).

So while we are in the relevant range, the chance of success is equal for any doubling of the total resources. We could say that we expect logarithmic returns on investing resources.
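
A sketch of Equation (1) in code, with an illustrative plausible range for D (the numbers are assumptions for the example):

```python
import math

def chance_of_success(z, x, y):
    """Equation (1): chance of solving the problem with z resources,
    under the log-uniform box model over the plausible range [x, y]."""
    if z <= x:
        return 0.0
    if z >= y:
        return 1.0
    return math.log(z / x) / math.log(y / x)

x, y = 1e3, 1e9  # illustrative: D plausibly between a thousand and a billion units
for z in [1e4, 1e5, 1e6, 1e7, 1e8]:
    print(f"z = {z:.0e}: P(success) = {chance_of_success(z, x, y):.3f}")
# Each factor-of-10 increase in z adds the same 1/6 to the chance of success.
```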

Marginal returns

Sometimes of greater relevance to our decisions is the marginal chance of success from adding an extra unit of resources at z. This is given by the derivative of Equation (1):

(2) Chance of success from a marginal unit of resources at z = 1/(zw).

So far, we’ve just been looking at estimating the prior probabilities -- before we start work on the problem. Of course when we start work we generally get more information. In particular, if we would have been able to recognise success, and we have invested z resources without observing success, then we learn that the difficulty is at least z. We must update our probability distribution to account for this. In some cases we will have relatively little information beyond the fact that we haven’t succeeded yet. In that case the update will just be to curtail the distribution to the left of z and renormalise, looking roughly like this:

Again the blue curve represents our true subjective probability distribution, and the red box represents a simple model approximating it. Now the simple model gives a slightly higher estimated chance of success from an extra marginal unit of resources:

(3) Chance of success from an extra unit of resources after z = 1/(z(ln(y) - ln(z))).
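
A sketch of Equations (2) and (3) together (same illustrative range as before):

```python
import math

def marginal_prior(z, x, y):
    """Equation (2): marginal chance of success per unit of resources at z,
    before updating, where w = ln(y) - ln(x)."""
    w = math.log(y / x)
    return 1.0 / (z * w)

def marginal_after_update(z, y):
    """Equation (3): marginal chance per unit of resources after z has been
    spent without success (box curtailed to [z, y] and renormalised)."""
    return 1.0 / (z * (math.log(y) - math.log(z)))

x, y = 1e3, 1e9  # illustrative plausible range for D
z = 1e6          # resources spent so far without success
print(marginal_prior(z, x, y))       # ~7.2e-08 per unit
print(marginal_after_update(z, y))   # ~1.4e-07 per unit: higher, as expected
```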

Of course in practice we will often update more. Even if we don’t have a good idea of how hard fusion is, we can reasonably assign close to zero probability that an extra $100 will solve the problem today, because we can see enough to know that the solution won’t be found imminently. This looks like it might present problems for this approach. However, the truly decision-relevant question is about the counterfactual impact of extra resource investment. The region where we can see little chance of success has a much smaller effect on that calculation, which we discuss below.

Comparison with returns from a Pareto distribution

We mentioned that one natural model of such a process is as a Pareto distribution. If we have a Pareto distribution with shape parameter α, and we have so far invested z resources without success, then we get:

(4) Chance of success from an extra unit of resources = α/z.

This is broadly in line with equation (3). In both cases the key term is a factor of 1/z. In each case there is also an additional factor, representing roughly how hard the problem is. In the case of the log-linear box, this depends on estimating an upper bound for the difficulty of the problem; in the case of the Pareto distribution it is handled by the shape parameter. It may be easier to introspect and extract a sensible estimate for the width of the box than for the shape parameter, since it is couched more in terms that we naturally understand.
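
To see the correspondence concretely, here is a sketch comparing the two marginal-returns formulas (the values are illustrative): the box model after an update matches a Pareto model whose shape parameter is α = 1/ln(y/z).

```python
import math

def marginal_box(z, y):
    """Equation (3): log-uniform box model, after z spent without success."""
    return 1.0 / (z * math.log(y / z))

def marginal_pareto(z, alpha):
    """Equation (4): Pareto model, after z spent without success."""
    return alpha / z

z = 1e6                        # resources spent so far
y = 1e9                        # illustrative upper end of the box
alpha = 1.0 / math.log(y / z)  # shape parameter that makes the models agree
print(marginal_box(z, y))         # ~1.45e-07
print(marginal_pareto(z, alpha))  # ~1.45e-07 -- identical by construction
```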

Further work

In this post, we’ve just explored a simple model for the basic question of how likely success is at various stages. Of course it should not be used blindly, as you may often have more information than is incorporated into the model, but it represents a starting point if you don't know where to begin, and it gives us something explicit which we can discuss, critique, and refine.

In future posts, I plan to:

10 comments

Comments sorted by top scores.

comment by Decius · 2014-07-31T19:26:45.610Z · LW(p) · GW(p)

How long do you expect to practice before you get /Wingardium Leviosa/ to work?

Replies from: owencb
comment by owencb · 2014-07-31T19:32:16.719Z · LW(p) · GW(p)

Great question. Yes, I should have mentioned this: there's an assumption in there that what you're trying to do is possible.

If you believe that it may be outright impossible, then you might use something like this model for estimating the chances of success /conditional on it being possible/. So, not great odds on /Wingardium Leviosa/.

Replies from: Decius
comment by Decius · 2014-07-31T20:30:51.843Z · LW(p) · GW(p)

The math gets beyond my ability to grok when updating not just the amount of effort required if it is possible, but also updating belief about how likely it is to be possible.

Replies from: owencb
comment by owencb · 2014-07-31T21:38:13.308Z · LW(p) · GW(p)

It works out fairly simply with the box model; if there's a particular question you have in mind I could do a worked example.

From a practical point of view, I think you hardly need to deal with this updating, though. I'd be inclined to produce your best estimate of impossibility given present information, then you can just use that for making decisions today. If there's probability p that it's possible, then the value of working on it is exactly p times whatever the value would be if you were sure it was possible.

Replies from: Decius
comment by Decius · 2014-09-17T04:26:36.115Z · LW(p) · GW(p)

Sorry for digging this back out:

Consider the situation: There is Prize P for submitting a design for a bridge that meets certain criteria (location, budget, etc.)

An engineer looks at the situation, and says "The expected time T to find a breakthrough for designing such a bridge, conditional on it being possible, is worth ten times the prize P. I estimate a 25% chance that it is possible given the harsh constraints. However, if I have worked on it for time A without success, I will update to a less than 10% chance that it is possible given the constraints."

I think there /might be/ a reason to work for slightly more or less than A before aborting, but I think the calculus makes the proper undershoot approach zero; the proper value of trying includes the cost of time that it would take you to determine that it isn't likely enough to be possible, times the chance that you would spend that time before aborting.

Which ends up turning the decision into a straight-up Value of Information/Cost of Information calculation, which implies that people should spend lots of effort ($10k equivalent) pursuing goals with pP (-log(P)) in the 2-4 range if those goals have payoffs in the $1m-$1b range.

Once I have time to integrate the conclusion here into the portion of my psyche that thinks it insane to chase a payoff with less than a single-digit chance of the jackpot, I suspect that I might end up owning a business.

Replies from: owencb
comment by owencb · 2014-09-18T11:54:33.913Z · LW(p) · GW(p)

The end of your example gets hard to follow. In particular I don't know what this means:

goals with pP (-log(P)) in the 2-4 range

On the other hand it sounds like you're reaching a qualitatively correct solution.

Replies from: Decius
comment by Decius · 2014-09-18T22:51:54.185Z · LW(p) · GW(p)

pP is the negative logarithm of P, the probability of the 'jackpot payoff'.

If money were of linear value, I would be ambivalent about investing $10k in a startup that had a .01 chance of making a million dollars; for P=.01, pP=2.

comment by Stuart_Armstrong · 2014-08-05T18:31:45.787Z · LW(p) · GW(p)

Thanks, this is most useful.

Someone (Tegmark?) used this kind of approach to show the Fermi paradox isn't so surprising. If you take a uniform log prior over intelligent civilization density, you quickly remove the end where there would be multiple species in the solar system, and then it's not much of a space until you're below 1 civ per Hubble volume...

comment by leplen · 2014-07-30T23:39:39.262Z · LW(p) · GW(p)

This seems potentially very interesting, but I'm not totally sure I see where it's going?

What are the fitting parameters of this model? The mean of D? If I want to compare investing in FAI to fusion, how does this improve my previous "X seems on average easier" heuristic? Are there any real life examples that we could apply this model to?

Random asides:

I'm not totally sold on the reasoning for choosing the Pareto distribution. Why would I expect problem difficulty to follow a Pareto distribution? Is there any set of solved problems for which that has been true?

This seems vaguely reminiscent of the multi-armed bandit problem, although the payoffs in your formulation are entirely binary. Is a binary Success/Failure a realistic model of technical progress? Can the existing research on k-armed bandits be applied to this type of problem?

Replies from: owencb
comment by owencb · 2014-07-31T10:23:46.491Z · LW(p) · GW(p)

I'm going to reply briefly to some of these questions, but the subject of several of them should be more comprehensively covered in future posts.

Yes, I will be applying this model (& extensions) to some real examples, and also looking at some places where it predicts observed behaviour.

Yes, binary success/failure is not always a good model, and I'll explore this.

Fallenstein and Mennen give more in-depth justifications for why you might use a Pareto distribution in the linked article; I wasn't really defending the Pareto distribution, so perhaps I passed over this rather quickly.

Interesting idea on analogies with the multi-armed bandit problem. My impression is that it's not really applicable, since we're looking at problems which only pay out once (or at the very least have diminishing returns), but there might be something to say there. I'll think about it.