Mathematical Models of Progress?

post by abramdemski · 2021-02-16T00:21:44.298Z · LW · GW · 1 comment

This is a question post.


I would be interested in collecting a bunch of examples of mathematical modeling of progress. I think there are probably several of these here, but I don't expect to be able to find all of them myself. I'm also interested to know about any models like this elsewhere.

I was reading the LessWrong 2018 books, and the following posts stuck out to me:

The Year The Singularity Was Cancelled talks about a model which predicted world population quite well, by supplementing a basic population equation with a simple mathematical model of technological progress. To summarize: population carrying capacity is assumed to increase due to technological progress. Technological progress is modeled as proportional to the population: a particular population $P$ leads carrying capacity $K$ to have a derivative $\dot{K} \propto PK$. (This reflects the idea that technological progress multiplies the carrying capacity; if it added to the carrying capacity, we might make the derivative $\dot{K} \propto P$ instead.) Population should typically remain close to the carrying capacity; so, we could simply assume that population equals carrying capacity. We then expect hyperbolic growth, i.e. something like $P(t) \approx \frac{C}{t_s - t}$; here, $t_s$ is the year of the (population) singularity. This model is a decent fit to the data until the year 1960, which is of course the subject of the post.
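The model just described is easy to check numerically. Below is a minimal sketch, assuming population equals carrying capacity $K$ so that $\dot{K} = aK^2$; the constant `a` and the initial value are illustrative, not fitted to population data.

```python
# Minimal sketch of the hyperbolic model: population = carrying capacity K,
# so dK/dt = a * K**2. Constants are illustrative, not fitted to data.

def simulate_hyperbolic(k0, a, dt, steps):
    """Euler-integrate dK/dt = a * K^2 and return the trajectory."""
    k = k0
    trajectory = [k]
    for _ in range(steps):
        k += a * k * k * dt
        trajectory.append(k)
    return trajectory

# The closed-form solution is K(t) = k0 / (1 - a*k0*t), i.e. C / (t_s - t)
# with the singularity at t_s = 1 / (a * k0).
traj = simulate_hyperbolic(k0=1.0, a=0.01, dt=0.1, steps=100)
exact = 1.0 / (1 - 0.01 * 1.0 * 10.0)  # closed form evaluated at t = 10
```

With these constants the singularity sits at $t_s = 100$, so the simulated window stays well short of blowup while still showing the accelerating (faster-than-exponential) growth.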

One of my thoughts after reading this was: wouldn't it make more sense to avoid the assumption that population equals carrying capacity? Population growth can't be greater than exponential, so the hyperbolic model can't be literally right, and the assumption that population equals carrying capacity appears to be the culprit.

It would make more sense to, instead, use more typical population models (which predict near-exponential growth when population is significantly below carrying capacity, tapering off near carrying capacity). I don't yet know if this has been done in the literature. However, it's commonly said that around the time of the industrial revolution, humankind escaped the Malthusian trap, because progress outpaced birthrates. (I know the demographic transition is a big player here, but let's ignore it for a moment.) If we were modeling this possibility, it makes sense that progress would stop accelerating so much around this point: once progress is increasing the carrying capacity faster than the population can catch up, we no longer expect to see population match carrying capacity.

This would imply that population transitions from hyperbolic growth to exponential growth, some time shortly before the singularity of the hyperbola. Which approximately matches what we observe: a year where the singularity was "cancelled".

However, in the context of AI progress in particular, this model seems naive. Human birthrates cannot keep pace with the resources progress provides. However, AI has no such limitation. Therefore, we might expect progress to look hyperbolic again at some point, when AI starts contributing significantly to progress. (Indeed, one might have expected this from computers alone, without saying the words "AI" -- computers allow "thinking power" to increase, without the population actually increasing.)

Some of the toy mathematical models Paul Christiano discusses in Takeoff Speeds might be used to add AI to the projection.

So, I'm interested in:

- Any other mathematical models of progress like the ones above, whether posted here or published elsewhere.
- Any attempts to extend such models to account for the contribution of AI (or computing generally) to progress.

Related Question: Any response to Paul Christiano on takeoff speeds? [LW · GW]

Answers

answer by Sammy Martin (SDM) · 2021-02-16T15:47:42.268Z · LW(p) · GW(p)

I made an attempt to model intelligence explosion dynamics in this post [LW · GW], by attempting to make the very oversimplified exponential-returns-to-exponentially-increasing-intelligence model used by Bostrom and Yudkowsky slightly less oversimplified.

This post tries to build on a simplified mathematical model of takeoff which was first put forward by Eliezer Yudkowsky and then refined by Bostrom in Superintelligence, modifying it to account for the different assumptions behind continuous, fast progress as opposed to discontinuous progress. As far as I can tell, few people have touched these sorts of simple models since the early 2010s, and no one has tried to formalize how newer notions of continuous takeoff fit into them. I find that it is surprisingly easy to accommodate continuous progress, and that the results are intuitive and fit with what has already been said qualitatively about continuous progress.

The page includes python code for the model.

This post doesn't capture all views of takeoff - in particular, it doesn't capture the non-hyperbolic faster-growth-mode scenario, where marginal intelligence improvements become exponentially more difficult, and we therefore get a (continuous or discontinuous) switch to a new exponential growth mode rather than runaway hyperbolic growth.

But I think that by modifying the f(I) function that determines how RSI capability varies with intelligence we can incorporate such views.

(In the context of the exponential model given in the post, that would correspond to an f(I) function such as

$$f(I) = c_1 + \frac{c_2 - c_1}{1 + e^{-d(I - I_{\text{AGI}})}}$$

which would result in a continuous (sharpness determined by the size of d) switch to a single faster exponential growth mode.)

But I think the model still roughly captures the intuition behind scenarios that involve either a continuous or a discontinuous step to an intelligence explosion.
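As an illustration, one possible f(I) with this saturating property is a sigmoid interpolating between a slow growth rate c1 and a faster rate c2, with d setting how sharply the switch happens around a threshold. All names and constants below are illustrative assumptions, not the post's actual code.

```python
import math

# Illustrative f(I): a sigmoid interpolating between a slow growth rate c1
# and a faster rate c2, where d sets how sharply the switch happens around
# a threshold i_agi. Names and constants are assumptions, not the post's code.

def f(i, c1=0.01, c2=0.05, d=2.0, i_agi=10.0):
    """Growth-rate function: ~c1 well below i_agi, ~c2 well above it."""
    return c1 + (c2 - c1) / (1 + math.exp(-d * (i - i_agi)))

def simulate(i0=1.0, dt=0.01, steps=100_000):
    """Euler-integrate dI/dt = f(I) * I."""
    i = i0
    history = [i]
    for _ in range(steps):
        i += f(i) * i * dt
        history.append(i)
    return history

hist = simulate()
# Growth transitions from ~exp(c1*t) to ~exp(c2*t): a continuous switch
# between two exponential modes rather than runaway hyperbolic growth.
```

Because f(I) saturates at a constant rather than growing without bound, the trajectory never goes hyperbolic; it just shifts from one exponential growth mode to a faster one.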

answer by NunoSempere · 2021-02-16T14:39:52.475Z · LW(p) · GW(p)

Artificial Intelligence and Economic Growth, by Aghion, Jones, and Jones, for a particular model; Economic Growth under Transformative AI for a comprehensive review.

comment by abramdemski · 2021-02-16T20:14:01.281Z · LW(p) · GW(p)

Ah, excellent! I looked at the first link. This seems nice in that it (1) attempts to treat AI as continuous with earlier forms of automation, meaning the models can be meaningfully checked and fine-tuned based on historical trends, and (2) uses the same kind of simple mathematical model I'm looking at.

answer by abramdemski · 2021-02-16T17:51:54.259Z · LW(p) · GW(p)

Takeoff Speed: Simple Asymptotics in a Toy Model. [LW · GW]

This post examines simple models of recursive self-improvement, where intelligence is the derivative of knowledge ($\dot{K} = I$), but intelligence is also some function of knowledge ($I = f(K)$), since knowledge can be applied to improve intelligence. It concludes that growth in intelligence is sublinear so long as returns from knowledge diminish faster than $\sqrt{K}$; subexponential so long as returns are diminishing at all; exponential precisely when returns are linear; and superexponential (having a singularity at finite time) if returns increase like some polynomial.
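These regimes can be checked numerically. The power-law form of the returns below, $f(K) = K^\alpha$, is an illustrative choice of diminishing/linear/increasing returns, not the post's exact setup.

```python
# Numerical sketch of the asymptotic regimes, assuming returns take the
# simple power-law form I = f(K) = K**alpha (an illustrative choice),
# with knowledge evolving as dK/dt = I.

def intelligence_trajectory(alpha, k0=1.0, dt=0.001, steps=10_000):
    """Euler-integrate dK/dt = K**alpha; return intelligence I = K**alpha over time."""
    k = k0
    intel = [k ** alpha]
    for _ in range(steps):
        k += (k ** alpha) * dt
        if k > 1e12:  # treat runaway growth as the finite-time singularity
            break
        intel.append(k ** alpha)
    return intel

sub = intelligence_trajectory(alpha=0.25)  # diminishing faster than sqrt(K): sublinear
lin = intelligence_trajectory(alpha=1.0)   # linear returns: exponential growth
sup = intelligence_trajectory(alpha=1.5)   # increasing returns: finite-time blowup
```

The sublinear trajectory visibly decelerates, the linear-returns trajectory grows at a constant exponential rate, and the increasing-returns trajectory hits the cutoff well before the end of the simulated window.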

In a comment there [LW · GW], I argue that it makes more sense to think in terms of the growth in capabilities, rather than the growth in intelligence; making that shift, it seems like almost any assumption gives you superlinear growth, but the crossover to superexponential is still at the same spot.

Modeling the Human Trajectory

This seems like a wonderful exploration of more sophisticated versions of the model discussed in 1960: the year the singularity was cancelled. A quick glance suggests that it doesn't make the modification I was interested in exploring, but I have not read it thoroughly yet.

answer by [deleted] · 2021-02-16T08:10:13.197Z · LW(p) · GW(p)

Artificial sentience is a technological project very comparable to the Manhattan project, though.  Prior to reaching critical mass it's not doing anything at all.  Once you reach critical mass - in this case, useful AI agents that are general purpose - you can use the first systems to let you build the one that explodes.  

The thing is that it's all or nothing. AI agents that don't produce more value than their cost have negative gain. We have all sorts of crummy agents today of questionable utility (agents that try to spot fraud or machinery failure, or other difficult-to-solve regression problems). Once you get to positive gain, you need an AI system sophisticated enough to self-improve from its own output. Before you hook in the last piece of such a system, nothing happens, and we don't know exactly when this will start to work.

This is fundamentally difficult to model.  If you plotted human caused fission events per year, you would see a line near zero, suddenly increasing in 1942 with the Chicago pile, then going vertical and needing a logarithmic scale with the 1945 Los Alamos test.  

Progress had been made, thousands of little things, to reach this point, but it wasn't really certain until the first blinding flash and mushroom cloud that this overall effort was really going to work.  There could have been all kinds of hidden laws of nature that would have prevented a fission device from working.  Similarly, there are plenty of people (often seemingly to protect their own sense of importance or well being) who believe some hidden law of nature will prevent an artificial sentience from really working.  

comment by abramdemski · 2021-02-16T20:03:48.435Z · LW(p) · GW(p)

I agree with the dangers of modeling progress in this way. I'm just curious how well we can build the model, and what it would predict. For a specific sort of person, these mathematical models are more convincing than detailed explanations of why the future might go specific ways. And it seems to me that there is some low-hanging fruit around improving these sorts of models.

1 comment


comment by Pattern · 2021-02-21T17:20:09.658Z · LW(p) · GW(p)

Instead of using an equation, a simulation using multiple agents could be employed.