Request for help with economic analysis related to AI forecasting

post by ESRogs · 2016-02-06T01:27:39.810Z · LW · GW · Legacy · 13 comments


[Cross-posted from FB]

I've got an economic question that I'm not sure how to answer.

I've been thinking about trends in AI development, and trying to get a better idea of what we should expect progress to look like going forward.

One important question is: how much do existing AI systems help with research and the development of new, more capable AI systems?

The obvious answer is, "not much." But I think of AI systems as being on a continuum from calculators on up. Surely AI researchers sometimes have to do arithmetic and other tasks that they already outsource to computers. I expect that going forward, the share of tasks that AI researchers outsource to computers will (gradually) increase. And I'd like to be able to draw a trend line. (If there's some point in the future when we can expect most of the work of AI R&D to be automated, that would be very interesting to know about!)

So I'd like to be able to measure the share of AI R&D done by computers vs humans. I'm not sure of the best way to measure this. You could try to come up with a list of tasks that AI researchers perform and just count, but you might run into trouble as the list of tasks changes over time (e.g. suppose at some point designing an AI system requires solving a bunch of integrals, and that with some later AI architecture this is no longer necessary).

What seems more promising is to abstract over the specific tasks that computers vs human researchers perform and use some aggregate measure, such as the total amount of energy consumed by the computers or the human brains, or the share of an R&D budget spent on computing infrastructure and operation vs human labor. Intuitively, if most of the resources are going towards computation, one might conclude that computers are doing most of the work.

Unfortunately I don't think that intuition is correct. Suppose AI researchers use computers to perform task X at cost C_x1, and some technological improvement enables X to be performed more cheaply at cost C_x2. Then, all else equal, the share of resources going towards computers will decrease, even though their share of tasks has stayed the same.

On the other hand, suppose there's some task Y that the researchers themselves perform at cost H_y, and some technological improvement enables task Y to be performed more cheaply by a computer at cost C_y. After the team outsources Y to computers, the share of resources going towards computers goes up. So it seems like it could go either way -- some technological improvements will push the share of resources spent on computers down, and some will push it up.
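For concreteness, here is a toy arithmetic check of those two cases (every cost figure is made up, and the compute_share helper is just for illustration):

```python
# Toy arithmetic for the two cases above; all numbers are made up.
def compute_share(computer_costs, human_costs):
    return sum(computer_costs) / (sum(computer_costs) + sum(human_costs))

# Case 1: task X already runs on computers and gets cheaper (C_x1 -> C_x2).
print(compute_share([10], [90]))      # 0.10 before the improvement
print(compute_share([5],  [90]))      # ~0.05 after -- the computer share falls

# Case 2: task Y moves from a human (cost H_y) to a cheaper computer (cost C_y).
print(compute_share([10], [60, 30]))  # 0.10 while a human still does Y at H_y = 30
print(compute_share([10, 20], [60]))  # ~0.33 once a computer does Y at C_y = 20 -- the share rises
```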

So here's the econ part -- is there some standard economic analysis I can use here? If both machines and human labor are used in some process, and the machines are becoming both more cost effective and more capable, is there anything I can say about how the expected share of resources going to pay for the machines changes over time?

13 comments

Comments sorted by top scores.

comment by Houshalter · 2016-02-07T22:00:17.774Z · LW(p) · GW(p)

Faster computers almost certainly enable AI research. The current wave of deep learning is only possible because computers suddenly jumped 10-50x over a few years. (That is, fast, cheap, general-purpose GPUs enabling the training of huge networks on a single computer.)

What's weird about this is that it isn't just being able to run bigger NNs. Before, it was believed to be impossible to train really deep NNs because of vanishing gradients. Then suddenly people could experiment with deep nets on much faster computers, though training was still slow and impractical. But by experimenting with them, they figured out how to initialize weights properly, and now they can train much faster even on slow computers.
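For concreteness, one example of such an initialization scheme is Glorot/Xavier scaling; a minimal sketch (the 512x512 layer size is an arbitrary example):

```python
# One example of the kind of initialization trick referred to above:
# Glorot/Xavier scaling. The layer size here is arbitrary.
import numpy as np

def xavier_init(fan_in, fan_out):
    # Scale weights so activation/gradient variance stays roughly constant
    # across layers, which is what tames the vanishing-gradient problem.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-limit, limit, size=(fan_in, fan_out))

naive = np.random.uniform(-1, 1, size=(512, 512))   # "just pick some random weights"
scaled = xavier_init(512, 512)
print(naive.std(), scaled.std())   # the scaled weights are much smaller for a wide layer
```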

Nothing stopped anyone from making that discovery in the 90's. But it took renewed interest, and faster computers to experiment with, for it to happen. The same is true for many other methods that have been invented. Things like dropout totally could have been invented a decade or two earlier, but for some reason they just weren't.

And there were supercomputers back then that could have run really big nets. If someone had had an algorithm ready, it could have been tested. But no one had code just sitting around waiting for computers to get fast enough. Instead, computers got fast first, and then the innovation happened.

The same is true for old AI research. The early AI researchers were working with computers less powerful than my graphing calculator. That's why a lot of early AI research seems silly, and why promising ideas like NNs were initially abandoned.

I heard an anecdote about one researcher who went from university to university carrying a stack of punch cards, running them on each computer when there was spare time. It was something like a simple genetic algorithm that would easily complete in a few seconds on a modern computer, but it took him months or years to get results from it.

Replies from: HungryHobo
comment by HungryHobo · 2016-02-08T17:17:45.570Z · LW(p) · GW(p)

The pattern is the same across the entire software industry, not just AI research.

Only a small portion of real progress comes from professors and PhDs. Per person they tend to do pretty well in terms of innovation, but it's hard to beat a million obsessed geeks willing and able to spend every hour of their free time experimenting with something.

The people working in the olden days weren't just working with slower computers; a lot of the time they were working with buggy, crappier languages, feature-poor debuggers, and no IDEs.

A comp sci undergrad working with a modern language in a modern IDE with modern debuggers can whip up in hours what it would have taken PhDs weeks to do back in the early days, and it's not all just hardware.

Don't get me wrong: hardware helps. Having cycles to burn and so much memory that you don't have to care about wasting it also saves you time. But you also get a massive feedback loop: the more people there are in your environment doing similar things, the more you can focus on the novel, important parts of your work rather than fucking around trying to find where you set a pointer incorrectly or screwed up a JUMP.

Very few people have access to supercomputers, and those who do aren't going to spend their supercomputer time going "well, that didn't work, but what if I tried this slight variation..." a hundred times over.

Everyone has access to desktops, so as soon as something can run on consumer electronics, thousands of people can suddenly spend all night experimenting.

Even if the home experimentation doesn't yield results, you now have a generation of teenagers who've spent time thinking about the problem, have experience thinking in the right terms at a young age, and are primed to gain a far deeper understanding once they hit college age.

comment by MaximumLiberty · 2016-02-07T19:13:19.340Z · LW(p) · GW(p)

You have run into the "productivity paradox." This is the problem that, while it seems from first-hand observation that using computers raises productivity, that rise does not seem to show up in economy-wide statistics. It is something of a mystery. The Wikipedia page on the subject has an OK introduction to the problem.

I'd suggest that the key task is not measuring the productivity of the computers. The task is measuring the change in productivity of the researcher. For that, you must have a measure of research output. You'd probably need multiple proxies, since you can't evaluate it directly. For example, one proxy might be "words of published AI articles in peer-reviewed journals." A problem with this particular proxy is substitution, over long time periods, of self-publication (on the web) for journal publication.

A bigger problem is the quality problem. The quality of a good today is far better than that of a similar good 30 years ago. But how much better? There's no way to quantify it. Economists usually use some sense that "this year must be really close to last year, so we'll ignore it across small time frames." But that does not help for long time frames (unless you are looking only at the rate of change in productivity rates, such that the productivity rate itself gets swept aside by taking the first derivative, which works fine as long as quality is not changing disproportionately to productivity). The problem seems much greater if you have to assess the quality of AI research. Perhaps you could construct some kind of complementary metric for each proxy you use, such as "citations in peer-reviewed journals" for each peer-reviewed article you used in the proxy noted above. And you would again have to address the effect of self-publication, this time on quality.

comment by MarsColony_in10years · 2016-02-06T06:30:10.089Z · LW(p) · GW(p)

I like this idea. I'd guess that a real economist would phrase this problem as trying to measure productivity. This isn't particularly useful though. Productivity is output (AI research) value over input (time), so this just raises the question of how to measure the output half. (I mention it mainly just in case it's a useful search term.)

I'm no economist, but I do have an idea for measuring the output. It's very much a hacky KISS approach, but might suffice. I'd try and poll various researchers, and ask them to estimate how much longer it would take them to do their work by slide-rule. You could ask older generations of researchers the same thing about past work. You could also ask how much faster their work would have been if they could have done it on modern computers.

It would also probably be useful to know what fraction of researchers' time is spent using a computer. Ideally you would know how much time was spent running AI-specific programs, versus things like typing notes/reports into Microsoft Word (which could clearly be done on a typewriter or by hand). Programs like RescueTime could monitor this going forward, but you'd have to rely on anecdotal data to get a historical trend line. However, anecdote is probably good enough to get an order-of-magnitude estimate.

You'd definitely want a control, though. People's memories can blur together, especially over decades. Maybe find a related field for which data actually does exist? (From renting time on old computers? There must be at least some records.) If there are old computer logs specific to AI researchers, it would be fantastic to be able to correlate something like citations per research paper or number of papers per researcher per year with computer purchases. (Did such-and-such university's new punch-card machine actually increase productivity?) Publication rates in general are skyrocketing, and academic trends shift, so I suspect that publication count is a hopelessly confounded metric on a timescale of decades, though it might be able to show changes from one year to the next.

Another reason for a good control group, if I remember correctly, is that the productivity of industry as a whole wasn't actually improved much by computers; people just think it was. It might also be worth digging around in the Industrial-Organizational Psychology literature to see if you can find studies involving productivity that are specific to AI research, or even something more generic like computer science. (I did a quick search on Google Scholar, and determined that all my search terms were far too common to narrow things down to the oddly-specific target.)

Replies from: Lumifer
comment by Lumifer · 2016-02-06T18:05:22.312Z · LW(p) · GW(p)

I'd try and poll various researchers, and ask them to estimate how much longer it would take them to do their work by slide-rule.

The answer would be "infinity" -- you can't do AI work by slide-rule. What next?

Replies from: MarsColony_in10years, Old_Gold
comment by MarsColony_in10years · 2016-02-06T20:35:09.621Z · LW(p) · GW(p)

As I understand it, Eliezer Yudkowsky doesn't do much coding, but mostly purely theoretical stuff. I think most of Superintelligence could have been written on a typewriter based on printed research. I also suspect that there are plenty of academic papers which could be written by hand.

However, as you point out, there are also clearly some cases where it would take much, much longer to do the same work by hand. I'd disagree that it would take infinite time, and that it can't be done by hand, but that's just me being pedantic and doesn't get to the substance of the thing.

The interesting questions are how much work falls into the first category and how much falls into the second. We might think of this as a continuum, ranging from zero productivity gain from computers to trillions of times more efficient. What sub-fields would and wouldn't be possible without today's computers? What types of AI research are enabled simply by faster computers, and which types are enabled by using existing AI?

Maybe I can type at 50 words a minute, but I sure as hell can't code at 50 WPM. Including debugging time, I can write a line of code every couple minutes, if I'm lucky. Looking back on the past 2 things I wrote, one was ~50 lines of code and took me at least an hour or two if I recall, and the other was ~200 lines and took probably a day or two of solid work. I'm just starting to learn a new language, so I'm slower than in more familiar languages, but the point stands. This hints that, for me at least, the computer isn't the limiting factor. It might take a little longer if I was using punch cards, and at worst maybe twice as long if I was drafting everything by hand, but the computer isn't a huge productivity booster.

Maybe there's an AI researcher out there who spends most of his or her day trying different machine learning algorithms to try and improve them. Even if not, it'd still take forever to crunch that type of algorithm by hand. It'd be a safe bet that anyone who spends a lot of time waiting for code to compile, or who rents time on a supercomputer, is doing work where the computer is the limiting factor. It seems valuable to know which areas might grow exponentially alongside Moore's law, and which might grow based on AI improvements, as OP pointed out.

comment by Old_Gold · 2016-02-09T00:57:56.120Z · LW(p) · GW(p)

You'd be amazed what people can do "by hand". Keep in mind, "computer" was originally an occupation.

Replies from: Lumifer
comment by Lumifer · 2016-02-09T15:42:47.200Z · LW(p) · GW(p)

No, I don't think I would be amazed. But do tell: how would you do AI by hand? The Chinese Room is fine as a thought experiment, but try implementing it in reality...

comment by Gunnar_Zarncke · 2016-02-06T12:17:46.393Z · LW(p) · GW(p)

Levels of indirection. Every time a human outsources a task to a computer, it creates and uses an indirection. This is not new; it is the same as referring to books instead of knowing the material yourself. Later you just know whom to ask for the needed information. So my idea would be to try to measure levels of indirection. For example, computers don't just do computation; they do computation about computation (see also the Fundamental theorem of software engineering). Wikipedia is not just pages but pages referring to pages (see also Getting to Philosophy).

Note that this nicely differentiates many kinds of noise from useful information: "random" repetitions and variants of the same thing produce a wide graph but not a deep one (though it is easy to artificially create arbitrarily deep graphs; how to solve this remains as an exercise for the reader).
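As a rough sketch of how one might operationalize this, treat documents or components as nodes in a reference graph and compare its depth to its width (the toy graph below is made up and assumed to be acyclic):

```python
# Rough sketch of measuring "levels of indirection": nodes are documents or
# components, edges are references; compare the depth of the graph to its width.
from functools import lru_cache

references = {                 # node -> the things it refers to
    "paper_D": ["paper_C"],
    "paper_C": ["paper_B"],
    "paper_B": ["paper_A"],
    "paper_A": [],
    "variant_1": ["paper_A"],  # many shallow repeats of the same idea:
    "variant_2": ["paper_A"],  # the graph gets wide, but not deep
    "variant_3": ["paper_A"],
}

@lru_cache(maxsize=None)
def depth(node):
    children = references.get(node, [])
    return 0 if not children else 1 + max(depth(c) for c in children)

print({n: depth(n) for n in references})
# The chain D -> C -> B -> A reaches depth 3; the noisy variants only add width.
```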

A hypothesis regarding human knowledge creation could be that the number of levels of indirection added per unit of human research effort is roughly constant. And that the introduction of AI speeds this up.

comment by James_Miller · 2016-02-06T02:08:33.745Z · LW(p) · GW(p)

Here is a simple model that might be useful. You make and sell good G at price g per unit. You make it using inputs X and Y with, say, a production function G=(X^a)(Y^b) where X is how much of one input you use and Y how much of the other. The cost of each unit of X is x, and the cost of each unit of Y is y.

So your goal is to pick X and Y to maximize (X^a)(Y^b)g - Xx - Yy. You want to know how xX/(xX+yY) changes as a and x change.
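Working the model through gives a clean (if stylized) answer; here is a minimal sketch, with arbitrary illustrative parameter values and assuming a + b < 1 so that an interior optimum exists:

```python
# A minimal sketch of the model above. The first-order conditions give
# x*X = a*g*Q and y*Y = b*g*Q with Q = X^a * Y^b, so the optimal spending
# share on machines is xX/(xX + yY) = a/(a + b). Parameter values are
# arbitrary; a + b < 1 is assumed so an interior optimum exists.

def machine_share(a=0.3, b=0.6, g=1.0, x=1.0, y=1.0):
    Q = ((a * g / x) ** a * (b * g / y) ** b) ** (1.0 / (1.0 - a - b))
    X, Y = a * g * Q / x, b * g * Q / y          # optimal input quantities
    return x * X / (x * X + y * Y)

print(machine_share())                  # 0.333... (baseline)
print(machine_share(x=0.01))            # 0.333... machines 100x cheaper: share unchanged
print(machine_share(a=0.6, b=0.3))      # 0.666... machines more productive: share rises
```

In this toy model the optimal spending share on machines is a/(a+b) regardless of prices, so machines getting cheaper does not move the share by itself; the share only rises if a, the machines' output elasticity, rises, i.e. if machines become more capable rather than merely cheaper.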

comment by buybuydandavis · 2016-02-07T05:07:51.773Z · LW(p) · GW(p)

One important question is: how much do existing AI systems help with research and the development of new, more capable AI systems?

If you make it more of a productivity question like this, then you can ask how people actually do their work, and how many man-hours it takes to do that work to get the same level of performance.

By that metric, things have improved a lot.

But if the metric is "effort to halve errors from existing levels", you don't see the same improvement.

Do you want improved algorithm effectiveness to count as "current AI systems"?

One important question is: how much do existing AI systems help with research and the development of new, more capable AI systems?

I think a lot. The increased computer power and system tools allow hyperparameter spaces to be searched automatically. A lot of brute force becomes practical, and the available applications support it. Also, consider the application support for importing data and generating new data.
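As a toy illustration of that kind of brute force, a random hyperparameter search is only a few lines once compute is cheap (train_and_score below is a hypothetical stand-in for training a real model and returning a validation score):

```python
# Toy random hyperparameter search; train_and_score is a hypothetical
# placeholder for training a real model and returning a validation score.
import random

def train_and_score(learning_rate, num_layers):
    # Placeholder objective: peaks near learning_rate=0.01, num_layers=4.
    return -abs(learning_rate - 0.01) - 0.01 * abs(num_layers - 4)

best_score, best_params = float("-inf"), None
for _ in range(100):                              # 100 trials is cheap on modern hardware
    params = {
        "learning_rate": 10 ** random.uniform(-4, -1),
        "num_layers": random.randint(1, 8),
    }
    score = train_and_score(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_score, best_params)
```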

So I'd like to be able to measure the share of AI R&D done by computers vs humans.

It's kind of like asking how much of your driving is done by the tires, and how much by the transmission. You're out of luck if you don't have both.

what we should expect progress to look like going forward.

If the goal is forecasting progress going forward, I'd recommend thinking of the problem in those terms and working from there, rather than jumping straight to sub-forecasts before you've clearly related the meaning of those sub-forecasts to the main goal.

comment by jacob_cannell · 2016-02-07T01:50:52.189Z · LW(p) · GW(p)

This isn't a new phenomenon: the word 'computer' originally described a human occupation, perhaps the first major 'mental' occupation to be completely automated.

That started around WW2, so this general trend can be seen for the last 75 years or so. I'd look at the fraction of the economy going into computing; how that has changed over time reflects the interplay between the various effects of automation.

comment by turchin · 2016-02-06T11:43:52.257Z · LW(p) · GW(p)

My idea would be to search for actual examples where neural nets were used to create new neural nets, or where a program has been writing programs.

For example, in the chip industry computers are used to create new chips at every stage, and at some stages I guess they have become irreplaceable by humans. That stage is probably the tracing of the actual chip layout in silicon based on its electrical schematic.

You may be interested in all the software that automates or simplifies the process of creating neural nets. This software has a price and an economic effect.