Supply, demand, and technological progress: how might the future unfold? Should we believe in runaway exponential growth?
post by VipulNaik · 2014-04-11T19:07:09.786Z · LW · GW · Legacy · 57 comments

Contents

- TL;DR
- #1: Short versus long run for supply and demand
- #2: Introducing technology, the arrow of time, and experience curves
- #3: The genie out of the bottle, and gaining from bubbles
- #4: The importance of market diversity and the importance of intermediate milestones being valuable
- #5: Different market segments may face different technological challenges
- #6: How does the desire for more technological progress relate with the current level of a technology? Is it proportional, as per the exponential growth story?
- #7: Complementary innovation and high conjunctivity of the progress scenario
Warning: This is a somewhat long-winded post with a number of loosely related thoughts and no single, cogent thesis. I have included a TL;DR after the introduction, listing the main points. All corrections and suggestions are greatly appreciated.
It's commonly known, particularly to LessWrong readers, that in the world of computer-related technology, key metrics have been doubling fairly quickly, with doubling times ranging from 1 to 3 years for most metrics. The most famous paradigmatic example is Moore's law, which predicts that the number of transistors on integrated circuits doubles approximately every two years. The law itself stood up quite well until about 2005, but one of its implications, based on Dennard scaling, broke down after that (see here for a detailed overview of the breakdown by Sebastian Nickel). Another similar proposed law is Kryder's law, which looks at the doubling of hard disk storage capacity. Chapters 2 and 3 of Ray Kurzweil's book The Singularity is Near go into detail regarding this technological acceleration (for an assessment of Kurzweil's prediction track record, see here).
One of the key questions facing futurists, including those who want to investigate the Singularity, is the question of whether such exponential-ish growth will continue for long enough for the Singularity to be achieved. Some other reasonable possibilities:
- Growth will continue for a fairly long time, but slow down to a linear pace, so we wouldn't have to worry about the Singularity for a very long time.
- Growth will continue but converge to an asymptotic value (well below the singularity threshold) beyond which improvements aren't possible. Growth will therefore progressively slow down, but still continue, as we come closer and closer to the asymptotic value.
- Growth will come to a halt, because there is insufficient demand at the margin for improvement in the technology.
Ray Kurzweil strongly adheres to the exponential-ish growth model, at least for the duration necessary to reach computers that are thousands of times as powerful as humanity (that's what he calls the Singularity). He argues that although individual paradigms (such as Moore's law) eventually run out of steam, new paradigms tend to replace them. In the context of computational speed, efficiency, and compactness, he mentions nanotechnology, 3D computing, DNA computing, quantum computing, and a few other possibilities as candidates for what might take over once Moore's law is exhausted for good.
I've found the assumption of continued exponential growth intuitively implausible. I hasten to add that I'm mathematically literate, so it's certainly not the case that I fail to appreciate the nature of exponential growth; in fact, I believe my skepticism is rooted in the fact that I do understand exponential growth. I do think the issue is worth investigating, both from the angle of whether the continued improvements are technologically feasible, and from the angle of whether there will be sufficient incentives for people to invest in achieving the breakthroughs. In this post, I'll go over the economics side of it, though I'll include some technology-side considerations to provide context.
TL;DR
I'll make the following general points:
- The industries that rely on knowledge goods tend to have long-run downward-sloping supply curves.
- Industries based on knowledge goods exhibit experience curve effects: what matters is cumulative demand rather than demand in a given time interval. The irreversibility of creating knowledge goods creates a dynamic different from that in other industries.
- What matters for technological progress is what people investing in research think future demand will be like. Bubbles might actually be beneficial if they help lay the groundwork of investment that is helpful for many years to come, even though the investment wasn't rational for individual investors.
- Each stage of investment requires a large enough number of people with just the right level of willingness to pay (see the PS for more). A diverse market, with people at various intermediate stages of willingness to pay, is crucial for supporting a technology through its stages of progress.
- The technological challenges involved in improving price-performance tradeoffs may differ for the high, low, and middle parts of the market for a given product. The more similar these challenges, the faster progress is likely to be (because the same research helps with all the market segments together).
- The demand-side story most consistent with exponential technological progress is one where people's desire for improvement in the technologies they are using is proportional to the current level of those technologies. But this story seems inconsistent with the facts: people's appetite for improvement probably declines once technologies get good enough. This creates problems for the economic incentive side of the exponential growth story.
- Some exponential growth stories require a number of technologies to progress in tandem. Progress in one technology helps facilitate demand for another complementary technology in this story. Such progress scenarios are highly conjunctive, and it is likely that actual progress will fall far short of projected exponential growth.
#1: Short versus long run for supply and demand
In the short run, supply curves are upward-sloping and demand curves are downward-sloping. In particular, this means that when the demand curve expands (more people wanting to buy the item at the same price), that causes an increase in price and an increase in quantity traded (rising demand creates shortages at the current price, motivating suppliers to increase supplies and also charge more money given the competition between buyers). Similarly, if the supply curve expands (more of the good being produced at the same price), that causes a decrease in price and an increase in quantity traded. These are robust empirical observations that form the bread and butter of microeconomics, and they're likely true in most industries.
In the long run, however, things become different because people can reallocate their fixed costs. The more important the allocation of fixed costs is to determining the short-run supply curve, the greater the difference between short-run supply curves based on choices of fixed cost allocation. In particular, if there are increasing returns to scale on fixed costs (for instance, a factory that produces a million widgets costs less than 1,000 times as much as a factory that produces a thousand widgets) and fixed costs contribute a large fraction of production costs, then the long-run supply curve might end up being downward-sloping. An industry where the long-run supply curve is downward-sloping is called a decreasing cost industry (see here and here for more). (My original version of this para was incorrect; see ColtInn's comment and my response below it for more).
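To make the returns-to-scale point concrete, here is a minimal sketch with made-up cost numbers and a hypothetical helper function (not data from any real industry), showing how a large fixed cost spread over more units pushes long-run average cost down as output rises:

```python
# Illustrative only: hypothetical cost figures for a widget factory.
# A decreasing cost industry arises when a large fixed cost is spread
# over more units, so average cost falls as the scale of production grows.

def average_cost(quantity, fixed_cost, marginal_cost):
    """Long-run average cost per unit at a given production scale."""
    return fixed_cost / quantity + marginal_cost

# Assume the big factory's fixed cost is only 100x (not 1000x) that of the small one.
small = average_cost(quantity=1_000, fixed_cost=50_000, marginal_cost=2.0)
large = average_cost(quantity=1_000_000, fixed_cost=5_000_000, marginal_cost=2.0)

print(f"Average cost at 1,000 widgets/year: ${small:.2f}")      # $52.00
print(f"Average cost at 1,000,000 widgets/year: ${large:.2f}")  # $7.00
```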
#2: Introducing technology, the arrow of time, and experience curves
The typical explanation for why some industries are decreasing cost industries is that the fixed costs of investment in infrastructure scale sublinearly with the amount produced. For instance, running ten flights from New York to Chicago costs less than ten times as much as running one flight might. This could be because the ten flights can share common resources such as airport facilities or even airplanes, and they can also serve as backups for one another in case of flight cancellations and overbooking. The fixed costs of setting up a factory that can produce a million hard drives a year are less than 1,000 times the fixed costs of setting up a factory that can produce a thousand hard drives a year. A mass transit system for a city of a million people costs less than 100 times as much as a mass transit system for a city of the same area with 10,000 people.

These explanations for decreasing cost have only a moderate level of time-directionality. When I talk of time-directionality, I am thinking of questions like: "What happens if demand is high in one year, and then falls? Will prices go back up?" It is true that some forms of investment in infrastructure are durable, and therefore, once the infrastructure has already been built in anticipation of high demand, costs will continue to stay low even if demand falls back. However, much of the long-term infrastructure can be repurposed, causing prices to go back up. If demand for New York-Chicago flights reverts to low levels, the planes can be diverted to other routes. If demand for hard drives falls, the factory producing them can (at some refurbishing cost) produce flash memory or chips or something totally different. As for intra-city mass transit systems, some are easier to repurpose than others: buses can be sold, and physical train cars can be sold, but the rail lines are harder to repurpose. In all cases, there is some time-directionality, but not a lot.
Technology, particularly the knowledge component thereof, is probably an exception of sorts. Knowledge, once created, is very cheap to store, and, unlike physical infrastructure, it cannot easily be destroyed or traded in for other knowledge. Consider a decreasing cost industry where a large part of the economies of scale arises because larger demand volumes justify bigger investments in research and development that lower production costs permanently (regardless of actual future demand volumes). Once the "genie is out of the bottle" with respect to the new technologies, the lower costs will remain — even in the face of flagging demand. However, flagging demand might stall further technological progress.
This sort of time-directionality is closely related to (though not the same as) the idea of experience curve effects: instead of looking at the quantity demanded or supplied in a given time period, it's more important to consider the cumulative quantity produced and sold, and the economies of scale arise with respect to this cumulative quantity. Thus, people who have been in the business for ten years enjoy a better price-performance tradeoff than people who have been in the business for only three years, even if they've been producing the same amount per year.
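Experience curve effects are often summarized by Wright's law: unit cost falls by a constant percentage each time cumulative production doubles. Here is a minimal sketch under that assumption; the $100 first-unit cost, the 80% learning rate, and the function name are illustrative choices of mine, not empirical estimates:

```python
import math

def unit_cost(cumulative_units, first_unit_cost, learning_rate=0.8):
    """Wright's law: cost of the Nth unit, where each doubling of
    cumulative output multiplies unit cost by the learning rate."""
    exponent = math.log(learning_rate, 2)  # negative when learning_rate < 1
    return first_unit_cost * cumulative_units ** exponent

# Two producers each making 1,000 units/year: one has been at it for 3 years,
# the other for 10 years. Cumulative output, not annual output, drives cost.
print(round(unit_cost(3_000, first_unit_cost=100), 2))   # ~7.60
print(round(unit_cost(10_000, first_unit_cost=100), 2))  # ~5.15
# The 10-year producer's unit cost is about 32% lower, despite identical annual volume.
```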
The concept of price skimming is also potentially relevant.
#3: The genie out of the bottle, and gaining from bubbles
The "genie out of the bottle" character of technological progress leads to some interesting possibilities. If suppliers think that future demand will be high, then they'll invest in research and development that lowers the long-run cost of production, and those lower costs will stick permanently, even if future demand turns out to be not too high. This depends on the technology not getting lost if the suppliers go out of business — but that's probably likely, given that suppliers are unlikely to want to destroy cost-lowering technologies. Even if they go out of business, they'll probably sell the technology to somebody who is still in business (after all, selling their technology for a profit might be their main way of recouping some of the costs of their investment). Assuming you like the resulting price reductions, this could be interpreted as an argument in favor of bubbles, at least if you ignore the long-term damage that these might impose on people's confidence to invest. In particular, the tech bubble of 1998-2001 spurred significant investments in Internet infrastructure (based on false premises) as well as in the semiconductor industry, permanently lowering the prices of these, and facilitating the next generation of technological development. However, the argument also ignores the fact that the resources spent on the technological development could instead have gone to other even more valuable technological developments. That's a big omission, and probably destroys the case entirely, except for rare situations where some technologies have huge long-term spillovers despite insufficient short-term demand for a rational for-profit investor to justify investment in the technology.
#4: The importance of market diversity and the importance of intermediate milestones being valuable
The crucial ingredient needed for technological progress is that demand from a segment with just the right level of purchasing power should be sufficiently high. A small population that's willing to pay exorbitant amounts won't spur investments in cost-cutting: for instance, if production costs are $10 per piece and 30 people are willing to pay $100 per piece, then pushing production down from $10 to $5 per piece yields a net gain of only $150 — a pittance compared to the existing profit of $2700. On the other hand, if there are 300 people willing to pay $10 per piece, existing profit is zero whereas the profit arising from reducing the cost to $5 per piece is $1500. On the third hand, people willing to pay only $1 per piece are useless in terms of spurring investment to reduce the price to $5, since they won't buy it anyway.
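A quick sketch that reproduces the arithmetic above; the populations, prices, and costs are the hypothetical ones from this example (not market data), and the helper function is just for illustration:

```python
def gain_from_cost_cut(buyers, willingness_to_pay, old_cost, new_cost):
    """Extra profit from cutting unit production cost, assuming each buyer
    pays their willingness to pay and only buys if it covers the old cost.
    (Buyers whose willingness to pay falls between the new and old cost
    would add further gains; ignored here to match the example.)"""
    if willingness_to_pay < old_cost:
        return 0  # these people weren't buying before
    return buyers * (old_cost - new_cost)

# 30 people at $100: existing profit is 30 * (100 - 10) = $2700; the cut adds only $150.
print(gain_from_cost_cut(30, 100, old_cost=10, new_cost=5))   # 150
# 300 people at $10: existing profit is $0; the cut adds $1500.
print(gain_from_cost_cut(300, 10, old_cost=10, new_cost=5))   # 1500
# People willing to pay only $1 still won't buy at $5, so they add nothing.
print(gain_from_cost_cut(300, 1, old_cost=10, new_cost=5))    # 0
```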
Building on the preceding point, the market segment that plays the most critical role in pushing the frontier of technology can change as the technology improves. Initially, when prices are too high, the segment that pushes technology further would be the small high-paying elite (the early adopters). As prices fall, the market segment that plays the most critical role becomes less elite and less willing to pay. In a sense, the market segments willing to pay more are "freeriding" off the others — they don't care enough to strike a tough bargain, but they benefit from the lower prices resulting from the others who do. Market segments for whom the technology is still too expensive also benefit, in terms of future expectations. Poor people who couldn't afford mobile phones in 1994 benefited from the rich people who generated demand for the phones in 1994, and the middle-income people who generated demand for the phones in 2004, so that now, in 2014, the phones are cost-effective for many of the poor people.
It becomes clear from the above that the continued operation of technological progress depends on the continued expansion of the market into segments that are progressively larger and willing to pay less. Note that the new populations don't have to be different from the old ones — it could happen that the earlier population has a sea change in expectations and demands more from the same suppliers. But it seems like the effect would be greater if the population size expanded and the willingness to pay declined in a genuine sense (see the PS). Note, however, that if the willingness to pay for the new population was dramatically lower than that for the earlier one, there would be too large a gap to bridge (as in the example above, going from customers willing to pay $100 to customers willing to pay $1 would require too much investment in research and development and may not be supported by the market). You need people at each intermediate stage to spur successive stages of investment.
A closely related point is that even though improving a technology by a huge factor (such as 1000X) could yield huge gains that would, on paper, justify the cost of investment, the costs in question may be too large and the uncertainty may be too high to justify the investment. What would make it worthwhile is if intermediate milestones were profitable. This is related to the point about gradual expansion of the market from a small number of buyers with high willingness to pay to a large number of buyers with low willingness to pay.
In particular, the vision of the Singularity is very impressive, but simply having that kind of end in mind 30 years down the line isn't sufficient to motivate commercial investment in the necessary technological progress. The intermediate goals must be enticing enough.
#5: Different market segments may face different technological challenges
There are two ends at which technological improvement may occur: the frontier end (of the highest capacity or performance that's available commercially) and the low-cost end (the lowest cost at which something useful is available). To some extent, progress at either end helps with the other, but the relationship isn't perfect. The low-cost end caters to a larger mass of low-paying customers and the high-cost end caters to a smaller number of higher-paying customers. If progress on either end complements the other, that creates a larger demand for technological progress on the whole, with each market segment freeriding off the other. If, on the other hand, progress at the two ends requires distinct sets of technological innovations, then overall progress is likely to be slower.
In some cases, we can identify more than two market segments based on cost, and the technological challenge for each market segment differs.
Consider the case of USB flash drives. We can broadly classify the market into three segments:
- At the high end, there are 1 TB USB 3.0 flash drives priced at around $3,000. These may appeal to power users who like to transfer or back up movies and videos using USB drives regularly.
- In the middle (the range that most customers in the First World, and their equivalents elsewhere in the world, would consider buying) are flash drives in the 16-128 GB range, with prices ranging from $10 to $100. These are typically used to transfer documents and install software, with the occasional transfer of a movie.
- At the "low" end are flash drives with 4 GB or less of storage space. These are sometimes ordered in bulk for organizations and distributed to individual members. They may be used by people who are highly cash-constrained (so that even a $10 cost is too much) and don't anticipate needing to transfer huge files over a USB flash drive.
The cost challenges in the three market segments differ:
- At the high end, the challenges of miniaturization of the design dominate.
- At the middle, NAND flash memory is a critical determinant of costs.
- At the low end, the critical factor determining cost is the fixed cost of production, including the cost of packaging. Reducing these costs would presumably involve cheaper, more automated, and more efficient packaging and production processes.
Progress in all three areas is somewhat related, but not strongly. In particular, the middle is the part that has seen the most progress over the last decade or so, perhaps because demand in this segment is the most robust and price-sensitive, or because the challenges there are the ones that are easiest to tackle. Note also that the definitions of the low, middle, and high end are themselves subject to change. Ten years ago, there wasn't really a low or high end (more on this in the historical anecdote below). More recently, some disk space values have moved from the high end to the middle, and others have moved from the middle to the low end.
#6: How does the desire for more technological progress relate with the current level of a technology? Is it proportional, as per the exponential growth story?
Most of the discussion of laws such as Moore's law and Kryder's law focuses on the question of technological feasibility. But demand-side considerations matter, because that's what motivates investments in these technologies. In particular, we might ask: to what extent do people value continued improvements in processing speed, memory, and hard disk space, directly or indirectly?
The answer most consistent with exponential growth is that whatever level you are currently at, you pine for having more in a fixed proportion to what you currently have. For instance, for hard disk space, one theory could be that if you can buy x GB of hard disk space for $1, you'd be really satisfied only with 3x GB of hard disk space for $1, and that this relationship will continue to hold whatever the value of x. This model relates to exponential growth because it means that the incentives for proportional improvement remain constant with time. It doesn't imply exponential growth (we still have to consider technological hurdles) but it does take care of the demand side. On the other hand, if the model were false, it wouldn't falsify exponential growth, but it should make us more skeptical of claims that exponential growth will continue to be robustly supported by market incentives.
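Here is a minimal sketch contrasting this proportional desire model with a satiation model closer to the anecdotes below; the tripling factor, the 2,000 GB ceiling, and the function names are purely illustrative assumptions, not estimates:

```python
def proportional_desire(current_gb, ratio=3):
    """Demand-side model consistent with exponential growth: the capacity
    you pine for is always a fixed multiple of what you already have."""
    return ratio * current_gb

def satiating_desire(current_gb, ceiling_gb=2000):
    """Alternative model: desire stops growing once capacity passes an
    'enough for me' ceiling (the 2,000 GB figure is purely illustrative)."""
    return max(current_gb, min(3 * current_gb, ceiling_gb))

# Roughly the hard disk sizes mentioned in the anecdotes below (in GB).
for gb in [0.27, 120, 500, 4000]:
    print(gb, proportional_desire(gb), satiating_desire(gb))
# Under satiation, the pull (desired / current) shrinks from 3x toward 1x as
# drives get big enough, weakening the market incentive for further improvement.
```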
How close is the proportional desire model to reality? I think it's a bad description. I will take a couple of examples to illustrate.
- Hard disk space: When I started using computers in the 1990s, I worked on a computer with a hard disk size of 270 MB (that included space for the operating system). The hard disk really did get full just with ordinary documents and spreadsheets and a few games played on monochrome screens — no MP3s, no photos, no videos, no books stored as PDFs, and minimal Internet browsing support. When I bought a computer in 2007, it had 120 GB (105 GB accessible), and when I bought a computer last year, it had 500 GB (450 GB accessible). I can say quite categorically that the experiences are qualitatively different. I no longer have to think about disk space considerations when downloading PDFs, books, or music — but keeping hard disk copies of movies and videos might still give me pause in the aggregate. I actually downloaded a 10 GB offline version of Wikipedia, something that gave me only a small amount of pause with regard to disk space requirements. Do I clamor for an even larger hard disk? Given that I like to store videos and movies and offline Wikipedia, I'd be happy if the next computer I buy (maybe 7-10 years down the line?) had a few terabytes of storage. But the issue lacks anything like the urgency that running out of disk space had back in the day. I probably wouldn't be willing to pay much for improvements in disk space at the margin. And I'm probably at the "use more disk space" extreme of the spectrum — many of my friends have machines with 120 GB hard drives and are nowhere near running out of space. Basically, the strong demand imperative that existed in the past for improving hard drive capacity no longer exists (here's a Facebook discussion I initiated on the subject).
- USB flash drives: In 2005, I bought a 128 MB USB flash drive for about $50. At the time, things like Dropbox didn't exist, and the Internet wasn't too reliable, so USB flash drives were the best way of both backing up and transferring stuff. I would often come close to running out of space on my flash drive just transferring essential items. In 2012, I bought two 32 GB USB flash drives for a total cost of $32. I used one of them to back up all my documents plus a number of my favorite movies, and still had a few GB to spare. The flash drives do prove inadequate for transferring large numbers of videos and movies, but those are niche needs that most people don't have. It's not clear to me that people would be willing to pay more for a 1 TB USB flash drive (a few friends I polled on Facebook listed reservation prices for a 1 TB USB flash drive ranging from $45 to $85. Currently, $85 is the approximate price of 128 GB USB flash drives; here's the Facebook discussion). At the same time, it's not clear that lowering the cost of production for the 32 GB USB flash drive would significantly increase the number of people who would buy that. On either end, therefore, the incentives for innovation seem low.
#7: Complementary innovation and high conjunctivity of the progress scenario
The discussion of the hard disk and USB flash drive examples suggests one way to rescue the proportional desire and exponential growth views. Namely, the problem isn't with people's desires not growing fast enough; it's with complementary innovations not happening fast enough. In this view, maybe if processor speed improved dramatically, new applications enabled by that would revive the demand for extra hard disk space and NAND flash memory. Possibilities in this direction include highly redundant backup systems (including peer-to-peer backup), extensive internal logging of activity (so that any accidental changes can be easily located and undone), extensive offline caching of websites (so that temporary lack of connectivity has minimal impact on browsing experience), and applications that rely on large hard disk caching to complement memory for better performance.
This rescues continued exponential growth, but at a high price: we now need to make sure that a number of different technologies are progressing simultaneously. Any one of these technologies slowing down can cause demand for the others to flag. The growth scenario becomes highly conjunctive (you need a lot of particular things to happen simultaneously), and it's highly unlikely to remain reliably exponential over the long run.
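A toy calculation of what "highly conjunctive" means here; the per-period probability, the number of technologies, and the independence assumption are made-up illustrative choices, not estimates:

```python
# Illustrative only: if sustained exponential progress requires several
# complementary technologies to each stay on their curve, and each does so
# independently with some probability per period, the joint probability of the
# whole scenario shrinks geometrically with the number of technologies and periods.

def conjunctive_success(p_each=0.8, technologies=5, periods=3):
    """Probability that every technology stays on track in every period,
    assuming (unrealistically) independence across technologies and periods."""
    return (p_each ** technologies) ** periods

print(round(conjunctive_success(), 3))  # ~0.035 with these made-up numbers
```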
I personally think there's some truth to the complementary innovation story, but I think the flagging of demand in absolute terms is also an important component of the story. In other words, even if home processors did get a lot faster, it's not clear that the creative applications this would enable would have enough of a demand to spur innovation in other sectors. And even if that's true at the current margin, I'm not sure how long it will remain true.
This blog post was written in connection with contract work I am doing for the Machine Intelligence Research Institute, but represents my own views and has not been vetted by MIRI. I'd like to thank Luke Muehlhauser (MIRI director) for spurring my interest in the subject, Jonah Sinick and Sebastian Nickel for helpful discussions on related matters, and my Facebook friends who commented on the posts I've linked to above.
Comments and suggestions are greatly appreciated.
PS: In the discussion of different market segments, I argued that the presence of larger populations with lower willingness to pay might be crucial in creating market incentives to further improve a technology. It's worth emphasizing here that the absolute size of the incentive depends on the population more than on the willingness to pay. To reduce the product cost from $10 to $5, the profit from a population of 300 people willing to pay at least $10 is $1500, regardless of the precise amount they are willing to pay. But as an empirical matter, accessing larger populations requires going to lower levels of willingness to pay (that's what it means to say that demand curves slope downward). Moreover, the nature of the current distribution of disposable wealth (as well as willingness to experiment with technology) around the world is such that the increase in population size is huge as we go down the rungs of willingness to pay. Finally, the proportional gain from reducing production costs is higher for populations with lower willingness to pay, and proportional gains might often be better proxies for the incentives to invest than absolute gains.
I made some minor edits to the TL;DR, replacing "downward-sloping demand curves" with "downward-sloping supply curves" and replacing "technological progress" with "exponential technological progress". Apologies for not having proofread the TL;DR carefully before.
57 comments
Comments sorted by top scores.
comment by Lumifer · 2014-04-12T00:12:50.185Z · LW(p) · GW(p)
I think Stein's Law is relevant here: "If something cannot go on forever, it will stop".
I also think that you have to be careful to be clear about exactly what growth you are talking about. I suspect that there is no universal growth curve for all technology (or for all "knowledge industries") and particulars matter. Economic growth, for example, is very malleable (it's basically "more of stuff that humans value") and so can morph over time. But the density of information storage, to take another example, has clear limits, at least according to current physics, and so its growth, even if exponential for a time, is ultimately limited.
comment by benkuhn · 2014-04-11T21:36:17.216Z · LW(p) · GW(p)
One story for exponential growth that I don't see you address (though I didn't read the whole post, so forgive me if I'm wrong) is the possibility of multiplicative costs. For example, perhaps genetic sequencing would be a good case study? There seem to be a lot of multiplicative factors there: amount of coverage, time to get one round of coverage, amount of DNA you need to get one round of coverage, ease of extracting/preparing DNA, error probability... With enough such multiplicative factors, you'll get exponential growth in megabases per dollar by applying the same amount of improvement to each factor sequentially (whereas if the factors were additive you'd get linear improvement).
Replies from: VipulNaik
↑ comment by VipulNaik · 2014-04-11T22:02:15.027Z · LW(p) · GW(p)
I'm actually writing another (long) post on exponential growth and the different phenomena that could lead to it. Multiplicative costs are on the list of plausible explanations. I've discussed these multiplicative stories with Jonah and Luke before.
I think that multiplicative costs is a major part of the story for the exponential-ish improvements in linear programming algorithms, as far as I could make out based on a reading of this paper: http://web.njit.edu/~bxd1947/OR_za_Anu/linprog_history.pdf
More in my upcoming post :).
UPDATE: Here's the post: http://lesswrong.com/lw/k1s/stories_for_exponential_growth/
comment by EHeller · 2014-04-12T01:34:36.205Z · LW(p) · GW(p)
I'd combine some ideas from the series of posts about majoring in physics that have turned up here: between 1970 or so and today, physics has moved from a great career choice to a pretty mediocre career choice, to a downright awful career choice. Chemistry is now undergoing the same transition.
Our supply of scientific labor is vastly outpacing demand; I don't see how this can fail to impact technological growth in the long term.
Traditionally, the government helped to fill the gap in profitable intermediate milestones by massively subsidizing basic research, but the push within universities for profitable patents (and a few other factors) has shifted the system towards more and more applied research.
Replies from: asr
↑ comment by asr · 2014-04-12T02:14:02.264Z · LW(p) · GW(p)
Traditionally, the government helped to fill the gap in profitable intermediate milestones by massively subsidizing basic research, but the push within universities for profitable patents (and a few other factors) has shifted the system towards more and more applied research.
It's a relatively recent tradition. Serious government funding of basic research didn't start until after WW2, as near as I can tell, anywhere in the world.
Also I am skeptical that "basic" and "applied" research is a useful distinction these days. A large fraction of science funding goes to biology and medical research. Understanding the mechanisms underlying biological processes gets you pretty far towards coming up with treatments that alter those mechanisms. But if "understanding how cells work, exactly" isn't basic research, what is?
Replies from: EHeller
↑ comment by EHeller · 2014-04-12T03:16:51.122Z · LW(p) · GW(p)
It's a relatively recent tradition. Serious government funding of basic research didn't start until after WW2, as near as I can tell, anywhere in the world.
It goes back much, much further. Newton was appointed to the royal mint, Leibniz worked for several rulers, Galileo was directly funded by the rulers of Florence,etc (I specifically named people from what I consider to be the beginning of science). The tradition in democratic governments dates back to WW2, but the tradition itself is much older.
Also I am skeptical that "basic" and "applied" research is a useful distinction these days. A large fraction of science funding goes to biology and medical research
The difference between "applied" and "basic" is the difference between biology and medical research. While biomed is booming, its incredibly hard to get a job as a biologist (Douglas Prasher is the norm, not the exception).
Replies from: asr
↑ comment by asr · 2014-04-12T03:49:56.791Z · LW(p) · GW(p)
It goes back much, much further. Newton was appointed to the royal mint, Leibniz worked for several rulers, Galileo was directly funded by the rulers of Florence,etc (I specifically named people from what I consider to be the beginning of science). The tradition in democratic governments dates back to WW2, but the tradition itself is much older.
All those people had government funding, but with the possible exception of Galileo, it wasn't funding "for basic science."
The one I know most about is Newton, and the example seems clearly misleading here. When he went to the mint he was already well-established and had done much of his important scientific work. (The Principia came out in 1687, and Newton went to the mint in 1696.) Moreover, this wasn't a funding source for scientific pursuits. He devoted a huge amount of time and energy to running the mint, including personally investigating and prosecuting counterfeiters. (See Thomas Levenson's entertaining Newton and the Counterfeiter.)
Leibniz, as near as I can gather from Wikipedia, was there to be an ornament to the court of Hannover, but it's not at all clear that they cared about his scientific or mathematical achievements. Can you point me to something specific?
Galileo was a professor, so I suppose that counts.
I grant that governments and rulers have funded philosophers and professors, for a long while. But big-money research, with billion-dollar budgets and massive labs with thousands of researchers is much newer.
The difference between "applied" and "basic" is the difference between biology and medical research. While biomed is booming, its incredibly hard to get a job as a biologist (Douglas Prasher is the norm, not the exception).
I don't think this is strong evidence for "insufficient funding." In the US, and to some extent elsewhere, research money is channeled towards graduate student assistantships and fellowships, and away from full-time mid-career researchers. As a result, regardless of the total degree of funding, the population of researchers is skewed towards young people. In consequence, there is a fierce competition for permanent jobs.
Replies from: EHeller
↑ comment by EHeller · 2014-04-12T04:17:52.563Z · LW(p) · GW(p)
The one I know most about is Newton, and the example seems clearly misleading here. When he went to the mint he was already well-established and had done much of his important scientific work.
Prior to the mint post, Newton was Lucasian Professor at Cambridge and received patronage from the Royal Society, of which he later became president. The Royal Society was founded with the blessing of (and supported financially by) the king, with the stated purpose of "advancing knowledge."
The French Royal Academy, instead of being simply patronized by the government, was created entirely as an organ of government.
Leibniz, while a court "ornament" was supported so that he could do his research, and his patrons supported other scientists at court for the same purpose. Galileo received generous patronage from the Medicis.
Sure, the political purpose probably was more about prestige than research, but I'd argue that funding basic research is always about prestige (in the Hanson sense), even in the post-WW2 democracies. The stated purpose, however, was basic research, and it clearly began a tradition that continues to this day of government-patronized basic science research.
Can we agree that my statement that government traditionally funds basic research is accurate?
I don't think this is strong evidence for "insufficient funding."
What would you consider evidence of insufficient funding? My point stands: funding for biomedical research is large and growing, while funding for basic biology is smaller and flat or shrinking. This leads to huge career differences between medical and biological researchers (the between-field differences can't be explained by the structure of the organization that funds both fields). The NIH's budget doubling in the 90s went almost entirely towards applied medical research.
The other big push in that direction comes from universities, who relatively recently noticed that licensing patents to industry is big business.
comment by ThisSpaceAvailable · 2014-04-12T01:12:44.276Z · LW(p) · GW(p)
A TL;DR of more than 300 words seems rather long to me. Also, for #1, did you mean to say "The industries that rely on knowledge goods tend to have long-run downward-sloping supply curves."?
"At the same time, it's not clear that lowering the cost of production for the 32 GB USB flash drive would significantly increase the number of people who would buy that."
Smart phones with less than 32 GB are still a significant part of the market. Wouldn't technology that allows cheaper 32 GB USB flash drives also allow 32 GB smart phones? Also, doesn't this intersect with SSD? I think that a lot of people would be willing to pay $100 for a 1 TB SSD.
Replies from: VipulNaik
↑ comment by VipulNaik · 2014-04-12T01:22:28.136Z · LW(p) · GW(p)
I appreciate your taking the time to comment.
A TL;DR of more than 300 words seems rather long to me.
True. I could try compactifying it more when I have time.
Also, for #1, did you mean to say "The industries that rely on knowledge goods tend to have long-run downward-sloping supply curves."?
Thanks! I fixed that.
Smart phones with less than 32 GB are still a significant part of the market. Wouldn't technology that allows cheaper 32 GB USB flash drives also allow 32 GB smart phones? Also, doesn't this intersect with SSD? I think that a lot of people would be willing to pay $100 for a 1 TB SSD.
I'd need to know more about the technological limitations to comment. Prima facie, I don't think that the cost of the flash memory is the constraint (32 GB flash drives cost ~$20 or less, and the NAND flash itself costs < $15). I'm not sure what the limiting factor is. It could be an issue of miniaturization (32 GB takes more space inside the phone so needs a more difficult manufacturing process). Or it could be a form of price discrimination by the manufacturers (though the competitive nature of the smartphone device market is an argument against price discrimination being viable in the long term).
Also, doesn't this intersect with SSD? I think that a lot of people would be willing to pay $100 for a 1 TB SSD.
That's possible. I don't have deep object level knowledge here. Whether the innovation will happen depends on whether the number of people who would initially (i.e., over the time horizon that companies need to justify investments) be willing to buy a 1TB SSD at that price is enough to recoup the initial research and investment costs needed for getting the price down to that level.
comment by Lumifer · 2014-04-12T00:14:56.969Z · LW(p) · GW(p)
I'd be happy if the next computer I buy (maybe 7-10 years down the line?) had a few terabytes of storage.
Replies from: Baughn
↑ comment by Baughn · 2014-04-12T13:38:04.275Z · LW(p) · GW(p)
Multiple disks, though, and setting up a system like that - RAIDZ2, whatever - requires a nontrivial level of understanding to go with the benefits.
Replies from: Lumifer
↑ comment by Lumifer · 2014-04-14T16:34:03.926Z · LW(p) · GW(p)
requires a nontrivial level of understanding
I don't think so -- if you have enough money you can just buy a pretty much idiot-proof NAS box (e.g. a Synology one) with as much storage as you want. Let me remind you that you can buy a 3Tb hard drive for about $140 now.
Replies from: Baughn
↑ comment by Baughn · 2014-04-15T12:48:30.593Z · LW(p) · GW(p)
Which is not something that the typical person, i.e. someone who barely understands the notion of "folders", can do. I despair at the thought of explaining that some of their data exists on the laptop and some of it on this box over there, let alone how and when to move data between them.
Replies from: milindsmart
↑ comment by milindsmart · 2016-08-25T17:56:59.009Z · LW(p) · GW(p)
There is no guarantee that there exists some way for them to understand.
Consider the possibility that it's only possible for people with a nontrivial level of understanding to work with 5 TB+ amounts of data. It could be a practical boost in capability due to understanding storage technology principles and tools... maybe?
What level of sophistication would you think is un-idiot-proof-able? Nuclear missiles? not-proven-to-be-friendly-AI?
comment by SebNickel · 2014-05-04T12:38:26.638Z · LW(p) · GW(p)
The most famous paradigmatic example is Moore's law, which predicts that the number of transistors on integrated circuits doubles approximately every two years. The law itself stood up quite well until about 2005, but broke down after that (see here for a detailed overview of the breakdown by Sebastian Nickel)
Thanks! But as you know, the overview of mine that you link to is about the breakdown of Dennard scaling, which is related to but really quite distinct from Moore's Law. I'm not sure how much this matters, but it struck me as misleading.
Replies from: VipulNaik
comment by Yosarian2 · 2014-04-15T17:38:02.796Z · LW(p) · GW(p)
I'm not sure that I see a strong demand-side argument here against exponential growth.
Even if demand for one specific technology falls (for example, people care less about having hard drive space), then you would expect demand for other technological advancements to increase; after all, demand is basically limited by consumer wealth, and in the long run if consumers are spending less money on something, they'll be spending more money on something else, which will encourage technological improvements in that field.
Overall, I think the exponential growth story on a societal level is larger than the computers/Moore's law type of exponential growth you're focusing on here. What I would say is that exponential growth comes from something like this:
Assumption 1. As technology improves and science advances, and as existing technologies become more widely deployed and cheaper and more uses are found for them, the total resources available to society as a whole increase.
Assumption 2. Society will continue to invest a certain percentage of its resources into advances in science and technology.
It's basically the 'compound interest' model of exponential growth; more investment capital we have as a species as a whole means more investment payoff, which then gives us more investment capital.
"Resources" is a vague term, but it seems true to me in a number of senses. Better technology means better scientific instruments, which means faster advances in science. More advances in science means more options opening up for technological progress. More technology also means a larger economy, with more industrial production, more efficient food production, more overall resources. More resources can be devoted to education, which increases the mental resources of the human race. More access to information, ect.
Basically, even if demand for consumer computer technology slows down, then I would just expect that capital to go into other kinds of research (industrial computer technology, or biotech, or energy research, or pure science, etc.), which would then increase the total capacity of the human race, and accelerate overall research. And then, even if the demand for improved computer technology doesn't justify spending X money right now, that same research will be both easier and cheaper to do, and require a much smaller relative use of resources, in the future. So even if computer technology slows down for the short term, I would expect overall exponential growth to continue to accelerate, until it drags computer technology along with it.
I think that's a common phenomenon; right now, we as a society are investing a huge amount of our resources into computer technology, and as a side effect that's dragging along technology improvement in dozens of other areas, everything from biology to automobiles. If we shift our focus, then overall advancement should continue, and likely would drag along technology advancement in everything, including computers. (Perhaps biological research or research into materials science suggests better ways of constructing computers, for example).
Replies from: VipulNaik
↑ comment by VipulNaik · 2014-04-15T19:08:48.432Z · LW(p) · GW(p)
I do think economic growth will continue to be exponential over short time horizons, though the exponent itself might change over time (it's unclear whether the change will be in the positive or negative direction). My focus here was on specific technologies whose continued exponential growth for the next 30 years or so is used as an argument for the imminence of a technological singularity.
As we shift from one paradigm of advancement to another, we may still have exponential growth, but the exponent for the new exponential growth paradigm may be quite different.
Replies from: Yosarian2, ColtInn
↑ comment by Yosarian2 · 2014-04-15T20:36:22.562Z · LW(p) · GW(p)
Fair enough. In that case, though, I think you then have to consider the possibilities that other forms of technological development might themselves lead to a singularity of a different type (biotech, for example, seems quite possible), or might at least lower the barrier and make it easier for someone to improve computer technology with fewer resources, making it profitable for people to continue to improve computers even with a lower payoff in forms of consumer demand.
That is: if there's only X level of consumer demand for "better computers" by whatever definition you want to use, that might not be enough to fund enough research to accomplish that right now, but in an exponentially growing economy with exponentially growing technology and resources, it should cost far less to make that advance in a few years.
So long as the whole economy and the whole mass of human science and technology continues to grow exponentially, I would expect computers to continue to improve exponentially; they may become a "lagging indicator" of progress instead of the cutting edge if other areas get a larger fraction of the research capital investment, but even that should be enough to maintain some kind of exponential curve.
Replies from: VipulNaik
↑ comment by VipulNaik · 2014-04-15T22:11:14.330Z · LW(p) · GW(p)
Yes, this is a plausible scenario. I personally put weight on this type of scenario, namely, that progress might stall and then resume once some complementary supply-side and demand-side innovations have been made and other economic progress has happened to support more investment in the area. I don't think this would be runaway technological progress. I might talk more about this sort of scenario in a future post.
Replies from: ColtInn, Yosarian2
↑ comment by ColtInn · 2014-04-16T00:29:15.402Z · LW(p) · GW(p)
I don't think this would be runaway technological progress
No reason to think it won't be runaway technological progress, depending on how you define runaway. The industrial revolution was runaway technological progress. Going from an economic output doubling time of 1000 years to 15 years is certainly runaway. The rate of growth ultimately stalled, but it was certainly runaway for that transitional period, even though there were stalls along the way.
Edited to add link.
If you haven't already seen a version of this talk by Robin Hanson, the first 20 minutes or so goes into this but it's interesting throughout if you have time.
http://www.youtube.com/watch?v=uZ4Qx42WQHo
↑ comment by Lumifer · 2014-04-16T00:54:30.699Z · LW(p) · GW(p)
No reason to think it won't be runaway technological progress
No reason? How about humans?
The rate of growth stalled but it was certainly runaway
Um. Runaway progress does not stall, by definition -- think about what "runaway" means.
Replies from: ColtInn
↑ comment by ColtInn · 2014-04-16T02:51:57.966Z · LW(p) · GW(p)
No reason? How about humans?
So we're talking about a human based runaway scenario? That's not gonna happen.
Um. Runaway progress does not stall by defintion -- think about what "runaway" means.
OK, that's what 'runaway' growth means. Can this even be predicted? I think not. How could you possibly ever know that you're in a runaway? The transition from agriculture to industry saw economic growth become roughly 65 times faster. I think if we saw global output accelerate by even half that in the next 20 years, most would be calling it a runaway scenario.
Replies from: Lumifer
↑ comment by Lumifer · 2014-04-16T03:34:49.768Z · LW(p) · GW(p)
So we're talking about a human based runaway scenario? That's not gonna happen.
We are talking about a runaway scenario in a human civilization, aren't we?
OK, that's what 'runaway' growth means.
So what does it mean?
Replies from: ColtInn
↑ comment by ColtInn · 2014-04-16T04:00:25.009Z · LW(p) · GW(p)
We are talking about a runaway scenario in a human civilization, aren't we?
I don't think that's possible. Do you? A runaway means a massive and ongoing boost in productivity. That seems achievable only by AI, full brain emulations, or transhumans that are much smarter and faster at doing stuff than humans can be.
So what does it mean?
I was agreeing (mostly). My point was that by that definition we could never predict, or even know that we are in the middle of, a runaway scenario. I did pose it as a question and you did not reply with an answer. So what do you think? If the doubling time in economic output decreased by 35 times over the next 2, or even 4, decades, would you think we are in a runaway scenario?
Replies from: Lumifer
↑ comment by Lumifer · 2014-04-16T04:28:06.455Z · LW(p) · GW(p)
achievable only by AI, full brain emulations, or transhumans
-
over the next 2, or even 4 decades
Situation in quote 1 will not happen within the time frame in quote 2.
Generally speaking, I understand "runaway" as "unstoppable", meaning both that it won't stop on its own (stall) and that we lost control over it.
And, by the way, understanding that you lost control is how you know you're in a runaway scenario.
Replies from: ColtInn
↑ comment by ColtInn · 2014-04-16T04:44:00.442Z · LW(p) · GW(p)
I did not mean to imply that the situation in quote 1 would happen within the timeframe of quote 2, and I don't think I did. It's a thought experiment and I think that is clear.
And, by the way, understanding that you lost control is how you know you're in a runaway scenario.
There are examples of this in real history from smart people who thought we'd lost control - see Samuel Butler. We have, arguably. The extent to which machines are now integral to continued economic prosperity is irreversible without unbearable costs (people will die).
Replies from: Lumifer
↑ comment by Lumifer · 2014-04-16T04:53:35.139Z · LW(p) · GW(p)
It's a thought experiment and I think that is clear.
I am confused. What is a thought experiment?
Replies from: ColtInn
↑ comment by ColtInn · 2014-04-16T05:07:39.936Z · LW(p) · GW(p)
My impression is that you are now evading questions and being deliberately provocative; but I'll play...
If the rate economic growth were to increase by x35, would you think you were in a runaway scenario?
http://en.wikipedia.org/wiki/Thought_experiment
Replies from: Lumifer
↑ comment by Lumifer · 2014-04-16T14:42:20.877Z · LW(p) · GW(p)
When I'm being deliberately provocative, it's... more noticeable :-D I also know what a thought experiment is.
What I was confused about is exactly which part of the whole discussion about exponential growth did you consider to be a thought experiment.
If the rate economic growth were to increase by x35, would you think you were in a runaway scenario?
If that were the only piece of information that I had, no, I would not think so. Insufficient data.
↑ comment by Yosarian2 · 2014-04-15T22:48:41.576Z · LW(p) · GW(p)
I personally put weight on this type of scenario, namely, that progress might stall and then resume once some complementary supply-side and demand-side innovations have been made and other economic progress has happened to support more investment in the area.
Yeah, so do I.
I'm not sure it makes a lot of difference in terms of long-run predictions, though. Let's say that for the next 10 years, we cut the amount of research we are doing into computers in half in percentage terms (so instead of putting X% of our global GDP into computer research every year, we put X/2%). Let's say we take that and instead invest it in other forms of growth (other technologies, biotech, transhuman technologies, science, infrastructure, or even bringing the third world out of poverty and into education, etc.) and maintain the current rate of global growth. Let's further say that the combination of global GDP growth and science and technology growth is roughly 7% a year, so that the global economy doubles every 10 years in how much it can devote to research. And then at the end of that period, computer research goes back up to X%.
In that case, that 10 year long research slowdown would put us getting to where we "should" have been in computer science in 2044 now happening in 2045 instead; if that's the point we need to be at to get a singularity started, then that 10 years long research slowdown would only delay the singularity by about 1.75 years. (edit: math error corrected)
And not only that: after a 10-year slowdown in computer science research, I would expect computers to become the new "low-hanging fruit", and we might end up devoting even more resources to it at that point, perhaps eliminating the time loss altogether.
Basically, so long as exponential growth continues at all, in technological and economic terms in general, I don't think the kind of slowdown we're talking about would have a huge long-term effect on the general trajectory of progress.
Replies from: Lumifer
↑ comment by Lumifer · 2014-04-16T00:41:30.825Z · LW(p) · GW(p)
As a general observation, you don't want to model growth (of any sort) as X% per year. You want to model it as a random variable with the mean of X% per year, maybe, and you want to spend some time thinking about its distribution. In particular, whether that distribution is symmetric and how far out do the tails go.
↑ comment by ColtInn · 2014-04-15T20:21:42.335Z · LW(p) · GW(p)
As we shift from one paradigm of advancement to another, we may still have exponential growth, but the exponent for the new exponential growth paradigm may be quite different.
Won't the rate of economic growth be different (much larger) by definition? I can't envisage a scenario where economic growth could be roughly as it is now or slower but we have experienced anything even approaching a technological singularity. Think of the change in growth rates resulting from the farming and industrial revolutions.
Something to puzzle over is the fact we have seen computational grunt grow exponentially for decade upon decade yet economic growth has been stable over the same period.
Replies from: VipulNaik, Yosarian2
↑ comment by VipulNaik · 2014-04-15T22:05:19.976Z · LW(p) · GW(p)
Won't the rate of economic growth be different (much larger) by definition?
Depends on the reason for the switch to a new paradigm. If the reason is that there are even more attractive options, then economic growth would accelerate. If the reasons are that we're running out of demand for improvement across the board, and people are more satisfied with their lives, and the technological low-hanging fruit are taken, then economic growth could be lower.
Replies from: ColtInn
↑ comment by ColtInn · 2014-04-15T22:35:08.387Z · LW(p) · GW(p)
I see. The demand-side story. I suppose it is technically feasible, but I find it unlikely in the extreme. There is nothing in history to suggest it and I don't think it fits with psychology. History is full of examples of people claiming we won't want for anything more once we have 'some foreseen progress'. We've had the luxury of being able to trade in some economic growth for more leisure, and still be better off than our grandparents, for a long time now, but haven't.
If the reasons are that we're running out of demand for improvement across the board, and people are more satisfied with their lives, and the technological low-hanging fruit are taken, then economic growth could be lower.
Do you mean lower than it is now? After a paradigm shift in advancement?
↑ comment by Yosarian2 · 2014-04-15T20:42:53.339Z · LW(p) · GW(p)
Something to puzzle over is the fact we have seen computational grunt grow exponentially for decade upon decade yet economic growth has been stable over the same period.
Economic growth itself is an exponential function. "The economy grows 3% every year" is exponential growth, not linear growth. I would say that it's only happened because of exponential technological progress; we never had that level of exponential growth until the industrial revolution. And I would say that most of the economic growth the first world has had over the past 20 years has come from recent technological advancement, mostly the twin communication and computing revolutions (PCs, cell phones, the internet, smartphones, and some smaller examples of both).
Replies from: ColtInn↑ comment by ColtInn · 2014-04-15T22:10:05.890Z · LW(p) · GW(p)
I'm not sure how you got from my comments that I don't understand exponential growth. But let me remake the point more clearly. The doubling time of economic output has remained stable at around 15 years. The doubling time of computational processing speed has remained roughly stable at around 24 months. I agree that economic growth in developed economies in the last 20 years has come largely from tech progress. But it has not had an effect on the rate of economic growth.
I would say that it's only happened because of exponential technological progress; we never had that level of exponential growth until the industrial revolution.
Long term global growth is achieved only through tech progress. We didn't have this rate of economic growth before the industrial revolution, that's true. It wasn't experienced during the agricultural phase. But foragers didn't enjoy the same growth rate as farmers. The rate of economic growth has not increased since well before the introduction of computers.
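The contrast being drawn here can be made concrete by converting the two doubling times into implied annual growth rates; a minimal sketch, using only the figures quoted in the comment:

```python
def annual_rate_from_doubling(doubling_years):
    """Implied annual growth rate for a quantity that doubles every `doubling_years` years."""
    return 2 ** (1 / doubling_years) - 1

print(f"economy doubling every 15 years -> ~{annual_rate_from_doubling(15):.1%} per year")
print(f"compute doubling every 2 years  -> ~{annual_rate_from_doubling(2):.1%} per year")
# Roughly 4.7% versus 41% per year: decades of ~41%/year growth in one input
# have not visibly changed the ~5%/year growth rate of the whole economy.
```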
comment by lavalamp · 2014-04-14T23:04:05.301Z · LW(p) · GW(p)
I think your expanded point #6 fails to consider alternative pressures for hard drive & flash memory demand. Consider places like Dropbox; they represent a huge demand for cheap storage. People probably (?) won't want huge(r) drives in their home computers going forward, but they are quite likely to want cloud storage if it comes down another order of magnitude in price. Just because people don't necessarily directly consume hard drives doesn't mean there isn't a large demand.
Consider also that many people have high-megapixel digital cameras, both still and video. Those files add up quickly.
Replies from: VipulNaik↑ comment by VipulNaik · 2014-04-15T19:19:23.641Z · LW(p) · GW(p)
Consider places like Dropbox; they represent a huge demand for cheap storage. People probably (?) won't want huge(r) drives in their home computers going forward, but they are quite likely to want cloud storage if it comes down another order of magnitude in price. Just because people don't necessarily directly consume hard drives doesn't mean there isn't a large demand.
This is a good point that I didn't address in the post. I'd thought about it a while back, but I omitted discussing it.
A few counterpoints:
- Dropbox is all about backing up data that you already have. Even if everybody used Dropbox for all their content, that would still only double the need for storage space (if Dropbox stores everything at 3 locations, then it would multiply the need for storage space by 4X). This doesn't create huge incentives for improvement.
- In practice, Dropbox and cloud services wouldn't multiply storage space needs by that much, because a lot of the content on these services would be shared across devices (for instance, Amazon's Cloud Music Service doesn't store a different copy of each track for each buyer; it just stores one, or a few, copies per track). And many people won't even keep local copies. This would reduce rather than increase local storage needs. Even today, many people don't store movies on their hard drives or on DVDs but simply rely on online streaming and/or temporary online downloads.
I should note that I'm somewhat exceptional: I like having local copies of things to a much greater extent than most people (I download Wikipedia every month so I can have access to it offline, and I have a large number of movies and music stored on my hard drive). But to the extent that the Internet and improved connectivity have an effect, I suspect it would range from something like multiplying demand by 4X (at the high end) to actually reducing demand.
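One way to see how wide that range is: a toy model of the total storage demand multiplier under different assumptions about replication, adoption, deduplication, and whether people keep local copies. All the parameter values are invented for illustration; none are real Dropbox figures.

```python
def demand_multiplier(replicas=3, cloud_adoption=1.0, dedup_share=0.0, keeps_local=1.0):
    """
    Total storage demand relative to a world where everyone keeps one local copy.
    replicas:       copies the cloud provider stores per unique object
    cloud_adoption: fraction of data that gets uploaded to the cloud
    dedup_share:    fraction of uploaded data that is shared/deduplicated across users
    keeps_local:    fraction of data users still keep locally
    """
    cloud = cloud_adoption * (1 - dedup_share) * replicas
    return keeps_local + cloud

# Everyone backs up everything, no dedup, everyone keeps local copies: the 4X high end.
print(round(demand_multiplier(replicas=3, cloud_adoption=1.0), 2))                 # 4.0
# Heavy dedup (shared music/movies) and most users dropping local copies:
print(round(demand_multiplier(replicas=3, cloud_adoption=1.0,
                              dedup_share=0.8, keeps_local=0.3), 2))               # 0.9, i.e. demand falls
```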
The point about still and video cameras is good, and I do see applications in principle that could be used to fill up a lot of disk space. I don't think there is a lot of demand for these applications at the current margin, though. How many people who aren't photographers (by profession or hobby) even think about the storage space their photos take up on their hard drives? How many people shoot and store enough video that they actually have to start thinking about disk space? I suspect the numbers involved here are negligible. But I could be mistaken.
Replies from: lavalamp↑ comment by lavalamp · 2014-04-17T18:03:16.624Z · LW(p) · GW(p)
90% agree. One other thing you may not know: both Dropbox and Google Drive have options to automatically upload photos from your phone, and you don't have to sync your desktop with them. So it's not clear that they merely double the needed space.
comment by Gunnar_Zarncke · 2014-04-13T21:27:25.552Z · LW(p) · GW(p)
6: How does the desire for more technological progress relate with the current level of a technology? Is it proportional, as per the exponential growth story?
Most of the discussion of laws such as Moore's law and Kryder's law focus on the question of technological feasibility. But demand-side considerations matter, because that's what motivates investments in these technologies.
Actually I consider this to be the more dominant part of the equation. In principle, the size of transistors could have been reduced much faster, namely in steps going right down to the resolution of the current lithographic technique. But surely this would have required higher investment at any single point. If you indeed have a constant demand factor, as suggested, then this demand can basically be met with the smallest technological improvement that shrinks the transistor size by the given factor within the time it takes to bring the improvement to market. Shrinking transistors faster has higher costs and risks and very limited benefits. Shrinking a bit faster might provide a competitive advantage, but this still has to be balanced against the costs and will not change the exponential curve (which, if the technology progressed straight toward the next fundamental boundary, would instead be a step function, where significantly different time and effort are 'only' needed to discover new lithographic techniques).
This sidesteps the fact that there are technological challenges besides lithography, like better clean rooms and better etching/doping, which have to keep pace with shrinking. And it also ignores that there is a large algorithmic component (the complexity of the ICs themselves, and the complexity of designing and routing ever more complex ICs). But these should actually have led to super-exponential effects, because a) they are not strictly needed to match demand (until the next physical boundary), and b) using your own improved tools to go beyond pure shrinking, allowing quicker design and additional speed-ups, should show up as super-exponential development. But neither does.
From this I conclude that it is basically not the technology that is limiting, but the demand.
All this is based on the assumption that there are not too many limiting technological barriers. If there were, that would suggest lots of steps, which might look exponential-like from a distance. I see that there are many small technological improvements, but these are not limiting; they are rather cost optimizations and thus don't count for the present analysis.
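A toy numerical sketch of this demand-limited story, with invented shrink factors: if producers only ever ship the smallest improvement that meets a constant demand factor, the observed curve is a clean exponential whose exponent is set by demand, even when a faster shrink would have been technically feasible.

```python
DEMAND_SHRINK_PER_CYCLE = 0.7     # shrink factor the market actually rewards (assumed)
FEASIBLE_SHRINK_PER_CYCLE = 0.5   # shrink factor that is technically possible (assumed)

shipped_size = 1.0                # arbitrary starting feature size
feasible_size = 1.0
for cycle in range(1, 6):
    # Producers pick the *least* aggressive shrink that still meets demand.
    shipped_size *= DEMAND_SHRINK_PER_CYCLE
    feasible_size *= FEASIBLE_SHRINK_PER_CYCLE
    print(f"cycle {cycle}: shipped {shipped_size:.3f}, technically feasible {feasible_size:.3f}")

# The shipped curve is still exponential, but its exponent is set by demand,
# not by what the lithography could have delivered.
```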
Replies from: VipulNaik
comment by ColtInn · 2014-04-13T18:25:21.499Z · LW(p) · GW(p)
Hi,
First contribution here, but I've been lurking since the move from Overcoming Bias. Play nice :) I think this is an important subject that few people are giving attention to, so I thought I'd offer some comments that might assist.
My comment was deemed too long so I've split it up. If that's bad form, let me know. I considered removing quotations of the OP, but it would make it too difficult to read.
If you are going to use pieces of standard microeconomics in future versions of this analysis it might be best to spend a bit more time defining them more clearly and explaining how they map to the subject. If there is any confusion in the assumptions contained in the theory it carries through to the analysis of the case in question. It may well all be clear in your mind but it doesn't seem, at least to me, to unfold clearly in the text. But it might just be me...
Some examples.
From #1
But the short run looks very different from the long run. A key difference is that in the short run, suppliers cannot undertake fixed costs of capital investment and research to improve technology. In the long run, they can. Therefore, if some type of good is a lot easier to produce in bulk than in small quantities, then in the long run larger demand would lead to lower prices, because suppliers have lower production costs. For goods that enjoy efficiencies of scale in large-volume production, therefore, the long-run supply curve would be downward-sloping. An industry of this sort is called a decreasing cost industry
As you point out, in the short run, producers cannot immediately alter their fixed cost inputs, e.g. new factories (r&d is subtly different in the way it reduces costs, which I'll return to), but in the long run they can - all costs are variable in the long run. So, if there is an increase in demand for good A, and production of good A is characterised by constant- or increasing-returns to scale or economies of scale (these concepts are different with the same result, one referring to a fixed proportion of inputs and the other to a varying proportion of inputs), over time we end up with an increase in the quantity produced of A and a decrease in the price of A. But there need not be a downward sloping supply curve. In the case of constant- or increasing-returns to scale you just make more A because more A is demanded. In the case of economies of scale you just select, from an envelope of possible short run average cost curves along the long run cost curve, the one that meets the quantity demanded and maximises profit.
A decreasing cost industry is something different. Let's say industry i (multiple firms) produces good z. In producing z they use inputs c, d, and e. Industry i is characterised by economies of scale and experiences an unexpected increase in demand for good z. Over time the firms ramp up with new factories and the industry grows to a medium size. Their input e is a non-commodity input which, because they are now large bulk purchasers of e, they buy at a discount. All other input costs remain the same in real terms. The result is a lower average cost of production and a downward sloping industry supply curve. They may in fact be gaining from increasing returns to scale from producers of input e. Car manufacturing is one example.
From #2
The typical explanation for why some industries are decreasing cost industries... The fixed costs of setting up a factory that can produce a million hard drives a year is less than 1000 times the fixed cost of setting up a factory that can produce a thousand hard drives a year.
This again is economies of scale and not a decreasing cost industry. Also, it's just one firm, not even an industry. To clarify what I wrote above a little: just because a firm experiences decreasing costs, that does not make it a decreasing-cost industry in the way that this special case is treated in microeconomic theory.
I am thinking of questions like: "What happens if demand is high in one year, and then falls? Will prices go back up?" It is true that some forms of investment in infrastructure are durable, and therefore, once the infrastructure has already been built in anticipation of high demand, costs will continue to stay low even if demand falls back. However, much of the long-term infrastructure can be repurposed causing prices to go back up.
Your meaning is not clear to me here. A lot of infrastructure is durable. By "costs will continue to stay low even if demand falls back", do you mean even if demand increases once again? Costs will definitely stay low if demand falls, but it's not clear that that is your meaning. Why would prices go back up if the infrastructure has been re-purposed to some other market? Do you mean that because demand has decreased, some firms will exit that market by re-purposing their gear, so those that remain will constitute a smaller market supply (a supply curve shift) and therefore a new, higher equilibrium price will emerge? I wouldn't call the airline route change a re-purposing, but simply selling to a different consumer in the same market. The hard-drive to flash memory chips case I would be more inclined to call re-purposing, but that's related to economies of scope and they'd probably be doing it already. There is already a huge body of theory on this stuff, so maybe instead of using terms like 'time-directionality' and 'efficiency of scale' you could revisit the theory and explain your hypothesis in very standard terminology?
Technology, particularly the knowledge component thereof, is probably an exception of sorts.
I'm far from convinced of this, if by technology you mean modern cutting edge computation tech, for example. ALL goods ARE technology. From programming languages to aircraft to condoms to corkscrews. Knowledge, for a very long time, has been easy to reproduce. The fixed cost of producing a dead-tree book is huge, but the marginal cost of another one, once it is written and typeset and first printed, is tiny in comparison.
Consider a decreasing cost industry where a large part of the efficiency of scale is because larger demand volumes justify bigger investments in research and development that lower production costs permanently (regardless of actual future demand volumes). Once the "genie is out of the bottle" with respect to the new technologies, the lower costs will remain — even in the face of flagging demand. However, flagging demand might stall further technological progress.
Again, this is simply how it goes, and always has, for almost all products that at any stage had any value. Where demand flags, think of it as re-purposing knowledge.
Thus, people who have been in the business for ten years enjoy a better price-performance tradeoff than people who have been in the business for only three years, even if they've been producing the same amount per year.
This is often referred to as the learning curve in microeconomics. Think about a firm whose production of a good is characterised by increasing returns to scale. Their average cost will decrease as they increase production. Over that same period they get better and faster - removing bottlenecks of various kinds through optimisation of their processes, because they become more intimate with their production problems via experience and find ever more partial solutions. Whilst the decrease in average cost due to economies of scale is along the long run average cost curve, the decrease in average cost due to this learning is a shift in the long run average cost curve. This is also routine as one would imagine and a good example is a potato chip factory.
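A hedged sketch of the distinction being drawn here, with illustrative (not estimated) exponents: unit cost as a product of a scale term, which depends on the current rate of output, and a classic experience-curve term, which depends on cumulative output. It reproduces the earlier claim that a firm ten years into the business has lower unit costs than one three years in, even at the same annual output.

```python
import math

def average_cost(rate_of_output, cumulative_output,
                 scale_elasticity=0.15, learning_rate=0.2):
    """Unit cost as a function of the current output rate and cumulative experience."""
    scale_effect = rate_of_output ** (-scale_elasticity)   # a move along the long-run curve
    # Experience-curve term: each doubling of cumulative output cuts unit cost
    # by `learning_rate` (20% here), i.e. the whole curve shifts down.
    learning_effect = (1 - learning_rate) ** math.log2(max(cumulative_output, 1))
    return 100 * scale_effect * learning_effect

# Same yearly output of 1,000 units, different years of cumulative experience:
print(round(average_cost(1_000, 3 * 1_000), 2))    # three years in the business
print(round(average_cost(1_000, 10 * 1_000), 2))   # ten years in: lower unit cost
```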
Replies from: ColtInn, VipulNaik, VipulNaik, VipulNaik↑ comment by ColtInn · 2014-04-13T18:26:17.581Z · LW(p) · GW(p)
Part 2
From #3
The "genie out of the bottle" character of technological progress leads to some interesting possibilities. If suppliers think that future demand will be high, then they'll invest in research and development that lowers the long-run cost of production, and those lower costs will stick permanently, even if future demand turns out to be not too high.
Well, they might invest in r&d. If they can take advantage of increasing returns to scale in meeting demand, they'd likely just build more whatevers.
Assuming you like the resulting price reductions, this could be interpreted as an argument in favor of bubbles, at least if you ignore the long-term damage that these might impose on people's confidence to invest.
Interesting, but I think the damage to willingness to invest would be too dear a loss, and I wouldn't assume it away. I don't know how much extra was gained through the tech bubble in terms of tech we wouldn't otherwise have had. The bubble was in speculative financial market investment and not necessarily in actual real-economy stuff. I'll grant there is probably some crossover. Still, if people see real value in production they will demand those goods. Did we get some gains in housing tech that we wouldn't otherwise have had because of the housing bubble?
The crucial ingredient needed for technological progress is that demand from a segment with just the right level of purchasing power should be sufficiently high. A small population that's willing to pay exorbitant amounts won't spur investments in cost-cutting:
Won't it? I think it will. Lower costs -> higher accounting profits.
In a sense, the market segments willing to pay more are "freeriding" off the others — they don't care enough to strike a tough bargain, but they benefit from the lower prices resulting from the others who do
I don't follow this. Given that they are early adopters, only other early adopters are present in the market at that point in time. How can they be freeriding off consumers who are willing to strike a tough bargain, as you say, if those bargain strikers are not present in the market by definition? You mean they benefit later when the later adopters enter? Presumably the early adopters have by that time moved on to something newer and better.
Note, however, that if the willingness to pay for the new population was dramatically lower than that for the earlier one, there would be too large a gap to bridge.
This need not be the case especially in the face of further tech improvements in production or dramatically lower costs of factors of production, even for exogenous reasons.
In particular, the vision of the Singularity is very impressive, but simply having that kind of end in mind 30 years down the line isn't sufficient for commercial investment in the technological progress that would be necessary. The intermediate goals must be enticing enough.
True enough, it seems. And this is the way of things. The prospect of offering transcontinental flight to a European forager millennia ago would have been absurd. A hang-gliding tour, maybe not so much.
From #5
I can broadly support the segments you've categorised. But I disagree with the notion that "Progress in all three areas is somewhat related but not too much. In particular, the middle is the part that has seen the most progress over the last decade or so, perhaps because demand in this sector is most robust and price-sensitive, or because the challenges there are the ones that are easiest to tackle."
What do you mean by progress? Does progress mean more stuff being made in that segment, as in, being able to produce more of it? Production technology or the technology product itself? In either case I think they are inextricably linked from the top down.
From #6 and 7
When I first began using computers they were 16k. I was 7. I take your point about the lack of urgency for average PC users to have a great deal more ROM or RAM. But we use computers to do tasks, not for their own sake, and I take your point about complementary tasks. Where I disagree is "On either end, therefore, the incentives for innovation seem low". If the tools are there, people will find ever more awesome ways of using them, and that is the history of tech progress!
Keep at it. Happy to go into more detail if it might be useful. I've been thinking a lot lately about the order of realisation of various 'singularity' techs and why it matters. Anyone interested? If so I'll post up my thoughts.
Replies from: VipulNaik, VipulNaik, VipulNaik, VipulNaik↑ comment by VipulNaik · 2014-04-15T16:19:57.984Z · LW(p) · GW(p)
What do you mean by progress? Does progress mean more stuff being made in that segment, as in, being able to produce more of it? Production technology or the technology product itself? In either case I think they are inextricably linked from the top down.
I meant technological progress that improves the price-performance tradeoff. I measure it by "what sort of prices do I see when I go to Amazon and search for USB flash drives?"
Where I disagree is "On either end, therefore, the incentives for innovation seem low". If the tools are there, people will find ever more awesome ways of using them, and that is the history of tech progress!
I do agree that if the tools are there and people get them (essentially) for free, they'll find ways to use them. If the tools are there but at exorbitant prices, they won't. This gets back to the question of whether it's easy enough to improve the price-performance tradeoff sufficiently dramatically to get to the threshold where people are willing to pay for it. The existence of early adopters and intermediate populations can help bridge the chasm.
Thanks once again for your comments!
↑ comment by VipulNaik · 2014-04-15T16:00:10.517Z · LW(p) · GW(p)
Well, they might invest in r&d. If they can take advantage of increasing returns to scale in meeting demand, they'd likely just build more whatevers.
This is true, and relates to the points I made later about the nature of demand and production structure mattering.
Won't it? I think it will. Lower costs -> higher accounting profits.
See the PS where I go into this in more detail.
I don't follow this. Given that they are early adopters, only other early adopters are present in the market at that point in time. How can they be freeriding off consumers who are willing to strike a tough bargain, as you say, if those bargain strikers are not present in the market by definition? You mean they benefit later when the later adopters enter? Presumably the early adopters have by that time moved on to something newer and better.
Perhaps my language wasn't clear. Obviously, they're not freeriding at the point in time when the market is young. But by the time the market gets large enough and costs drop, they are effectively freeriding.
For instance, I would probably pay something like $50/month for Facebook, but I get it for free because Facebook knows that most of its users wouldn't be willing to pay for their service, so they offer it for free. So I save $50/month, freeriding on the stinginess of other users.
You may argue that the early adopters paid their dues by buying the technology early on. But there could be a bunch of young people who "would have been early adopters" if they'd been around in the technology's infancy but they weren't around.
This need not be the case especially in the face of further tech improvements in production or dramatically lower costs of factors of production, even for exogenous reasons.
Yes, that is correct, it need not be the case. At the same time, I think it's a consideration.
More in my next reply comment.
↑ comment by VipulNaik · 2014-04-15T15:53:00.343Z · LW(p) · GW(p)
This is often referred to as the learning curve in microeconomics.
I already mentioned (and linked to) the term "experience curve effects" which is synonymous. I didn't claim this is unique to any specific industries, just that it was somewhat related to the point about the time-directionality of technology. I had written:
This sort of time-directionality is closely related to (though not the same as) the idea of experience curve effects: instead of looking at the quantity demanded or supplied per unit time in a given time period, it's more important to consider the cumulative quantity produced and sold, and the economies of scale arise with respect to this cumulative quantity.
You write:
Whilst the decrease in average cost due to economies of scale is along the long run average cost curve, the decrease in average cost due to this learning is a shift in the long run average cost curve. This is also routine as one would imagine and a good example is a potato chip factory.
Yes, I agree.
↑ comment by VipulNaik · 2014-04-15T15:48:40.912Z · LW(p) · GW(p)
I'm far from convinced of this, if by technology you mean modern cutting edge computation tech, for example. ALL goods ARE technology. From programming languages to aircraft to condoms to corkscrews. Knowledge, for a very long time, has been easy to reproduce. The fixed cost of producing a dead-tree book is huge, but the marginal cost of another one, once it is written and typeset and first printed, is tiny in comparison.
I agree that knowledge relevant to production is technology. But still, the fraction of a product's cost that's attributable to technology can vary widely. And the fraction that's attributable to new technology at the margin can vary even more widely.
For instance, printing presses for books are technology, but they're not technology at the margin. The technology has already been invented. When you set up a printing press based on old, pre-existing technology, you are investing in capital and labor, not in technology (except insofar as your buying the equipment played an incentivizing role retrospectively for the people who came up with the relevant printing press technology).
The ratios also matter. If we're shipping physical books, then the marginal cost is over a dollar. If we're letting people download e-books, the marginal cost is under a cent. If we think of the content of the book as the "technology" (insofar as it contains ideas) then the ratio of technology fixed cost (the cost of writing the book) to the costs of selling/distributing the book is a lot higher in the e-book case because the cost of selling/distributing is lower.
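A toy calculation of those ratios, with made-up round numbers rather than real publishing figures, just to show how the fixed "content" cost comes to dominate once the per-copy distribution cost collapses:

```python
FIXED_COST = 50_000          # assumed cost of writing/editing the book (the "technology")
MARGINAL_PHYSICAL = 1.50     # assumed printing + shipping cost per physical copy
MARGINAL_EBOOK = 0.005       # assumed bandwidth/hosting cost per e-book download

copies = 100_000
for label, marginal in [("physical", MARGINAL_PHYSICAL), ("e-book", MARGINAL_EBOOK)]:
    distribution = marginal * copies
    print(f"{label:9s} fixed:distribution ratio = {FIXED_COST / distribution:,.1f}")
# At the same volume, the fixed content cost dwarfs distribution for e-books
# (ratio ~100) but not for physical books (ratio ~0.3), which is the point about
# technology mattering much more at the margin in the e-book case.
```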
↑ comment by VipulNaik · 2014-04-15T15:31:44.674Z · LW(p) · GW(p)
I'll reply to your points one by one, and link to these replies from the main post.
On your point about short and long run and fixed versus variable costs.
As you point out, in the short run, producers cannot immediately alter their fixed cost inputs, e.g. new factories (r&d is subtly different in the way it reduces costs, which I'll return to), but in the long run they can - all costs are variable in the long run. So, if there is an increase in demand for good A, and production of good A is characterised by constant- or increasing-returns to scale or economies of scale (these concepts are different with the same result, one referring to a fixed proportion of inputs and the other to a varying proportion of inputs), over time we end up with an increase in the quantity produced of A and a decrease in the price of A. But there need not be a downward sloping supply curve. In the case of constant- or increasing-returns to scale you just make more A because more A is demanded. In the case of economies of scale you just select, from an envelope of possible short run average cost curves along the long run cost curve, the one that meets the quantity demanded and maximises profit.
Thanks for that observation. I now realize that I wasn't clearly distinguishing between two related but distinct ideas:
- Even in the short run, a huge proportion of fixed costs means that the average cost per widget can go down as production increases, over a wide range of production quantities. This could happen even though the marginal cost goes up, because the fixed costs still explain the bulk of production cost. The short-run supply curve could still be upward-sloping, but the short-run average total cost curve would be downward-sloping for quite a while (this is the average versus marginal distinction).
- The long-run supply curve is selected as an envelope of short-run supply curves, where different short-run supply curves consider different production scenarios. The reason why we can consider multiple short-run supply curves is that we have freedom to vary the "fixed" costs. Whether the long-run supply curve is upward-sloping or downward-sloping will depend on whether things become cheaper per unit when we spend more.
I do see now that the ideas are conceptually distinct. They are related in the following very trivial sense: if fixed costs contributed nothing at all to production, then the short run and long run behavior wouldn't differ. If, however, fixed costs do contribute something, then while we can say that the long-run supply curve is not as upward-sloping as the short-run supply curve, we can't categorically say that it will be downward-sloping. It could be that to double the production, the best strategy is to double fixed and variable costs, so that the long-run supply curve would just be flat (regardless of whether fixed or variable costs dominate).
I guess what I was additionally (implicitly) assuming is not just that the fixed costs dominate production, but that the fixed costs themselves are subject to economies of scale. In other words, I was thinking both that "the cost of setting up the factory to manufacture widgets dominates the cost of labor" and that "the cost of setting up the factory scales sublinearly with the number of widgets produced." If both assumptions are true, we should see downward-sloping long-run supply curves. In the real-world scenarios I had in mind, this is approximately true. But there are many others where it's not.
Thanks for pointing this out!
PS: My intuition was coming from the observation that the more dominant a role fixed costs play in the production process at the margin, the larger the divergence between the short-run and long-run supply curves. But of course, fixed costs could be a huge share of the production process money-wise and yet not play a dominant role at the margin (i.e., when comparing different short-run supply curves). And further, even if the long-run supply curve is much less upward-sloping than the short-run supply curve, that still doesn't make it downward-sloping.
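The two assumptions spelled out above (fixed costs dominating, and the fixed cost itself scaling sublinearly with planned output) can be made concrete with a small numerical sketch; the numbers and the 0.6 scaling exponent are invented for illustration.

```python
def long_run_average_cost(quantity, base_fixed=1_000_000, variable=2.0, scale_exp=0.6):
    """Average cost per unit when the plant is sized for `quantity` units."""
    fixed = base_fixed * (quantity / 100_000) ** scale_exp   # sublinear fixed cost
    return (fixed + variable * quantity) / quantity

for q in (100_000, 1_000_000, 10_000_000):
    print(f"planned output {q:>10,}: average cost ~ {long_run_average_cost(q):.2f}")
# With scale_exp < 1 the long-run average cost keeps falling with scale (a
# downward-sloping long-run curve); set scale_exp = 1 and it flattens out,
# matching the "just double fixed and variable costs" case described above.
```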
comment by ChristianKl · 2014-04-15T16:06:44.726Z · LW(p) · GW(p)
What's the point of a 1 TB USB flash drive? I would want to have 1 TB on my mobile phone, and the ability for my mobile phone to communicate with whatever computer I might want to transfer data to.
I personally went from an HTC Hero with less than 1000 MB of internal storage to a Moto G 8 GB phone, and I feel like I made a mistake in not going for more storage space.
For my notebook I like the fact that it has a silent SSD. But I need to use an external hard disk to store all my data, and that is not an SSD and therefore can make noise.
Replies from: gwern↑ comment by gwern · 2014-04-15T17:04:40.775Z · LW(p) · GW(p)
What's the point of a 1 TB USB flash drive?...For my notebook I like the fact that it has a silent SSD. But I need to use an external hard disk to store all my data, and that is not an SSD and therefore can make noise.
If you can have a 1TB USB flash drive, then presumably you can have a 1TB internal drive which will store all of your files. (As indeed I have at this moment; well worth the ~$550.)
Replies from: ChristianKl↑ comment by ChristianKl · 2014-04-15T21:57:48.424Z · LW(p) · GW(p)
Yes, but that doesn't change the fact that looking at the usefulness of a 1TB USB flash drive is not a useful way to look at incentives to develop better storage media.
comment by ChristianKl · 2014-04-15T15:48:05.690Z · LW(p) · GW(p)
When it comes to Quantified Self technology such as MyBasis, having higher storage capacity is very useful. Ideally you want to store information such as skin conductance at 1000 data points per second.
There are many cases where sensors and cameras aren't that expensive and you could store a lot more data if you had enough storage.
The limit on data retention for London surveillance videos isn't privacy law but the cost of storage. Even if private consumers have no need for more disk space, companies want to do big data and need storage space.
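A quick back-of-the-envelope check on the 1000-samples-per-second figure above, assuming (purely for illustration) 4 bytes per sample:

```python
SAMPLES_PER_SECOND = 1000
BYTES_PER_SAMPLE = 4                      # e.g. a 32-bit float, assumed
SECONDS_PER_DAY = 24 * 60 * 60

bytes_per_day = SAMPLES_PER_SECOND * BYTES_PER_SAMPLE * SECONDS_PER_DAY
print(f"per day:  {bytes_per_day / 1e6:.0f} MB")         # ~346 MB per day
print(f"per year: {bytes_per_day * 365 / 1e9:.0f} GB")   # ~126 GB per year, per sensor stream
# Multiply by several sensors (heart rate, accelerometer, etc.) and several
# years, and consumer-scale storage demand stops looking saturated.
```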
comment by [deleted] · 2014-04-11T20:03:48.247Z · LW(p) · GW(p)
Perhaps you can test your model by applying it to the transition from oral culture to literacy, printed culture to electronic, or similar transitions which have already happened. Did what you are predicting happen then?