Here's my take on "something to protect" from personal experience:
Finding something to protect is likely quite difficult for many people. I was certainly headed in that direction, and making progress, but after having a child my "power" leveled up by orders of magnitude (if not my ability to wield it). I didn't have a child for this purpose, and the magnitude of this effect was not knowable in advance, even if the sign seemed likely.
The fact of having a child does not make me more rational. It does, however, provide a very large incentive to become more rational and to apply those gains where they will earn the highest returns, and away from rationality as consumption. It's not simply that my priorities are different, but that I prioritize better, and strive for continuous improvement in this triage (or: I prioritize prioritizing), because consequences are real and I have more to protect.
I also do not attempt to be rational about everything. That seems wrong-headed; another way to put it is that I prioritize my rationality expenditure. This is partly a willpower budget thing, partly a time budget thing. It's not that I am deliberately irrational about any one thing, but that unless I find a reason to evaluate something using the tools of reason, I don't - I assume the evolutionary reason for my carrying on as usual is good enough, so that I can get on with the business of doing stuff I have prioritized rationally.
When my toddler runs up to me, seemingly for no reason, wanting a hug, I don't think things like "What does he really want?" "What does this signal?" "The sensations I now feel, and the thoughts I am having, are not actually love - that is a social construct mapping to the physiology that evolved in order to protect the genes I have passed on... etc etc". Rather, I think something more like "Wow, it's so nice that he wants a hug from Dad. He's such a lovely boy" - because I do think those things, and the cause need not be investigated too deeply for my purposes. Then I can get back to the work of trying to make his future a good one, and of making sure that in his future more futures are good ones. It's a kind of meta-rational approach. This is an extreme case, of course, just to demonstrate the idea.
This approach demonstrates two important issues for me. The first is that incentives matter, and aligning incentives matters. I want to minimise any principal-agent problems over my set of incentives. The second is that I've recognised that I have more power, gained from having something to protect. People often say things like "now that you have children you have to be more responsible". They often mean that you should work more, or play less, or that now you have to conform to our values more (and signal it), or similar. I see it differently. The responsibility is not in doing a bunch of things, it is in working out what to do, and it was not just given to me, but is an internally generated desire to achieve good. It means that I think more carefully about goals and outcomes, and more easily disregard non-productive actions and plans. I certainly do this more than I did before having children, and likely more than I would otherwise have.
A couple of tangentially related things I find useful:
[http://www.overcomingbias.com/2014/06/you-cant-handle-the-truth.html Don’t Be “Rationalist”]
[http://www.davidhume.org/search.html?T1=on&T2=on&T3=on&A=on&L=on&ad=on&es=on&E=on&M=on&P=on&N=on&D=on&q=%22be+still+a+man%22 "Be a philosopher; but, amidst all your philosophy, be still a man."] -David Hume
Many note taking apps allow you to attach a picture. You could take notes with a pen when it is inappropriate to use your phone, then take a picture of those notes and attach them to an entry in the app. Those notes are then digitally searchable.
Six months ago I reverted to carrying and using a notebook. Prior to that I'd been using OneNote on my phone, with some success. The main reasons I began using a notebook again are: 1) when thinking about a particular problem I like to attempt to describe its features in words, with a graph, and with math, where possible; 2) I find that I think slightly more clearly when I have to use a pen, and tend to be able to recall the material better - not sure why this is.
I use a notebook that is about one third wider, slightly longer, and around three times thicker than my phone. It's pocket-able in most scenarios. The notebooks I use have a hard cover, an elastic band attached (which I use to keep a bullet space pen attached horizontally along the top of the book for a perfect fit) and 250 lined pages.
I begin using it from the front for general notes, and record quotes or specific segments of text from the back. Collecting quotes together makes them much easier to find when the pad begins to fill up. I also underline each entry in the general notes so that topics are clearly demarcated, making each easier to find later on.
Another related habit I've gotten into over this period is using a single-document notepad in the form of a Google Docs file. I have headings for books and book chapters, journal articles, blog posts, other articles, quotes, and theory snippets/examples. If I have had cause to read a journal article, for example, I'll take a few minutes to write up a summary and the full reference for later use, even if I had no intention at the time of using it later. This has been a tough habit to form, but has proven very beneficial over the few months that I've been doing it. Of course, if a blog post or newspaper article was semi-interesting, but useful only as a consumption good, I'll usually not record it.
Between these two forms of general note taking I've found that most of my needs are covered. I lost the ability to attach photos when switching back to a pen, but I haven't missed it much. I still take the pictures; it just adds an extra step to collate them, but it is something I rarely do. Before putting a full notepad to rest on the bookcase, I'll go through it and add relevant entries to my general notes Google doc (I've usually already done this) or add them in expanded form to specific topic documents.
A drawback is the potential to lose the notebook at any time, and this is where digital, connected versions are vastly superior. When I can draw and write by hand on a widely used phone as easily as I can in a notebook, I'll switch. My solution for words, graphs and maths note-taking on the phone was to type the words into OneNote, then find a pen and paper, draw the pictures and write the symbols, then take photos and add them to the note. This was fine when a pen and paper were readily available, but they often weren't.
My impression is that you are now evading questions and being deliberately provocative; but I'll play...
If the rate of economic growth were to increase 35-fold, would you think you were in a runaway scenario?
I did not mean to imply that the situation in quote 1 would happen within the timeframe of quote 2, and I don't think I did. It's a thought experiment and I think that is clear.
And, by the way, understanding that you lost control is how you know you're in a runaway scenario.
There are examples of this in real history from smart people who thought we'd lost control - see Samuel Butler. We have, arguably. The extent to which machines are now integral to continued economic prosperity is irreversible without unbearable costs (people will die).
We are talking about a runaway scenario in a human civilization, aren't we?
I don't think that's possible. Do you? A runaway means a massive and ongoing boost in productivity. That seems achievable only by AI, full brain emulations, or transhumans that are much smarter and faster at doing stuff than humans can be.
So what does it mean?
I was agreeing (mostly). My point was that by that definition we could never predict, or even know that we are in the middle of, a runaway scenario. I did pose it as a question and you did not reply with an answer. So what do you think? If the doubling time of economic output decreased by a factor of 35 over the next 2, or even 4, decades, would you think we are in a runaway scenario?
Precisely! So focus on the middle of the distribution, not the extremes.
No reason? How about humans?
So we're talking about a human based runaway scenario? That's not gonna happen.
Um. Runaway progress does not stall by definition -- think about what "runaway" means.
OK, that's what 'runaway' growth means. Can this even be predicted? I think not. How could you possibly ever know that you're in a runaway? The transition from agriculture to industry made economic growth roughly 65 times faster. I think if we saw global output accelerate by even half that in the next 20 years most would be calling it a runaway scenario.
I don't think this would be runaway technological progress
No reason to think it won't be runaway technological progress, depending on how you define runaway. The industrial revolution was runaway technological progress. Going from an economic output doubling time of 1000 years to 15 years is certainly runaway. The rate of growth ultimately stalled, but it was certainly runaway for that transitional period, even though there were stalls along the way.
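To make the doubling-time arithmetic concrete, here's a quick sketch. The 1000-year and 15-year doubling times are the illustrative figures from this thread, not measured data:

```python
# Convert a doubling time into an implied annual growth rate,
# using the rough figures quoted in this thread.

def annual_growth_rate(doubling_time_years):
    """Annual growth rate implied by a given doubling time."""
    return 2 ** (1 / doubling_time_years) - 1

agricultural = annual_growth_rate(1000)  # roughly 0.07% per year
industrial = annual_growth_rate(15)      # roughly 4.7% per year

# "Roughly 65 times faster" is just the ratio of the doubling times.
speedup = 1000 / 15

print(f"agricultural era: {agricultural:.4%} per year")
print(f"industrial era:   {industrial:.4%} per year")
print(f"speed-up factor:  {speedup:.1f}x")
```

The point is that "65 times faster" sounds dramatic, but it comes from a change in doubling time, not a single year's jump, which is exactly why it is hard to recognise a runaway from inside it.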
Edited to add link.
If you haven't already seen a version of this talk by Robin Hanson, the first 20 minutes or so goes into this but it's interesting throughout if you have time.
http://www.youtube.com/watch?v=uZ4Qx42WQHo
I see. The demand side story. I suppose it is technically feasible but I find it unlikely in the extreme. There is nothing in history to suggest it and I don't think it fits with psychology. History is full of predictions that we won't want for anything once we have 'some foreseen progress'. We've had the luxury of being able to trade some economic growth for more leisure, and still be better off than our grandparents, for a long time now, but haven't.
If the reasons are that we're running out of demand for improvement across the board, and people are more satisfied with their lives, and the technological low-hanging fruit are taken, then economic growth could be lower.
Do you mean lower than it is now? After a paradigm shift in advancement?
I'm not sure how you got from my comments that I don't understand exponential growth. But let me remake the point more clearly. The doubling time of economic output has remained stable at around 15 years. The doubling time of computational processing speed has remained roughly stable at around 24 months. I agree that economic growth in developed economies in the last 20 years has come largely from tech progress. But it has not had an effect on the rate of economic growth.
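A back-of-the-envelope sketch of how differently those two stable doubling times compound, using the approximate 15-year and 24-month figures from this comment:

```python
# Hold both doubling times fixed and compound them over the same
# 20-year window. Figures are the rough ones quoted above.

YEARS = 20
ECON_DOUBLING_YEARS = 15.0
COMPUTE_DOUBLING_YEARS = 2.0

econ_multiple = 2 ** (YEARS / ECON_DOUBLING_YEARS)        # ~2.5x output
compute_multiple = 2 ** (YEARS / COMPUTE_DOUBLING_YEARS)  # 1024x speed

print(f"economy grows ~{econ_multiple:.1f}x in {YEARS} years")
print(f"compute grows ~{compute_multiple:.0f}x in {YEARS} years")
```

Both series are exponential, yet a thousand-fold increase in compute has coincided with only a few-fold increase in output, which is the puzzle: faster computers have not shortened the economy's doubling time.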
I would say that it's only happened because of exponential technological progress; we never had that level of exponential growth until the industrial revolution.
Long-term global growth is achieved only through tech progress. We didn't have this rate of economic growth before the industrial revolution; that's true. It wasn't experienced during the agricultural phase. But foragers didn't enjoy the same growth rate as farmers either. The rate of economic growth has not increased since well before the introduction of computers.
This is my view too. A good portion of the people in my life are addicted to something at any given time by this broader definition. I've experienced short periods of it myself, ranging from gaming to geeking out way too much on a particular topic at the expense of proper food and sleep. I see it as a result of access to an ever-increasing range of pleasure-inducing experiences at ever lower costs, plus hyperbolic discounting - too much of a good thing with blinders to future costs.
On what Gunnar_Zarncke has named Extreme Curiosity: "Be a philosopher; but, amidst all your philosophy, be still a man." -David Hume
As we shift from one paradigm of advancement to another, we may still have exponential growth, but the exponent for the new exponential growth paradigm may be quite different.
Won't the rate of economic growth be different (much larger) by definition? I can't envisage a scenario where economic growth is roughly as it is now, or slower, yet we have experienced anything even approaching a technological singularity. Think of the change in growth rates resulting from the farming and industrial revolutions.
Something to puzzle over is the fact that we have seen computational grunt grow exponentially for decade upon decade, yet economic growth has been stable over the same period.
My experience is from Australia where things are a little different yet the same patterns emerge in returns to these majors.
All Australian universities offer undergraduate business majors, from the top to the bottom. Typically this is a Bachelor of Commerce, in which a student will take intro courses on accounting, finance, economics, management etc and select a major for the remainder of credits. Universities with large econ departments often offer a Bachelor of Economics, alternatively or in addition to Commerce, which covers more ground in econ but less in other areas. The placement of the econ department within a university also differs - it may be in the business school, the social sciences faculty or liberal arts - and the teaching emphasis may differ accordingly. I think this demonstrates that economics is a subject which does not easily combine with others under a broader field of study. Good econ students will often take an additional 'Honours' year, which is essentially masters-level micro, macro and some specialised courses plus a dissertation. The typical post-uni path (for those that don't immediately go on to further study) is graduate programs in the commercial sector (typically commercial banks, professional services firms, insurance etc) or government (typically treasury, finance, reserve bank etc). Of those that go to government, a large proportion end up at banks, insurance, consultancies etc. These programs are typically not open to grads from all majors, and some will be open to only a very narrow list of majors.
It seems to me that all of the hypotheses JonahSinick detailed are correct. The question is to what extent each determines the gap in returns. There is a large existing literature on returns to different majors, and there are surveys of grads that can help. Maybe an extensive lit review would be prudent before digging much further, if only to become familiar with available data and measurement issues.
True, but why the focus on elite colleges and investment banks? I think if you took out all grads employed by investment banks from all of the categories listed in the tables above you'd see the same pattern.
That's true, but employers are often looking for a skill set in addition to potential. Also, the interviewers are more likely to be business-field majors than philosophy grads, and so can more easily evaluate suitability and potential based on a shared set of knowledge.
Finally I realized that non-nerds actually find listening easier than reading.
A lot of nerds listen to podcasts. I'd estimate 80%-plus of my communication is textual, and that includes with family, and I'm a father.
I listen to several hours of podcasts per week. Podcasts aren't two-way communication. Like text, they can be left alone and returned to at will. They can be educational and/or entertaining. My favourite podcasts are those which are mostly other people conversing with each other over some debatable ideas. Audiobooks can be good too. I mostly listen to these during routine work, walking or cycling.
seem to prefer e.g. Skype calls over text chats because (to these people) voice chat requires less energy than writing, and feels like just having a normal conversation and thus effortless, whereas writing is something that requires actually thinking about what you say and thus feels much more laborious
I think most people prefer these modes to text and that we're the exception. There is some positive emotional payoff to hearing and seeing friends and loved ones. I've noticed that many people will Skype or phone when they want to share positive things, but email or text when the message is a negative or confrontational one. Lots of breakups happen via text but I imagine very few marriage proposals do.
Part 2
From #3
The "genie out of the bottle" character of technological progress leads to some interesting possibilities. If suppliers think that future demand will be high, then they'll invest in research and development that lowers the long-run cost of production, and those lower costs will stick permanently, even if future demand turns out to be not too high.
Well, they might invest in R&D. If they can take advantage of increasing returns to scale in meeting demand they'd likely just build more whatevers.
Assuming you like the resulting price reductions, this could be interpreted as an argument in favor of bubbles, at least if you ignore the long-term damage that these might impose on people's confidence to invest.
Interesting, but I think the damage to willingness to invest would be too dear a loss, and I wouldn't assume it away. I don't know how much extra was gained through the tech bubble in terms of tech we wouldn't otherwise have had. The bubble was in speculative financial-market investment, not necessarily in actual real-economy stuff. I'll grant there is probably some crossover. Still, if people see real value in production they will demand those goods. Did we get some gains in housing tech that we wouldn't otherwise have had because of the housing bubble?
The crucial ingredient needed for technological progress is that demand from a segment with just the right level of purchasing power should be sufficiently high. A small population that's willing to pay exorbitant amounts won't spur investments in cost-cutting:
Won't it? I think it will. Lower costs -> higher accounting profits.
In a sense, the market segments willing to pay more are "freeriding" off the others — they don't care enough to strike a tough bargain, but they benefit from the lower prices resulting from the others who do
I don't follow this. Given that they are early adopters, only other early adopters are present in the market at that point in time. How can they be freeriding off consumers who are willing to strike a tough bargain, as you say, if those bargain strikers are not present in the market by definition? Do you mean they benefit later, when the later adopters enter? Presumably the early adopters have by then moved on to something newer and better.
Note, however, that if the willingness to pay for the new population was dramatically lower than that for the earlier one, there would be too large a gap to bridge.
This need not be the case especially in the face of further tech improvements in production or dramatically lower costs of factors of production, even for exogenous reasons.
In particular, the vision of the Singularity is very impressive, but simply having that kind of end in mind 30 years down the line isn't sufficient for commercial investment in the technological progress that would be necessary. The intermediate goals must be enticing enough.
True enough, it seems. And this is the way of things. The prospect of offering transcontinental flight to a European forager millennia ago would have been absurd. A hang-gliding tour, maybe not so much.
From #5
I can broadly support the segments you've categorised. But I disagree with the notion that "Progress in all three areas is somewhat related but not too much. In particular, the middle is the part that has seen the most progress over the last decade or so, perhaps because demand in this sector is most robust and price-sensitive, or because the challenges there are the ones that are easiest to tackle."
What do you mean by progress? Does progress mean more stuff being made in that segment, as in being able to produce more of it? Production technology or the technology product itself? In either case I think they are inextricably linked from the top down.
From #6 and 7
When I first began using computers they were 16k. I was 7. I take your point about the lack of urgency for average PC users to have a great deal more ROM or RAM. But we use computers to do tasks, not for their own sake, and I take your point about complementary tasks. Where I disagree is "On either end, therefore, the incentives for innovation seem low". If the tools are there, people will find ever more awesome ways of using them - that is the history of tech progress!
Keep at it. Happy to go into more detail if it might be useful. I've been thinking a lot lately about the order of realisation of various 'singularity' techs and why it matters. Anyone interested? If so I'll post up my thoughts.
Hi,
First contribution here but I've been lurking since the move from Overcoming Bias. Play nice :) I think this is an important subject matter which few people are giving attention to so I thought I'd offer some comments that might assist.
My comment was deemed too long so I've split it up. If that's bad form let me know. I considered removing quotations of the OP, but it would make it too difficult to read.
If you are going to use pieces of standard microeconomics in future versions of this analysis it might be best to spend a bit more time defining them more clearly and explaining how they map to the subject. Any confusion in the assumptions contained in the theory carries through to the analysis of the case in question. It may well all be clear in your mind, but it doesn't seem, at least to me, to unfold clearly in the text. But it might just be me...
Some examples from #1
But the short run looks very different from the long run. A key difference is that in the short run, suppliers cannot undertake fixed costs of capital investment and research to improve technology. In the long run, they can. Therefore, if some type of good is a lot easier to produce in bulk than in small quantities, then in the long run larger demand would lead to lower prices, because suppliers have lower production costs. For goods that enjoy efficiencies of scale in large-volume production, therefore, the long-run supply curve would be downward-sloping. An industry of this sort is called a decreasing cost industry
As you point out, in the short run producers cannot immediately alter their fixed cost inputs, e.g. new factories (R&D is subtly different in the way it reduces costs, which I'll return to), but in the long run they can - all costs are variable in the long run. So, if there is an increase in demand for good A, and production of good A is characterised by constant or increasing returns to scale, or by economies of scale (these are different concepts with the same result, one referring to fixed proportions of inputs and the other to varying proportions of inputs), then over time we end up with an increase in the quantity of A produced and a decrease in the price of A. But there need not be a downward-sloping supply curve. In the case of constant or increasing returns to scale you just make more A because more A is demanded. In the case of economies of scale you just select, from the envelope of possible short-run average cost curves along the long-run cost curve, the one that meets the quantity demanded and maximises profit.
A decreasing cost industry is something different. Let's say industry i (multiple firms) produces good z. In producing z, firms use inputs c, d and e. Industry i is characterised by economies of scale and experiences an unexpected increase in demand for good z. Over time the firms ramp up with new factories and grow into a medium-sized industry. Their input e is a non-commodity input which, because they are now large bulk purchasers of e, they buy at a discount. All other input costs remain the same in real terms. The result is a lower average cost of production and a downward-sloping industry supply curve. They may in fact be gaining from increasing returns to scale among producers of input e. Car manufacturing is one example.
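If it helps, here's a toy numerical contrast between the two concepts, with entirely made-up numbers (the cost parameters and the bulk-discount threshold are illustrative assumptions, not from the post):

```python
# Toy contrast: economies of scale are a property of one firm's cost
# function, while a decreasing cost industry needs industry-wide
# expansion to lower an input price (the discount on input e above).

def firm_average_cost(q, fixed=1000.0, marginal=2.0):
    """Economies of scale: average cost falls as this one firm's output rises."""
    return fixed / q + marginal

def input_e_price(industry_output):
    """Bulk discount on input e: cheaper only once the whole industry is big."""
    return 5.0 if industry_output < 10_000 else 4.0

# One firm scaling up: its average cost falls from 12 to 3 per unit.
small, large = firm_average_cost(100), firm_average_cost(1000)

# The industry growing past the discount threshold lowers every firm's
# costs, which is what makes the *industry* supply curve slope down.
before, after = input_e_price(5_000), input_e_price(50_000)

print(small, large, before, after)
```

The first function on its own never gives you a decreasing cost industry; it's the second, industry-level effect that does the work in the car-manufacturing style example.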
From #2
The typical explanation for why some industries are decreasing cost industries... The fixed costs of setting up a factory that can produce a million hard drives a year is less than 1000 times the fixed cost of setting up a factory that can produce a thousand hard drives a year.
This again is economies of scale, not a decreasing cost industry. Also, it's just one firm, not an industry. To clarify what I wrote above a little: just because a firm experiences decreasing costs, that does not make its industry a decreasing-cost industry in the way that this special case is treated in microeconomic theory.
I am thinking of questions like: "What happens if demand is high in one year, and then falls? Will prices go back up?" It is true that some forms of investment in infrastructure are durable, and therefore, once the infrastructure has already been built in anticipation of high demand, costs will continue to stay low even if demand falls back. However, much of the long-term infrastructure can be repurposed causing prices to go back up.
Your meaning is not clear to me here. A lot of infrastructure is durable. By "costs will continue to stay low even if demand falls back", do you mean even if demand increases once again? Costs will definitely stay low if demand falls, but it's not clear that that is your meaning. Why would prices go back up if the infrastructure has been re-purposed to some other market? Do you mean that because demand has decreased some firms will exit that market by re-purposing their gear, so those that remain will constitute a smaller market supply (a supply curve shift) and therefore a new, higher equilibrium price will emerge? I wouldn't call the airline route change a re-purposing, but simply selling to different consumers in the same market. The hard-drive-to-flash-memory-chip case I would be more inclined to call re-purposing, but that's related to economies of scope and they'd probably be doing it already. There is already a huge body of theory on this stuff, so maybe instead of using terms like 'time-directionality' and 'efficiency of scale' you could revisit the theory and explain your hypothesis in very standard terminology?
Technology, particularly the knowledge component thereof, is probably an exception of sorts.
I'm far from convinced of this, if by technology you mean modern cutting-edge computation tech, for example. ALL goods ARE technology, from programming languages to aircraft to condoms to corkscrews. Knowledge, for a very long time, has been easy to reproduce. The fixed costs of producing a dead-tree book are huge, but the marginal cost of another copy, once it is written, typeset and first printed, is tiny in comparison.
Consider a decreasing cost industry where a large part of the efficiency of scale is because larger demand volumes justify bigger investments in research and development that lower production costs permanently (regardless of actual future demand volumes). Once the "genie is out of the bottle" with respect to the new technologies, the lower costs will remain — even in the face of flagging demand. However, flagging demand might stall further technological progress.
Again, this is simply how it goes, and always has, for almost all products that at any stage had any value. Where demand flags, think of it as re-purposing knowledge.
Thus, people who have been in the business for ten years enjoy a better price-performance tradeoff than people who have been in the business for only three years, even if they've been producing the same amount per year.
This is often referred to as the learning curve in microeconomics. Think about a firm whose production of a good is characterised by increasing returns to scale. Its average cost will decrease as it increases production. Over that same period it gets better and faster - removing bottlenecks of various kinds through optimisation of its processes, because it becomes more intimate with its production problems via experience and finds ever more partial solutions. Whilst the decrease in average cost due to economies of scale is a movement along the long-run average cost curve, the decrease in average cost due to this learning is a shift of the long-run average cost curve. This is also routine, as one would imagine, and a good example is a potato chip factory.
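One common way to formalise this learning effect is Wright's law, where unit cost falls by a fixed fraction each time cumulative output doubles. A minimal sketch, with illustrative numbers (the 80% learning rate, starting cost and volumes are assumptions, not from the post):

```python
import math

def unit_cost(cumulative_units, first_unit_cost=100.0, learning_rate=0.8):
    """Wright's law: each doubling of cumulative output multiplies unit cost
    by learning_rate (here an assumed 0.8, i.e. a 20% drop per doubling)."""
    b = -math.log(learning_rate, 2)  # progress exponent
    return first_unit_cost * cumulative_units ** (-b)

# Two firms producing the same annual volume (1000 units/year), but one
# has ten years of cumulative experience and the other only three.
veteran = unit_cost(10 * 1000)
newcomer = unit_cost(3 * 1000)

print(f"veteran unit cost:  {veteran:.2f}")
print(f"newcomer unit cost: {newcomer:.2f}")
```

The cost advantage here depends only on cumulative output, not on current scale, which is exactly the "ten years in the business versus three" point: the curve shifts with experience even when annual production is identical.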