The President's Council of Advisors on Science and Technology is soliciting ideas
post by Kevin · 2010-07-12T23:41:04.746Z · LW · GW · Legacy · 83 comments
The question that the ideas are supposed to be in response to is:
"What are the critical infrastructures that only government can help provide that are needed to enable creation of new biotechnology, nanotechnology, and information technology products and innovations -- a technological congruence that we have been calling the "Golden Triangle" -- that will lead to new jobs and greater GDP?"
Here are links to some proposed ideas that you should vote for, assuming you agree with them. You do have to register to vote, but the email confirmation arrives right away, and the whole process shouldn't take much more than two minutes of your time. Why should you do this? The top-voted ideas from this request will be seen by some of the top policy advisers in the USA. They probably won't do anything like immediately convene a presidential panel on AGI, but we are letting them know that these things are really important.
Research the primary cause of degenerative diseases: aging / biological senescence
Explore proposals for sustaining the economy despite ubiquitous automation
Establish a Permanent Panel or Program to Address Global Catastrophic Risks, Including AGI
Does anyone have any other ideas? Feel free to submit them directly to IdeaScale, but it may be better to post them first in the comments here for discussion.
83 comments
Comments sorted by top scores.
comment by Kevin · 2010-07-20T05:17:31.955Z · LW(p) · GW(p)
Thanks LW. Aging was the #2 most popular idea and catastrophic/AGI risk was the #3 idea.
The council met on July 16th. Would anyone like to watch the webcast of the July meeting and see if they mentioned these ideas?
http://www.whitehouse.gov/administration/eop/ostp/pcast
ETA: They didn't.
Replies from: VNKKET, Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-07-20T07:13:28.330Z · LW(p) · GW(p)
Probably not, it's not on the agenda and they have no incentive to discuss it.
comment by twanvl · 2010-07-13T17:52:52.959Z · LW(p) · GW(p)
An important goal could be to increase rationality in the world. I believe that would do more good in the long term than trying to solve aging. The best way to achieve that through this suggestion box would be an idea that is simple and direct and hard to get wrong.
For the first two points (simple and direct), better education would help. How about improving science classes to include actual science -- that is, learning to change your mind and to look at evidence? However, I fear this will be very hard to get right, and hard to get teachers to accept. Does anyone have other suggestions for improving rationality?
Replies from: Roko, NancyLebovitz
↑ comment by Roko · 2010-07-13T17:57:18.933Z · LW(p) · GW(p)
Retroviral genetic engineering once we know what genes control rationality.
It has the advantage that it would be in people's self-interest to do this. I suspect that some kind of individually beneficial modification is the solution.
Replies from: Hook, sketerpot, twanvl, mattnewport
↑ comment by Hook · 2010-07-15T13:18:02.517Z · LW(p) · GW(p)
Psychosurgery or pharmaceutical intervention to encourage some of the more positive autistic spectrum cognitive traits seems more likely to work than this. We are far from identifying the genetic basis of intelligence or exceptional intelligence, never mind an aspect as specific as rationality.
It's also not clear that it is in someone's self-interest to do this. I know you said retroviral genetic engineering, but for now I'll assume that it would only be possible in embryos. In that case, if someone really wanted grandchildren, it is not clear that making these alterations in their children would be the best way to achieve that goal.
↑ comment by sketerpot · 2010-07-13T19:07:26.650Z · LW(p) · GW(p)
Retroviral genetic engineering once we know what genes control rationality.
Would those genes have time to be expressed later in life?
Also, how heritable is rationality? How do we even measure it, to find out? And if we find some subset of our genes which can be reasonably tweaked to enhance rationality, I'm guessing that they would probably affect people's capability to be rational, in a bunch of indirect ways. There would have to be a memetic component, alongside any genetic tweaks we make. I think.
↑ comment by twanvl · 2010-07-13T18:20:38.842Z · LW(p) · GW(p)
That sounds a bit too long-term, though. By the time that becomes viable and socially acceptable (if it ever becomes so), we will be many presidencies into the future.
Also, for people to recognize that rationality enhancement is in their best interest you need some baseline level of rationality.
↑ comment by mattnewport · 2010-07-13T18:01:21.654Z · LW(p) · GW(p)
Do you know of any evidence suggesting strong heritability of rationality?
Replies from: Roko
↑ comment by Roko · 2010-07-13T18:21:21.541Z · LW(p) · GW(p)
Most things are heritable. IQ, for example, has a heritability coefficient of about 0.4.
Replies from: mattnewport
↑ comment by mattnewport · 2010-07-13T18:27:33.640Z · LW(p) · GW(p)
I'm skeptical of a strong heritability for rationality independent of IQ. Your original statement suggested to me that you knew of stronger evidence than 'most things are heritable'. If you don't know of any such evidence I'm inclined to remain firmly on the fence.
Replies from: NancyLebovitz, Roko
↑ comment by NancyLebovitz · 2010-07-13T23:03:27.729Z · LW(p) · GW(p)
Have you read The Millionaire Next Door? The book is about people of very ordinary intelligence who are more rational about money than many smarter people.
Replies from: mattnewport, SilasBarta
↑ comment by mattnewport · 2010-07-13T23:51:53.069Z · LW(p) · GW(p)
I haven't read it but I don't think this kind of example talks directly to the question of whether rationality is a strongly heritable trait independent of IQ. My current hypothesis (not strongly held or supported by large amounts of evidence) is that rationality is more a learned skill or habit of thinking which will tend to correlate with IQ (because higher IQ people will learn it easier/faster and apply it better) but that some high IQ people have failed to learn it and some lower IQ people have become quite good at it.
Examples of lower IQ people who are more rational than higher IQ people do not on their own help to distinguish whether rationality is a separately heritable trait from IQ or a learned habit of thinking.
I would not be hugely surprised if certain big 5 personality traits or other potentially heritable personality traits made people more inclined to learn rational thinking which might provide an indirect basis for heritability of rationality.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-07-14T01:18:14.729Z · LW(p) · GW(p)
I wasn't reading carefully, so I just offered evidence that rationality is somewhat independent of IQ.
↑ comment by SilasBarta · 2010-07-13T23:15:47.927Z · LW(p) · GW(p)
Didn't the people in that book get rich by saving a lot and investing aggressively for the long term?
How's that strategy working out?
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-07-13T23:19:26.517Z · LW(p) · GW(p)
I don't know of any follow-ups.
They invested conservatively, not aggressively. I expect they're better off than people who got heavily into debt, but probably not as well off as some people who were insiders enough to not lose too much when they made bad investments for other people.
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-07-14T13:44:02.169Z · LW(p) · GW(p)
I meant aggressively in the sense of well-diversified and stock-heavy (hence the "long-term" bit). If they got rich off of bond interest, well, it wasn't investment acumen that explains their success, but a) raw earning power, and b) not spending it all.
"Assume a high income" is not all that helpful.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-07-14T23:22:43.428Z · LW(p) · GW(p)
Unfortunately, exactly what they invested in wasn't something I paid close attention to, and I don't remember it.
Generally, they had fairly ordinary incomes, and they invested in things which were considered low-risk at the time. A fair number of them had real estate in the sense of owning car dealerships (used car lots?), with the land under the business being a large part of their wealth.
They disliked spending money. It was common for them to be men whose wives made a full-time job of running the household cheaply. (There was a later book called The Millionaire Woman Next Door.)
↑ comment by Roko · 2010-07-13T18:43:15.608Z · LW(p) · GW(p)
remain firmly on the fence.
But presumably you have a prior probability distribution over the heritability parameter?
Replies from: Hook, mattnewport
↑ comment by Hook · 2010-07-13T19:41:21.831Z · LW(p) · GW(p)
For heritability, I think rationality is closer to reading than it is to intelligence.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-07-13T23:12:59.747Z · LW(p) · GW(p)
How heritable is reading?
Replies from: Hook
↑ comment by Hook · 2010-07-14T12:43:44.997Z · LW(p) · GW(p)
For the time being, I'll just consider literacy as a binary quality, leaving aside differences in ability. In developed countries, with literacy rates around 99%, literacy is probably somewhat heritable, because the <1% who cannot read typically cannot do so because of some sort of learning disability with a heritable component.
In Mali, with a 26.2% literacy rate, literacy is not very heritable; the illiterate there are a consequence of a lack of educational opportunities. I think that the situation we are in regarding the phenotype of "rational" is closer to the Mali scenario than to the developed-world scenario.
↑ comment by mattnewport · 2010-07-13T19:11:09.420Z · LW(p) · GW(p)
You mean in general or for rationality in particular? Lots of things are heritable, to varying (and often disputed) extents. I tend to think genetic factors are often underestimated when explaining human variability. I'm not familiar enough with the evidence for heritability of other high level cognitive abilities to make a very good estimate for the heritability of rationality however.
I've just bought What Intelligence Tests Miss for my Kindle after reading the article series here. As I said, I'm skeptical that rationality as an independent factor from IQ is strongly heritable but I'm open to evidence to the contrary which is why I was curious if you had any.
Replies from: Roko
↑ comment by Roko · 2010-07-13T19:19:29.125Z · LW(p) · GW(p)
You should still have a prior. "I don't have enough detailed info" is not an excuse for not having a prior.
Why not just take the probability distribution of heritability coefficients for traits-in-general as your prior?
Replies from: mattnewport
↑ comment by mattnewport · 2010-07-13T20:19:15.984Z · LW(p) · GW(p)
You should still have a prior. "I don't have enough detailed info" is not an excuse for not having a prior.
No, it's not, but I think it's a reasonable excuse for not having a more specific prior than 'low and uncertain'. Being more specific in my prior would not be very useful without being more specific about what exactly the question is. I sometimes see a tendency here to overconfidence in estimates simply because a bunch of rather arbitrary priors have been multiplied together and produced a comfortingly precise number.
Why not just take the probability distribution of heritability coefficients for traits-in-general as your prior?
I don't know what it is. I suspect it is not a well established piece of information. I'm not convinced that heritability for 'traits-in-general' is a good basis for rationality in particular. Do you have a reference for a good estimate for this distribution?
Replies from: Roko
↑ comment by Roko · 2010-07-13T20:30:07.125Z · LW(p) · GW(p)
I feel that people who refuse to give a numerical prior and use protestations of ignorance (that can be cured with a 5-second google search) as an excuse to say "very low" are really engaging in motivated cognition, usually without realizing.
Whenever one says "I don't have much info so I think the probability of X is really low", one should ask oneself:
(1) Would I apply the same argument to ~X ? A "very low" prior for X implies a very high prior for ~X. Am I exploiting the framing of using X rather than ~X to engage in motivated skepticism?
(2) Has it even crossed my mind to think about how easy it might be to find the info, and if not, am I willfully clinging to ignorance in order to avoid having to give up a cherished belief?
Replies from: mattnewport, Vladimir_Nesov
↑ comment by mattnewport · 2010-07-13T20:53:06.783Z · LW(p) · GW(p)
I never qualified 'low' with 'very' or 'really'. If numbers make you feel better, 'low' means roughly 1-10% probability. I find it a little backwards when someone focuses so much on precisely quantifying an estimate like this before the exact question is even clarified. As a programmer, I see this a lot from non-technical managers.
I started this thread by asking for the information you were using to arrive at your (implied) high confidence in a genetic basis for rationality. There's been several recent articles about What Intelligence Tests Miss and I haven't started reading it yet (though it is now sitting on my Kindle) so I was already thinking about whether rationality as a separate trait from IQ is a distinct and measurable trait. I haven't seen enough evidence yet to convince me that it is so your implication that it is and is strongly heritable made me wonder if you were privy to some information that I didn't know.
While assigning numerical probabilities to priors and doing some math can be useful for certain problems I don't think it is necessarily the best starting point when investigating complex issues like this with actual human brains rather than ideal Bayesian reasoning agents. I'm still in data gathering mode at the moment and don't see a great benefit to throwing out exact priors as a badge of Bayesian honor.
Replies from: Roko
↑ comment by Roko · 2010-07-13T21:10:36.543Z · LW(p) · GW(p)
means roughly 1-10% probability
of what threshold heritability coefficient?
Really, you should give a set of probabilities: P(H>0.01), P(H>0.02), P(H>0.03), ..., P(H>0.98), P(H>0.99).
A reasonable distribution might be some quasi-linear function?
Given that memory, verbal intelligence, spatial reasoning and general intelligence all have values of H of around 0.4, it seems that P(H>0.3) ~ 70%
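Roko's suggested exercise can be sketched concretely. Assuming, purely for illustration, that the heritability coefficient H of an arbitrary cognitive trait is drawn from a Beta(2, 3) prior (an assumption chosen here only because its mean, 0.4, matches the figure quoted above for IQ and related traits), the implied tail probabilities come out as follows:

```python
# Illustrative prior over the heritability coefficient H of an arbitrary
# cognitive trait. The Beta(2, 3) choice is an assumption made for this
# sketch; its mean of 0.4 matches the ~0.4 figure quoted for IQ, memory, etc.

def beta23_sf(h):
    """P(H > h) for H ~ Beta(2, 3), using the closed-form CDF:
    CDF(h) = 6h^2 - 8h^3 + 3h^4."""
    return 1 - (6 * h**2 - 8 * h**3 + 3 * h**4)

for h in (0.1, 0.3, 0.5):
    print(f"P(H > {h}) = {beta23_sf(h):.2f}")
```

Under this (arbitrary) prior, P(H > 0.3) comes out at about 0.65, in the same ballpark as the ~70% figure above.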
Replies from: mattnewport
↑ comment by mattnewport · 2010-07-13T21:29:15.960Z · LW(p) · GW(p)
of what threshold heritability coefficient?
This is what I meant about quantifying the probability estimate before clarifying the exact question. As I said originally, I'm skeptical of a strong heritability for rationality independent of IQ. I'm not sure what the correct statistical terminology is for talking about this kind of question. I think there is a low probability that a targeted genetic modification could increase rationality independent of IQ in a significant and measurable way, but that belief doesn't map in a straightforward way onto a claim about the heritability of rationality. I'm expecting What Intelligence Tests Miss to help clarify my thinking about what kind of test could even be used to reliably separate a 'rationality' cognitive trait from IQ, which would be a necessary precondition to measuring the heritability of rationality.
Given that memory, verbal intelligence, spatial reasoning and general intelligence all have values of H of around 0.4, it seems that P(H>0.3) ~ 70%
These all correlate significantly with IQ, however, I believe (correct me if you think I'm wrong on this). It's at least plausible that targeted genetic modifications could improve, say, spatial or verbal reasoning significantly more than IQ (perhaps by lowering scores in other areas), since there is some evidence of sex differences in these traits. Rationality seems more like a way of reasoning and a higher-level trait than these 'specialized' forms of intelligence, however.
Replies from: Roko
↑ comment by Roko · 2010-07-13T23:09:55.291Z · LW(p) · GW(p)
Rationality seems more like a way of reasoning and a higher level trait than these 'specialized' forms of intelligence however.
Maybe. Actually, I think that the dominant theory around here is that rationality is actually the result of an atrophied motivated-cognition module, so perfect rationality is not a question of creating a new brain module, but subtracting off the distorting mechanisms that we are blighted with.
Replies from: Hook
↑ comment by Hook · 2010-07-14T14:00:30.295Z · LW(p) · GW(p)
I realize that "brain module" != "distinct patch of cortex real estate", but have there been any cases of brain damage that have increased a person's rationality in some areas?
I am aware that depression and certain autism spectrum traits have this property, but I'm curious if physical trauma has done anything similar.
↑ comment by Roko · 2010-07-14T14:06:54.417Z · LW(p) · GW(p)
I don't know, but without a standardized test for rationality (like there is for IQ), how would we even notice?
Googling for "can brain injury cause autism" leads to conflicting info:
"This is a question which arises again and again, it's other form is 'Can brain injury cause autism?' Of course the answer is most definitely, yes!"
"Blows to the head, lack of oxygen, and other physical trauma can certainly cause brain damage. Brain damaged children may have behaviors similar to those of autistic children. But physical injury cannot cause accurately diagnosed autism. Certainly a few non-traumatic falls in infancy are NOT the cause of autism in a toddler."
↑ comment by Roko · 2010-07-14T14:09:16.966Z · LW(p) · GW(p)
To test this, you'd need to somehow identify a group of patients that were going to receive some kind of very specific brain surgery, and give them a pre- and post- rationality test.
Replies from: Hook, Roko
↑ comment by Hook · 2010-07-14T14:57:12.169Z · LW(p) · GW(p)
At this point I was mostly wondering if there were any motivating anecdotes such as Phineas Gage or gourmand syndrome, except with a noticeable personality change towards rationality. Someone changing his political orientation, becoming less superstitious, or gambling less as a result of an injury could be useful (and, as a caveat, all could be caused by damage that has nothing to do with rationality).
Replies from: Roko
↑ comment by Roko · 2010-07-14T15:27:51.037Z · LW(p) · GW(p)
This paper on schizophrenia is interesting.
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-07-14T15:53:42.013Z · LW(p) · GW(p)
...because of the following insights ____.
↑ comment by Roko · 2010-07-14T14:24:06.967Z · LW(p) · GW(p)
And even then, you would only expect 1 in 50 or so kinds of brain surgery to remove the part that caused (say) motivated cognition, and only 1 in 5 or so of those to do little enough damage that you could actually detect the positive effect.
Better, use high-precision real-time brain imaging to image somebody's brain when motivated cognition is happening, then use high-precision TMS to switch just that part off.
↑ comment by Vladimir_Nesov · 2010-07-13T21:22:37.201Z · LW(p) · GW(p)
You can apply laws of probability to intuitive notions of plausibility as well (and some informal arguments won't be valid if they violate these laws, like both X and ~X being unexpected). Specific numbers don't have to be thought up to do that.
↑ comment by NancyLebovitz · 2010-07-13T22:56:59.846Z · LW(p) · GW(p)
How about teaching self-experimentation?
This is interesting about creativity and project-oriented education.
comment by timtyler · 2010-07-13T06:26:56.614Z · LW(p) · GW(p)
Regarding the second point, here is my 2p on "The Lights in the Tunnel":
"The Lights in the Tunnel" is a whole book about a topic I am interested in: the effects of automation. However, there is a serious flaw that pervades the book's whole analysis:
Martin argues that the economy will crash - as machines take the jobs of consumers, they no longer have any money to spend on things - and cash flows spiral downwards.
Martin says: "Another way of expressing this is to say that although machines may take over people’s jobs, the machines - unless we are really going to jump into the stuff of science fiction - do not participate in the market as consumers" - page 24.
However, machines still participate in the market indirectly - via people. Humans buy fuel, spare parts, add-ons and "consumables" for their machines. Machines still "consume" - even if they don't have bank accounts and can't go to the shops. The resulting effect on the economy is much the same as if the machines were themselves consumers.
This seemingly-simple point destroys much of the book's DOOM-mongering analysis. There are some other good things in the book - but IMO, the author damages the reader's view of his competence by making this kind of mistake.
Replies from: mford, Richard_Kennaway
↑ comment by mford · 2010-07-14T10:49:59.957Z · LW(p) · GW(p)
Hi,
This is Martin Ford, the author of The Lights in the Tunnel. I just wanted to respond to your comment here:
If the average consumer is unemployed and has no income, he is obviously not going to be purchasing stuff for his machines. In fact, ownership of the machines will concentrate into a shrinking elite as machines take the jobs of average people.
Remember that the focus here is on what we might call "end consumption." If you consider GDP, consumer spending is about 70% of that. It is important to note that that is only END consumption by PEOPLE.
When General Motors purchases steel for its cars that is not consumption and is NOT added to GDP. The value of the steel gets accounted for ONLY when someone buys a car. The same argument applies to this idea of machines using resources. If the machines are used in production--and if they replace human workers--then what the machines "consume" is not END consumption and does not drive the economy. It is intermediate consumption. A PERSON still has to purchase the end product. Machines cannot do this. If too few people have the ability to purchase END products, the mass market economy will collapse.
Everything produced by the human economy is ultimately consumed by individual human beings. This applies even to government spending since the services provided by government are consumed by people. The only other factor is business investment, and that occurs in response to anticipated future consumer spending--businesses will invest only if they anticipate future demand.
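The accounting distinction Ford is drawing can be illustrated with a toy example (all numbers invented): under the expenditure approach to GDP, an intermediate purchase such as steel is not added to GDP; only the final sale to a consumer is, which avoids double-counting the steel's value.

```python
# Toy illustration of final vs. intermediate consumption (invented numbers).
steel_sold_to_carmaker = 5_000   # intermediate input: not added to GDP
car_sold_to_consumer = 25_000    # final (end) consumption: added to GDP

# The steel's value is already embedded in the car's price, so counting
# both would double-count it. GDP records only the final sale.
gdp_contribution = car_sold_to_consumer
carmaker_value_added = car_sold_to_consumer - steel_sold_to_carmaker

print(gdp_contribution)      # 25000
print(carmaker_value_added)  # 20000
```

This is why, on Ford's argument, machines "consuming" inputs during production cannot substitute for end consumption by people.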
Anyone who is interested can read the book for free in PDF at http://www.thelightsinthetunnel.com. If you prefer to buy the book at Amazon, I would like that even better. ;-)
Replies from: Roko, timtyler
↑ comment by Roko · 2010-07-14T11:20:53.734Z · LW(p) · GW(p)
Martin,
I hate to seem rude about you or your book, which I must commend you for going to the time and trouble of writing and publishing, but it seems that the thesis presented is contradicted by a simple piece of economics.
To see the mistake, look at this paragraph summarizing the chapter on "Make Work Bias" from The Myth of the Rational Voter
Make-work bias
Caplan refers to the make-work bias as a “tendency to underestimate the economic benefits from conserving labor.” (p. 40) People tend to equate economic growth with job creation, even if those jobs are wasteful or outright detrimental to growth. Economists argue that this is precisely wrong: growth comes from increases in the productivity of labor. The resulting increase in productivity, ceteris paribus, causes people to be dismissed from existing jobs and encourages them to seek more socially valuable opportunities. Caplan places special emphasis on the movement away from farming over the past two hundred years—from 95% of Americans as farmers to just 3%—as an illustrative example. Those millions who are no longer farming can be employed to make iPods, maintain communication networks, and run restaurants.
Just because the average consumer is unemployed doesn't mean he has no income. For example, he might own shares in robot companies that pay a big dividend. Why would they pay a big dividend? Because, by assumption, the robot company makes extremely useful products. The person might also get a government handout, which would be financed by heavy government taxes on these extremely productive robot/AI companies.
Furthermore, just because Robots and AI-systems can do all of the pragmatically necessary work, it doesn't follow that no-one will want a human to do something. For example, you might want a real human musician rather than a robot, etc.
In this happy state of affairs, people would probably cease to be motivated by money, and instead be motivated in part by pursuit of other scarce resources, such as the time of people who had rare and desirable traits, or by "making a difference to the world"-type considerations. Though I suspect that most of the motivation would come from just wanting to go and have a good time. Drinks are served, time to party, etc.
Replies from: timtyler, khafra
↑ comment by timtyler · 2010-07-14T20:07:11.157Z · LW(p) · GW(p)
Re: "Just because the average consumer is unemployed doesn't mean he has no income. For example, he might own shares in robot companies that pay a big dividend. Why would they pay a big dividend? Because, by assumption, the robot company makes extremely useful products."
Most depend for their income on revenue from work - or government handouts. Few can live for long off interest on their savings. Increased production levels in the future are unlikely to make all that much difference to that situation. Martin makes a broadly similar point here.
Martin is also already well aware of the possibility of government handouts supported by taxation:
"In The Lights in the Tunnel, I argue that we will ultimately have to provide supplementary income to the majority of the population; if we don’t do so, we won’t be able to sustain consumption. That type of scheme, obviously, would have to be supported by some type of taxation ..." - source
Replies from: Roko
↑ comment by Roko · 2010-07-14T20:18:17.753Z · LW(p) · GW(p)
Few can live for long off interest on their savings.
Why? If your savings of $50,000 start yielding 1000% returns per year, then I don't see a problem.
Replies from: Vladimir_M, timtyler
↑ comment by Vladimir_M · 2010-07-14T21:16:18.106Z · LW(p) · GW(p)
We've been through this before. Very high returns by themselves give no guarantee that you'll be able to live off the interest on a modest amount, since the price of whatever you require for subsistence may be increasing at an even higher rate.
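Vladimir_M's point can be put numerically with an invented scenario: what matters is not the nominal return but the return relative to the price of one's own consumption basket. Suppose (all rates invented for this sketch) savings earn 1000% per year, as in Roko's example, but subsistence prices grow 1500% per year:

```python
# Invented scenario: huge nominal returns, even faster-growing subsistence costs.
savings = 50_000.0           # Roko's example figure
nominal_return = 10.0        # 1000% per year
subsistence_cost = 20_000.0  # assumed current annual cost of subsistence
cost_growth = 15.0           # assumed 1500%/year growth in subsistence prices

for year in range(1, 4):
    savings *= 1 + nominal_return
    subsistence_cost *= 1 + cost_growth
    print(f"year {year}: savings cover {savings / subsistence_cost:.1f} "
          "years of subsistence")
```

In this scenario the savings cover about 1.7 years of subsistence after year one, but only about 0.8 years after year three: astronomical nominal returns, yet the saver is getting poorer in real terms.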
Replies from: timtyler, Roko
↑ comment by timtyler · 2010-07-15T08:05:08.436Z · LW(p) · GW(p)
I generally approve of this analysis.
In a competition for limited resources, those relying on interest payments from capital seem likely to get progressively poorer - relative to those with interest payments from capital and income from work - even if the work income is taxed and redistributed.
Yes, the poor can vote in tax increases - but the rich can lobby the government, perform tax dodges, seek out tax havens - and so on. I know which position I would rather have.
↑ comment by Roko · 2010-07-14T21:27:53.497Z · LW(p) · GW(p)
Hang on -- before we were assuming that the Robots (ems) were consumers. Here we're assuming the opposite, that humans and only humans consume. Therefore the consumption basket can't go haywire.
Actually one way things could go wrong would be if an "elite" group of humans took the place of ems, and consumed 99.999999999999999999999% of output. So in order for things to be OK, economic disparity has to remain non-insanely-high. But even the modest taxes that we have today, plus wealth redistribution, would ensure this, and it seems that there would be stronger incentives to increase wealth redistribution than to decrease it.
Replies from: Vladimir_M, timtyler
↑ comment by Vladimir_M · 2010-07-14T22:37:41.479Z · LW(p) · GW(p)
Roko:
Hang on -- before we were assuming that the Robots (ems) were consumers. Here we're assuming the opposite, that humans and only humans consume. Therefore the consumption basket can't go haywire.
What counts as "consumption" is a matter of definition, not fact. Even if you book the "consumption" by machines as capital investment or intermediate goods purchases, it's still there, and if machines play an increasingly prominent role, it can significantly influence the prices of goods that humans consume. With machines that approach human levels of intelligence and take over increasingly intelligent and dexterous human jobs, this difference will become an increasingly fictional accounting convention.
Land rent is another huge issue. Observe the present situation: food and clothing are nowadays dirt cheap, and unlike in the past, starving or having to go around without a warm coat in the winter are no longer realistic dangers no matter how impoverished you get. Yet, living space is not much more affordable relative to income than in the past, and becoming homeless remains a very realistic threat. And if you look at the interest rates versus prices, you'll find that the interest on a fairly modest amount would nowadays be enough to feed and clothe yourself adequately enough to survive -- but not to afford an adequate living space. (Plus, the present situation isn't that bad because you can loiter in public spaces, but in a future of soaring land rents, these will likely become much more scarce. Humans require an awful lot of space to subsist tolerably.)
So in order for things to be OK, economic disparity has to remain non-insanely-high.
When it comes to the earnings from rent and interest, the present economic disparity is already insanely high. What makes it non-insanely-high overall is the fact that labor can be sold for a high price -- and we're discussing the scenario where this changes.
Replies from: Roko
↑ comment by Roko · 2010-07-14T23:55:58.919Z · LW(p) · GW(p)
I'll certainly agree that poorer humans might run out of land that's all owned by a few rich humans. If the value of labor dropped to zero, then land ownership would become critically important, as land is one of the few resources that are essentially not producible, and therefore the who-owns-the-land game is zero-sum.
But is land really unproducible in this scenario? Remember, we're assuming very high levels of technology. Maybe the poorer humans would all end up as seasteaders or desert dwellers?
What about the possibility of producing land underground?
What about producing land in space?
The bottom line seems to be that our society will have to change drastically in many ways, but that the demise of the need for human labor would be a good thing overall.
Replies from: Vladimir_M
↑ comment by Vladimir_M · 2010-07-15T01:16:49.531Z · LW(p) · GW(p)
Roko:
But is land really unproducable in this scenario? Remember, we're assuming very high levels of technology. Maybe the poorer humans would all end up as seasteaders or desert dwellers?
What about the possibility of producing land underground?
What about producing land in space?
Given a well-organized and generous system of redistribution, the situation actually wouldn't be that bad. Despite all the silly panicking about overpopulation, the Earth is a pretty big place. To get some perspective, at the population density of Singapore, ten billion people could fit into roughly 1% of the total world land surface area. This is approximately the size of the present-day Mongolia. With the population density of Malta -- hardly a dystopian metropolis -- they'd need about 5% of the Earth's land, i.e. roughly the area of the continental U.S.
Therefore, assuming the powers-that-be would be willing to do so, in a super-high-tech regime several billion unproductive people could be supported in one or more tolerably dense enclaves at a relatively low opportunity cost. The real questions are whether the will to do so will exist, what troubles might ensue during the transition, and whether these unproductive billions will be able to form a tolerably functional society. (Of course, it is first necessary to dispel the delusion -- widely taken as a fundamental article of faith among economists -- that technological advances can never render great masses of people unemployable.)
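The enclave arithmetic above is easy to check. A quick sketch, using rough circa-2010 figures (the densities and land area here are approximate values assumed for the estimate):

```python
# Back-of-the-envelope check of the enclave figures. All inputs are rough
# circa-2010 approximations: world land ~149M km^2, Singapore ~7,100
# people/km^2, Malta ~1,300 people/km^2.
WORLD_LAND_KM2 = 149e6
POPULATION = 10e9  # the hypothetical ten billion people

densities = {"Singapore": 7_100, "Malta": 1_300}  # people per km^2

for place, density in densities.items():
    area_needed = POPULATION / density            # km^2 required
    share = area_needed / WORLD_LAND_KM2          # fraction of world land
    print(f"At {place}'s density: {area_needed / 1e6:.1f}M km^2 "
          f"(~{share:.0%} of world land)")
```

This reproduces the figures in the comment: about 1.4M km^2 (~1% of world land, roughly Mongolia's 1.56M km^2) at Singapore's density, and about 7.7M km^2 (~5%, roughly the continental U.S.) at Malta's.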
Now, you write:
The bottom line seems to be that our society will have to change drastically in many ways, but that the demise of the need for human labor would be a good thing overall.
I'm not at all sure of that. I hate to sound elitist, but I suspect that among the common folk, a great many people would not benefit from the liberation from the need to work. Just look at how often lottery winners end up completely destroying their lives, or what happens in those social environments where living off handouts becomes the norm. It seems to me that many, if not most people need a clear schedule of productive work around which they can organize their lives, and lacking it become completely disoriented and self-destructive. The old folk wisdom that idle hands are the devil's tools has at least some truth in it.
This is one reason why I'm skeptical of redistribution as the solution, even under the assumption that it will be organized successfully.
Replies from: timtyler, Roko↑ comment by timtyler · 2010-07-15T07:30:57.131Z · LW(p) · GW(p)
Re: "Therefore, assuming the powers-that-be would be willing to do so, in a super-high-tech regime several billion unproductive people could be supported in one or more tolerably dense enclaves at a relatively low opportunity cost. The real questions are whether the will to do so will exist, what troubles might ensue during the transition, and whether these unproductive billions will be able to form a tolerably functional society."
Organic humans becoming functionally redundant is likely to be the beginning of the end for them. They may well be able to persist for a while as useless parasites on an engineered society - but any ultimate hope for becoming something other than entities of historical interest would appear to lie with integration into that society - and that would take a considerable amount of "adjustment".
↑ comment by Roko · 2010-07-15T01:37:40.239Z · LW(p) · GW(p)
It seems to me that many, if not most people need a clear schedule of productive work around which they can organize their lives, and lacking it become completely disoriented and self-destructive.
I think that that would just be another service or product that people purchased. Be it in the form of cognitive enhancement, voluntary projects or hobbies, etc. In fact lottery winners simply suffer from not being numerous enough to support a lottery-winner rehabilitation industry.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-07-15T03:59:12.488Z · LW(p) · GW(p)
I agree that such optimistic scenarios are possible; my gloomy comments aren't meant to prophesy certain doom, but rather to shake what I perceive as an unwarrantably high level of optimism and lack of consideration for certain ugly but nevertheless real possibilities.
Still, one problem I think is particularly underestimated in discussions of this sort is how badly the law of unintended consequences can bite whenever it comes to the practical outcomes of large-scale social changes and interventions. This could be especially relevant in future scenarios where the consequences of the disappearing demand for human labor are remedied with handouts and redistribution. Even if we assume that such programs will be successfully embarked upon (which is by no means certain), it is a non-trivial question what other conditions will have to be satisfied for the results to be pretty, given the existing experiences with somewhat analogous situations.
↑ comment by timtyler · 2010-07-15T07:22:09.893Z · LW(p) · GW(p)
Re: "before we were assuming that the Robots (ems) were consumers. Here we're assuming the opposite, that humans and only humans consume."
More accurately, Martin Ford was assuming that - and I was pointing out that trucks, fridges, washing machines, etc. are best modelled as consumers too - since they consume valuable low-entropy resources - and spit out useless waste products.
The idea that machines don't participate in the economy as consumers is not a particularly useful one. Machines - and companies - buy things, sell things, consume things - and generally do participate. Those machines that don't buy things have things bought for them on their behalf (by companies or humans) - and the overall effect on economic throughput is much the same as if the machines were buying things themselves.
If you really want to ignore direct consumption by machines - and pretend that the machines are all working exclusively for humans, doing our bidding precisely - then you have GOT to account for people and companies buying things for the machines that they manage - or your model badly loses touch with reality.
In practice, it is best to just drop the assumption. Computer viruses / chain letters are probably the most obvious illustration of the problem with the idea that machines are exclusively "on our side", labour on our behalf, and have no interests of their own.
The mis-handling of this whole issue is one of the problems with "The Lights in the Tunnel".
Replies from: Hook↑ comment by Hook · 2010-07-15T12:48:24.221Z · LW(p) · GW(p)
Would this analysis apply to the ecosystem as a whole? Should we think of fungus as consuming low entropy plant waste and spitting out higher entropy waste products? Is a squirrel eating an acorn part of the economy?
Machines, as they currently exist, have no interests of their own. Any "interests" they may appear to have are as real as the "interest" gas molecules have in occupying a larger volume when the temperature increases. Computer viruses are simply a way that machines malfunction. The fact that machines are not exclusively on our side simply means that they do not perfectly fulfill our values. Nothing does.
Replies from: timtyler↑ comment by timtyler · 2010-07-15T20:03:21.829Z · LW(p) · GW(p)
Not without some changes; yes - and: not part of the human economy.
Various machines certainly behave in goal-directed ways - and so have what can usefully be described as "vested interests" - along the lines described here:
http://en.wikipedia.org/wiki/Vested_interest
Can you say what you mean by "interests"? Probably any difference of opinion here is a matter of differing definitions - and so is not terribly interesting.
Re: "The fact that machines are not exclusively on our side simply means that they do not perfectly fulfill our values."
That wasn't what I meant - what I meant is that they don't completely share human values - not that they don't fulfill them.
Replies from: Hook↑ comment by Hook · 2010-07-16T12:33:36.078Z · LW(p) · GW(p)
By interests, I mean concerns related to fulfilling values. For the time being, I consider human minds to be the only entities complex enough to have values. For example, it is very useful to model a cancer cell as having the goal of replicating, but I don't consider it to have replicating as a value.
The cancer example also shows that our own cells don't fulfill or share our values, and yet we still model the consumption of cancer cells as the consumption of a human being.
If you really want to ignore direct consumption by machines - and pretend that the machines are all working exclusively for humans, doing our bidding precisely - then you have GOT to account for people and companies buying things for the machines that they manange - or your model badly loses touch with reality.
I think I might have the biggest issue with this line. Nobody is pretending that machines are all working exclusively for humans, no more than we pretend our cells are working exclusively for us. The idea is that we account for the machine consumption the same way we account for the consumption of our own cells, by attributing it to the human consumers.
Replies from: timtyler, timtyler↑ comment by timtyler · 2010-07-16T19:36:19.948Z · LW(p) · GW(p)
The idea being criticised is that - if a few humans dominate the economy by commanding huge armies of robot minions, then - without substantial taxation - the economy will grind to a halt - since hardly any humans are earning any money, and so therefore hardly any humans are spending any money.
The problem with that is that the huge armies of robot minions are consuming vast quantities of material while competing with each other for resources - and the purchase of all those goods is not being accounted for anywhere in the model - apparently because of the ideas that only humans are consumers and most humans are unemployed.
It seems like a fairly straightforward modelling mistake to me. The purchase of robot fuel and supplies has GOT to be accounted for. Account for it as mega-spending by the human managing director if you really must - but account for it somewhere. As soon as you do that, the whole idea that increased automation leads to financial meltdown vanishes like a mirage.
We already have a pretty clear idea about the effect of automation on the economy - from Japan and South Korea. The machines do a load of work, and their bodies need feeding - creating demand for raw materials and fuel - and the economy is boosted.
Replies from: jimrandomh↑ comment by jimrandomh · 2010-07-16T20:54:23.905Z · LW(p) · GW(p)
How does needing raw materials create employment for the rest of the population? If everything is mechanized, then raw materials come from those who own mines/wells, and the extraction is done by robot labor. That doesn't involve very many people.
Replies from: timtyler↑ comment by timtyler · 2010-07-17T08:51:07.936Z · LW(p) · GW(p)
It doesn't create employment for the rest of the humans. In this scenario, most humans are unemployed - and probably rather poor - due to the hypothesised lack of "substantial taxation" and government handouts. The throughput of the economy arises essentially from the efforts of the machines.
↑ comment by timtyler · 2010-07-16T20:15:10.987Z · LW(p) · GW(p)
There is another take on the word "value" - which defines it to mean that which goal-directed systems want.
That way, you can say things like: "Deep Blue usually values bishops more than knights".
To me, such usage seems vastly superior to using "values" to refer to something that only humans have.
↑ comment by timtyler · 2010-07-14T20:22:37.979Z · LW(p) · GW(p)
Well, for one thing, most people on the planet don't have very much in the way of savings - and for another, sustained annual returns of 1,000% are a fantasy - for some actual figures, see here.
Replies from: Roko↑ comment by Roko · 2010-07-14T20:28:56.711Z · LW(p) · GW(p)
Why do you think that 1000% annual returns are implausible in a fully automated world?
Replies from: timtyler, mattnewport↑ comment by timtyler · 2010-07-14T20:45:03.804Z · LW(p) · GW(p)
Sustained substantial growth rates seem highly unlikely. Resources - at best - grow as t cubed, the volume reachable by expanding at light speed - so growth is likely to relatively quickly flatten out to close to 0% - a la Malthus.
I won't estimate peak growth rates here - but the higher they are, the shorter their duration will be.
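One way to see why a sustained exponential collides with a cubic resource bound is to simply race the two curves. A purely illustrative sketch (the scale constant and the 1000%-per-year figure from the thread are assumptions for the demonstration, not measured values):

```python
# Illustrative only: a sustained 1000% annual return means the economy
# multiplies by 11x each year. If reachable resources grow at best like
# t^3 (the volume swept out at light speed), any exponential must
# eventually outrun them, however large the cubic's scale constant is.

def exponential_economy(t, factor=11.0):
    """Economy size after t years of sustained 1000% annual returns."""
    return factor ** t

def cubic_resources(t, k=1e12):
    """Resources reachable after t years; k is an arbitrary head start."""
    return k * t ** 3

# Find the year the exponential overtakes the cubic bound.
t = 1
while exponential_economy(t) < cubic_resources(t):
    t += 1
print(f"Exponential growth exhausts the cubic resource bound by year {t}")
```

Even with a trillion-fold head start for the resource curve, the crossover arrives within a couple of decades; making the head start a trillion times larger again only buys a handful of extra years, which is the sense in which sustained 1000% returns are incompatible with a cubic bound.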
↑ comment by mattnewport · 2010-07-14T20:38:04.151Z · LW(p) · GW(p)
Assuming any kind of physical resource limits on real value (and this would include physics imposed computational limits so simulation is no get out) it can't go on for very long.
Replies from: timtyler↑ comment by timtyler · 2010-07-14T21:32:21.391Z · LW(p) · GW(p)
Bryan says:
"why couldn't economic growth of 1% (or 10%) continue forever in simulations? In the real world, we can't all be emperor of an infinite universe. But I don't see why every one of us couldn't preside over our own simulated utopias?"
My comment there was:
"The universe is made of atoms - and other particles. Virtual reality - and all the things economists study - is made of that stuff. You can't have simulations or value without things being ultimately made out of matter."
However, that doesn't prevent simulated high-utility worlds. It seems like the wirehead problem. Yes: we could go into a virtual world and award ourselves a centillion utility points - but then we and our simulated utopias would most likely be obliterated by the very first aliens that come across us - if a meteorite didn't dispatch us long before that.
Replies from: steven0461, mattnewport↑ comment by steven0461 · 2010-07-14T21:54:29.341Z · LW(p) · GW(p)
It's not an either/or thing. You could spend, say, 1% of your resources on setting up a colonization wavefront.
Replies from: timtyler↑ comment by timtyler · 2010-07-14T22:02:26.797Z · LW(p) · GW(p)
Right - but there are practically bound to be aliens out there somewhere who just want to conquer the universe - and couldn't care less about ecstasy or utopias. Those folk seem likely to eat utopias for breakfast, lunch and dinner. Unless we are a lot more confident that those folk are nowhere nearby, we should probably behave conservatively.
↑ comment by mattnewport · 2010-07-14T21:37:12.600Z · LW(p) · GW(p)
but then we and our simulated utopias would most likely be obliterated by the very first aliens that come across us
I doubt you'd have to wait for aliens - humans, trans-humans or AIs who stayed outside the simulation would likely do the job first. One reason I'd have no desire to enter a simulation.
Replies from: timtyler↑ comment by timtyler · 2010-07-14T21:44:22.201Z · LW(p) · GW(p)
Natural selection deals with most wireheads. However, there are scenarios where the entire civilisation does this sort of thing.
If there are no competitors in the short term then natural selection's action could be delayed - until some turn up.
↑ comment by khafra · 2010-07-14T16:44:37.034Z · LW(p) · GW(p)
I have to say I'm still missing the means by which the surplus of goods and services is transferred to people who are no longer needed. To be sure, in the long run labor-saving devices have increased the surplus of available goods, but each generation of workers which has its skills obsoleted by machines seems to undergo a permanent drop in standard of living; even very early automation had massive deleterious social effects.
Maximizing the minimum standard of living does not automatically flow forth from technological progress; at best, it takes careful planning.
Replies from: Roko↑ comment by Roko · 2010-07-14T19:36:26.102Z · LW(p) · GW(p)
but each generation of workers which has its skills obsoleted by machines seems to undergo a permanent drop in standard of living
Permanent drop in standards of living sounds implausible. Maybe a permanent relative drop. What evidence do you have for this?
↑ comment by timtyler · 2010-07-14T16:54:10.185Z · LW(p) · GW(p)
Re: "If the average consumer is unemployed and has no income, he is obviously not going to be purchasing stuff for his machines. In fact, ownership of the machines will concentrate into a shrinking elite as machines take the jobs of average people."
Sure - but that was not the point which you were making that I was criticising:
You argued that unemployment would mean that spending would decline - and the economy would plummet or crash.
Whereas a more accurate analysis suggests that those in charge of the machines will ramp up their spending to feed the demands of their machines - thereby contributing to expenditure in the economy. In other words, the machines will buy and sell things - if not directly, then via companies or humans. This is not really a "jump into the stuff of science fiction" - people regularly buy things to "feed" their machines today.
The machines act as consumers - in that they consume things - even if there is a human or corporation who is buying the things involved somewhere. So: the whole idea that the economy will collapse because nobody is earning money and buying things falls down. Machines will still be earning money and buying things - even if they will be doing it by proxy agents in the form of corporate or human masters.
This idea is a fairly central one in "The Lights in the Tunnel" - and it is based on unsound thinking and bad economics :-(
Re: "Everything produced by the human economy is ultimately consumed by individual human beings."
That seems like a rather muddled way of looking at the situation. Machines have needs too. They slurp up gas, oil, electricity, raw materials. They consume - and excrete - along with all other agents in the biosphere. Companies act as non-human consumers too. It could be argued that machines and companies are slaves to humanity (to some extent - though the inverse perspective - that they are using us to manipulate the machine world into existence - also has considerable validity) - but that doesn't mean that we consume their waste products.
Re: "If too few people have the ability to purchase END products, the mass market economy will collapse."
No: the issue is not the number of human consumers, but their total spending power. Rich minorities commanding huge squads of machine minions could generate a large demand for resources - and also the ability to produce those resources - thus lubricating the economy very effectively.
Of course in a human democracy, voters would try hard to ramp up corporation taxes - in order to resist such huge inequality - but that is another issue entirely.
Replies from: Hook↑ comment by Hook · 2010-07-14T19:21:50.568Z · LW(p) · GW(p)
So, if say a million people owned all of the machines in the world, and they had no use for the human labor of the other billions of people in the world, you would still classify the economy as very effective?
I guess the question is what counts as an economic crash? A million extremely well off people with machines to tend to their every need and billions with no useful skills to acquire capital seems like a crash to most of the people involved.
Replies from: timtyler↑ comment by timtyler · 2010-07-14T19:28:39.120Z · LW(p) · GW(p)
The following quotation illustrates the context of the discussion:
"once full automation penetrates the job market to a substantial degree, an economy driven by mass-market production must ultimately go into decline. The reason for this is simply that, when we consider the market as a whole, the people who rely on jobs for their income are the same individuals who buy the products produced."
- "THE LIGHTS IN THE TUNNEL", page 24
This claim relates to the state of the marketplace - and not to the status of the unemployed humans. My comment about "lubricating the economy very effectively" was not intended to imply anything about human welfare.
↑ comment by Richard_Kennaway · 2010-07-13T16:06:00.836Z · LW(p) · GW(p)
The argument of the book looked to me on a brief eyeballing like a woolly mass of words, but the question it asks seems fair enough: If the material needs and desires of the whole population can be met by the labour of a small fraction, how do the rest of the population get the stuff they want? But this question has been asked since mass production was invented, and the scenario has still not come to pass. Somehow, the work has always expanded to use most of the population of working age.
Even if this time, massive technological unemployment really is going to happen, I'm not convinced by the book's answers. From the blurb:
The book directly challenges nearly all conventional views of the future and illuminates the danger that lies ahead if we do not plan for the impact of rapidly advancing technology.
Planning fallacy? We've had rapidly advancing technology for at least 200 years. What dangers of our rapidly advancing technology have in the past been avoided by planning? If automation does get to near-AGI levels, and a small fraction of the population can produce everything, the resulting society will look very different from today's, but I don't expect government planning to have much to do with the process of change.
Replies from: timtyler↑ comment by timtyler · 2010-07-14T17:27:06.146Z · LW(p) · GW(p)
Re: "Somehow, the work has always expanded to use most of the population of working age."
Machines are still very stupid in many work-related domains - relative to humans. The issue of what on earth the unemployed will do is likely to arise with greater acuity once machine capabilities shoot past our own in most industry-related domains - retail, farming, distribution, mining, etc.
I go into these issues on: http://alife.co.uk/essays/will_machines_take_our_jobs/
comment by Stuart_Armstrong · 2010-07-13T11:05:14.780Z · LW(p) · GW(p)
Upvoted those options on the website.
comment by Daniel_Burfoot · 2010-07-13T03:19:16.685Z · LW(p) · GW(p)
Where do I vote to disband the Council?