Defeating Mundane Holocausts With Robots

post by lsparrish · 2011-05-30T22:34:39.020Z · LW · GW · Legacy · 28 comments

Causes of death such as malaria and hunger are certainly worth allocating resources to prevent, for today's results-oriented philanthropist. It's almost ridiculous to realize that we can put $1000 towards mosquito netting and save a human life. However, these interventions will eventually run out of low-hanging fruit, especially as countries become more developed. By advancing the adoption of certain key near-term technologies just a little sooner, we can make significant gains even in developed countries, where the causes of death are more complex and strike later in life.

According to Brad Templeton's executive summary of the case for robotic cars:

Every year we delay deploying robocars (and related technology) in the USA, human driving will kill another 35,000, and 1.2 million worldwide.

Anything in the range of a million people per year definitely qualifies as a holocaust! And yet this is actually a fairly small fraction of the roughly 57 million deaths worldwide each year, most of which are caused by heart disease or infectious diseases. Nonetheless, self-driving cars strike me as an attractive initial goal for the following reasons:
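The "fairly small fraction" claim is easy to check with the post's own (approximate) figures:

```python
# Rough arithmetic behind the claim above, using the post's approximate figures.
road_deaths_per_year = 1_200_000      # worldwide deaths from human driving (Templeton)
total_deaths_per_year = 57_000_000    # rough worldwide deaths per year

share = road_deaths_per_year / total_deaths_per_year
print(f"Road deaths as a share of all deaths: {share:.1%}")  # about 2.1%
```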

The robotic exo-suit is another near-term source of dramatically increased GDP -- a person wearing one can perform manual labor with greater endurance and reduced danger of physical injury, without undergoing painful physical conditioning. It can delay forced retirement, since a feeble body will no longer be an obstacle to a number of tasks. Furthermore, powered suits may prove the key to truly comfortable hermetically sealed environments -- something that can be very handy when old age hits and your immune system declines. Such sealed environments can also be useful for keeping infectious diseases contained.

The artificial heart is another technology that could significantly reduce instances of death. We are kept alive by two pounds of throbbing muscle just waiting to give out on us. Removing that risk from the picture would have a huge impact on the death rate in the developed world.

Another huge risk-reducer would be wider adoption of robotic surgery. This enables surgical interventions to take place under far more controlled circumstances, without hand tremors and human error to complicate matters. As surgery becomes safer and less invasive, it can be used for preventative maintenance rather than being reserved for when something has already gone wrong.

Cryonics and robust rejuvenation treatments are still very significant from a life-extension perspective. But proof that they work may not be available until it is too late to convince people in this generation to start allocating significant resources to them. A better strategy might be to invest first in these less radical technologies (while still maintaining a healthy activist base for life-extension memes) and use the economic gains to jump on the growing life-extension market as it starts to open up.

Another thing to bear in mind is that cheap, accessible robotics can lead to cheaper, more accessible cryonics and life-extension drugs. These things tend to synergize. A factory where the workers are equipped with exo-suits can produce chemicals, drugs, and mechanical parts more quickly and cheaply. The more easily a new piece of robotic equipment can be prototyped and tested, the sooner it is likely to see use, bringing forward the safety and economic gains for humans.

28 comments


comment by Manfred · 2011-05-31T04:28:10.912Z · LW(p) · GW(p)

For topics like this, I recommend making use of a unit called "quality-adjusted life years" (QALYs). We can never ultimately reduce instances of death except by reducing the birth rate or breaking the second law of thermodynamics.

Viewing things this way is useful because we can see that, for example, artificial hearts should be weighted less per "life saved" than self-driving cars because they target an older population, and so they result in fewer extra years of life.
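Manfred's weighting argument can be made concrete with a toy calculation. The ages, life expectancy, and quality weight below are made-up illustrative numbers, not figures from the thread:

```python
# Toy QALY comparison with made-up numbers: averting a crash death at 40
# vs. a heart-failure death at 70, against an assumed life expectancy of 80.
LIFE_EXPECTANCY = 80

def qalys_gained(age_at_death_averted, quality=1.0):
    """Quality-adjusted life years gained by averting one death at a given age."""
    return max(LIFE_EXPECTANCY - age_at_death_averted, 0) * quality

crash_averted = qalys_gained(40)               # self-driving cars: younger victims
heart_averted = qalys_gained(70, quality=0.8)  # artificial hearts: older, some quality loss

print(crash_averted, heart_averted)  # 40.0 8.0
```

On these invented numbers, one averted crash death is worth five averted heart-failure deaths -- the direction of Manfred's point, even though the magnitudes are purely illustrative.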

Also, I think robotic surgery that's better than human surgery might be close to as hard as just building a general AI, so probably better to focus on that.

Replies from: Vladimir_Nesov, NancyLebovitz, lsparrish
comment by Vladimir_Nesov · 2011-05-31T21:39:43.220Z · LW(p) · GW(p)

For topics like this, I recommend making use of a unit called "quality-adjusted life years" (QALYs). We can never ultimately reduce instances of death except by reducing the birth rate or breaking the second law of thermodynamics.

Where the limitation from laws of thermodynamics is a relevant idea, quality-adjusted years of life isn't.

Replies from: MixedNuts
comment by MixedNuts · 2011-05-31T21:54:07.641Z · LW(p) · GW(p)

If you allow quality to be greater than 1, it might be. Computation becomes uncertain, though.

comment by NancyLebovitz · 2011-06-01T18:05:35.629Z · LW(p) · GW(p)

Also, I think robotic surgery that's better than human surgery might be close to as hard as just building a general AI, so probably better to focus on that.

I think one kind of better is relatively easy and the other is close to GAI.

Robot surgeons which avoid the human mistakes which occur due to tiredness and distraction would be relatively straightforward.

Robot surgeons which are better than human surgeons at handling unusual problems-- especially if you want them to be better than the best human surgeons-- strike me as GAI territory.

I'm concerned that robot surgeons could mean that less knowledge would be accumulated because they'd be less likely to notice anomalies and possible improvements to existing procedures.

Replies from: GLaDOS
comment by GLaDOS · 2011-06-09T00:21:44.800Z · LW(p) · GW(p)

I'm concerned that robot surgeons could mean that less knowledge would be accumulated because they'd be less likely to notice anomalies and possible improvements to existing procedures.

Moderately high-IQ humans are a valuable resource; there are plenty of things that simply aren't being done because humans prefer the high-status professions of "surgeon" or "doctor". Reducing their numbers while maintaining the same quality of service would be a great boon.

comment by lsparrish · 2011-05-31T05:08:34.139Z · LW(p) · GW(p)

I agree that self-driving cars should be weighted higher than artificial hearts per life saved, but the number of lives (assuming we don't count the indirect economic effects of self-driving cars) could be quite a bit greater given that such a high percentage of deaths are heart-related.

The fact that they target an older population is slightly less of a factor if we consider that old age hits a mortality plateau, and assuming that this plateau rate can be reduced by such interventions. (Stroke, dementia, cancer, and so forth would definitely become higher priorities at this point.) Also the fact that the intervention occurs later in life (and hence later in time) increases the probability that it will serve as a bridge to robust rejuvenation or to more effective cryonics.
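The "mortality plateau" idea (late-life mortality deceleration) can be sketched with a toy hazard model. The parameters below are illustrative assumptions, not fitted to real demographic data:

```python
# Toy model: Gompertz-style exponential hazard growth vs. a late-life plateau.
# All parameters are illustrative assumptions, not fitted values.
BASE_HAZARD = 0.0001   # assumed annual death probability at age 0
DOUBLING_YEARS = 8     # hazard doubles roughly every 8 years (Gompertz-like)
PLATEAU = 0.5          # assumed late-life ceiling on annual death probability

def gompertz_hazard(age):
    """Annual death probability under unchecked exponential growth."""
    return BASE_HAZARD * 2 ** (age / DOUBLING_YEARS)

def plateau_hazard(age):
    """Same growth, but capped at the late-life plateau."""
    return min(gompertz_hazard(age), PLATEAU)

for age in (70, 90, 110):
    print(age, round(gompertz_hazard(age), 3), round(plateau_hazard(age), 3))
```

The point of the sketch: an intervention that lowers the plateau (rather than the early-life hazard) still buys meaningful years, which is why the "older population" discount is smaller than a naive extrapolation of midlife mortality would suggest.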

Currently, robotic surgery is teleoperated by humans. With software that learns from human interaction, automated surgery could probably be developed, starting with the most predictable operations and working towards more complex ones. It would never have to be human-level or general; narrow AI that is good at surgery should be sufficient.

comment by endoself · 2011-05-31T03:28:23.154Z · LW(p) · GW(p)

I think this would be a lot better if you gave numbers for everything. Efficient charity may tend to focus on the present (malaria in Africa) or the far future (SIAI), but if you want to expand this to a different range of times, give hard numbers so that your proposals can be easily compared to everything else.

Replies from: lsparrish
comment by lsparrish · 2011-05-31T05:25:07.227Z · LW(p) · GW(p)

I agree that more numbers would be better. However, it is worth noting that estimating probabilities for e.g. ubiquitous robotic surgery having more impact on life extension than self-driving cars is a matter of some pretty complex modeling with lots of hidden variables.

What kinds of surgery might become more popular if they were cheaper (how much?) and safer? How does it affect cryonics adoption rates, preventative surgery adoption rates, effectiveness, and so forth? What kinds of economic impacts are to be expected of self-driving cars? How much do the various approaches synergize, such that putting dollars into one is similar in effect to putting dollars into another?

Replies from: endoself
comment by endoself · 2011-05-31T19:11:06.292Z · LW(p) · GW(p)

If you believe that these future timescales are so hard to predict, what evidence makes you think that robotics will be particularly valuable as a solution to future problems (as opposed to anything else one could come up with)?

Replies from: lsparrish
comment by lsparrish · 2011-05-31T20:48:03.341Z · LW(p) · GW(p)

Overall I would rate the probability that robotic solutions will be useful as high, because they are already seeing enough incremental advancement to be useful. From here it's a matter of scaling, ironing out the bugs, overcoming regulatory hurdles, and so on. So in my estimation the probability of them being vaporware (in the nearer term) is lower than for nanotech, genetic engineering, or AGI.

comment by Kevin · 2011-05-31T13:57:11.324Z · LW(p) · GW(p)

I expect the Mundane Holocaust of African dictators will indeed be defeated by armies of small killer robots controlled remotely by recruits who played lots of first-person-shooter videogames as kids. Armies quickly lose the will to fight against robots: people just don't find it worth it to give their lives in a heroic act of destruction of an inanimate object.

Replies from: MixedNuts, GLaDOS
comment by MixedNuts · 2011-05-31T14:05:09.501Z · LW(p) · GW(p)

Source?

Also, though morale helps, officers don't lose the will to shoot soldiers who refuse to give their lives to destroy objects.

Replies from: timtyler
comment by timtyler · 2011-05-31T14:55:14.682Z · LW(p) · GW(p)

For killer robots controlled remotely see: http://dronewarsuk.wordpress.com/

Replies from: MixedNuts
comment by MixedNuts · 2011-05-31T15:04:16.317Z · LW(p) · GW(p)

No, I know they exist - I've seen the xkcd and right now I'm at work (trying to make sense of a $#@ paper) on a project that has to do with them. I was asking how Kevin knows that enemy soldiers lose motivation when fighting robots.

Replies from: timtyler
comment by timtyler · 2011-05-31T15:14:49.892Z · LW(p) · GW(p)

Hmm. Well, fighting high-flying drones is tricky. The usual story is that they rain hellfire down on you - and then you die.

I suppose you can appeal to the United Nations Security Council - if you are still not dead yet.

Replies from: CronoDAS
comment by CronoDAS · 2011-05-31T21:07:36.392Z · LW(p) · GW(p)

The usual trick is to hide among the local civilian population...

comment by GLaDOS · 2011-06-09T00:33:45.784Z · LW(p) · GW(p)

Afterwards I assume the teenagers would then proceed to rule the territories? Perhaps employ the local humans for testing or resource extraction? That's what I would do.

Or are we observing failed states and need to procure a few more by destabilizing governments?

comment by timtyler · 2011-06-01T21:07:57.202Z · LW(p) · GW(p)

We do want smarter cars - but it seems worth noting that machines being deployed while they were still too stupid is what caused this problem in the first place.

Too-stupid machines may yet cause many more problems as robots get rolled out. Now: where's my jetpack?

Replies from: lsparrish, MixedNuts
comment by lsparrish · 2011-06-01T21:27:13.721Z · LW(p) · GW(p)

Not exactly a jetpack, but pretty close.

comment by MixedNuts · 2011-06-01T21:22:20.022Z · LW(p) · GW(p)

Deeming current jetpacks too stupid may be sour grapes.

comment by AlphaOmega · 2011-05-31T02:01:43.884Z · LW(p) · GW(p)

If your goal is to maximize human life, maybe you should start by outlawing abortion and birth control worldwide. Personally I think reducing human values to these utilitarian calculations is absurd, nihilistic and grotesque. What I want is a life worth living, people worth living with and a culture worth living in -- quality, not quantity. The reason irrational things like religion, magical thinking and art will never go away, and why I find the ideology of this rationality cult rather repulsive, is because human beings are not rational robots and never will be. Trying to maximize happiness via rationality is a fool's quest! The happiest people I know are totally irrational! If maximal rationality is your goal, you need to exterminate humanity and replace them with machines!

(Of course it may be that I am off my meds today, but I don't think that invalidates my points.)

Replies from: Bobertron, lsparrish
comment by Bobertron · 2011-05-31T09:17:20.323Z · LW(p) · GW(p)

What I want is a life worth living, people worth living with and a culture worth living in -- quality, not quantity

There might be differences in how to achieve that, but I'm pretty sure everyone here agrees with that in general.

irrational things like religion, magical thinking and art

One of those things definitely doesn't belong in this list (hint: it's art).

Trying to maximize happiness via rationality is a fool's quest! The happiest people I know are totally irrational!

You are confusing the concept of increasing happiness by rational means with increasing happiness by teaching rationality to people. If you only care about happiness and people who engage in magical thinking are systematically happier, it would be completely rational to teach magical thinking. If you teach rationality to people, it will destroy some of their irrational beliefs. Depending on whether those irrational beliefs make them happy or unhappy, the impact on happiness would (I think) depend heavily on the person.

If maximal rationality is your goal

It certainly isn't.

Replies from: endoself
comment by endoself · 2011-06-01T02:30:23.215Z · LW(p) · GW(p)

What I want is a life worth living, people worth living with and a culture worth living in -- quality, not quantity

There might be differences in how to achieve that, but I'm pretty sure everyone here agrees with that in general.

I don't. Quantity times quality. Or do you count that as agreement?

Replies from: Bobertron
comment by Bobertron · 2011-06-01T10:03:19.401Z · LW(p) · GW(p)

Yes. You still care about quality, just not exclusively.

I agree with you that quantity is important, too.

comment by lsparrish · 2011-05-31T02:31:39.610Z · LW(p) · GW(p)

I'm not sure how you think this applies to anything said in my post. I never said anything about maximizing the total number of humans in existence. Your strategy for doing so sounds like a recipe for a Malthusian disaster, which would probably diminish the number of humans in existence in the long run.

Humans are rational compared to most other naturally existing entities -- rationality is one of the key aspects which sets us apart from the other animals. And while you may feel repulsion at the fact that others value rationality higher than you do, you should know that many of us feel repulsion at those who value rationality less than we do. The feeling of repulsion isn't the issue though; the fact that millions will die painfully and pointlessly because of irrational behavior is the issue.

Replies from: AlphaOmega
comment by AlphaOmega · 2011-05-31T02:50:28.149Z · LW(p) · GW(p)

I'm not sure either, it was a general rant against hyper-rational utilitarian thinking. My utility function can't be described by statistics or logic; it involves purely irrational concepts such as "spirituality", "aesthetics", "humor", "creativity", "mysticism", etc. These are the values I care about, and I see nothing in your calculations that takes them into account. So I am rejecting the entire project of LessWrong on these grounds. Have a nice day.

Replies from: lsparrish, Yvain
comment by lsparrish · 2011-05-31T02:52:14.720Z · LW(p) · GW(p)

My utility function can't be described by statistics; it involves purely irrational concepts such as "spirituality", "aesthetics", "humor", "creativity", "mysticism", etc. These are the values I care about, and I see nothing in your calculations that takes them into account. So I am rejecting the entire project of LessWrong on these grounds.

The fact that you don't see these things accounted for is a fact about your own perception, not about utilitarian values (which actually do account for these things).

The fact that you are reluctant to assign numbers to them is a feature of your own psychology, not whether they can in fact be modeled accurately by numbers.

Your rejection of Less Wrong and similar approaches is something you are free to do -- but chances are you won't find a better way to implement your most idealistic goals for the world than rationality.