How Safe are Uploads?

post by paulfchristiano · 2011-03-27T22:31:37.160Z · LW · GW · Legacy · 55 comments

I have encountered the argument that safe brain uploads are as hard as friendly AI. In particular, this is offered as justification for focusing on the development of FAI rather than spending energy trying to make sure WBE (or an alternative based on stronger understanding of the brain) comes first. I don't yet understand/believe these arguments.

I have not seen a careful discussion of these issues anywhere, although I suspect plenty have occurred. My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?

 

Suppose human society has some hope of designing FAI. Then I strongly suspect that a community of uploads has at least as good a chance of designing FAI. If I can find humans who are properly motivated, then I can produce uploads who are also motivated to work on the design of FAI. Moreover, if emulated brains eventually outproduce us significantly, then they have a higher chance of designing an FAI before something else kills them. The main remaining question is how safe an upload would be, and how well an upload-initiated singularity is likely to proceed.

There are two factors suggesting the safety of an upload-initiated singularity. First, uploads always run as fast as the available computing substrate, so computers never get subjectively faster from an upload's perspective; this makes it less likely for an upload to accidentally stumble upon (rather than deliberately design) AI. Second, there is hope of controlling the nature of uploads: if rational, intelligent uploads are responsible for most upload output, then we should expect the probability of a friendly singularity to be correspondingly higher.

The main factor contributing to the risk of an upload-initiated singularity is that uploads already have access to uploads: editable, copyable human minds to experiment on. It is possible that uploads will self-modify unsafely, and that this may be easier (even in relative terms) than it is for existing humans to develop AI. Is this the crux of the argument against uploads? If so, could someone who has thought through the argument please spell it out in much more detail, or point me to such a spelling out?

55 comments


comment by Vladimir_M · 2011-03-27T23:34:45.335Z · LW(p) · GW(p)

My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?

You're probably familiar with Robin Hanson's writings on the economics of uploads. If you accept his arguments -- and I do find them very convincing -- uploads will lead quickly and directly to an extremely grim Malthusian equilibrium. (Though Hanson himself, who accepts the Repugnant Conclusion but sees nothing repugnant about it, wouldn't characterize it as grim. Most people, however -- including, I think, most people on LW -- would find it rather horrible, assuming they really understood the implications.)

I'm not at all optimistic about what awaits us if any sort of machine intelligence gets developed, but the upload scenario strikes me as especially dismal.

Replies from: FAWS, paulfchristiano
comment by FAWS · 2011-03-28T00:04:09.565Z · LW(p) · GW(p)

Robin's vision is actually far, far worse than the ordinary Repugnant Conclusion, because it doesn't necessarily preserve or even increase total net utility. You do end up with huge numbers of people with lives barely worth living, but they are only a tiny fraction of the people you'd end up with under the ordinary Repugnant Conclusion (which implies free resources sufficient to bring newly created people up to the barely-worth-living level), given the same starting point.

comment by paulfchristiano · 2011-03-27T23:43:43.380Z · LW(p) · GW(p)

I'm not incredibly familiar with Robin Hanson's arguments; I think I disagree with his assumption/conclusion that a singleton is unlikely. The balance of power he presumes seems incredibly unlikely to persist for long.

Moreover, the question is whether we can engineer a future where uploads design a friendly singularity. To me (naively) this seems easier than friendliness. Hanson's writings don't really speak to this question.

Replies from: atucker
comment by atucker · 2011-03-28T03:47:58.118Z · LW(p) · GW(p)

I'm not incredibly familiar with Robin Hanson's arguments;

I'm going to try to summarize them (feedback on how well this works, please). One relates to how uploads come to dominate humanity, and another to how ruthless resource expanders come to dominate their section of the universe.

Uploads beating Humans

  1. An upload will be as mentally capable as a human, but faster

  2. An upload will be easy to copy

  3. Uploads require much less to survive

Basically, the argument goes that uploads will be able to replace large swathes of normal humans who work on cognitive tasks, such as lawyers, engineers, writers, academics, programmers, etc.

Imagine a mathematician who has literally 100 times as much time as you do. Where you spend a day on a problem, they have months.

  • Once you have any uploads willing to be duplicated, they will be willing to be duplicated for as many tasks as they can be paid to do.

  • As supply of labor goes up, wages go down.

  • Wages can only go down to subsistence level before you can't push them any lower.

  • As long as there is incentive to copy, people will copy, and supply will increase until wages go down to upload subsistence level in a wide variety of fields. If uploads can control robots, then the number of fields they dominate is even higher. (A toy model after this list makes the wage dynamic concrete.)

  • Since the subsistence level of uploads is much cheaper than that of natural humans, maintaining a biological body is going to be ridiculously expensive, and beyond the reach of most people. Unable to support themselves, many people die, leaving mostly the uploads.
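
Here is the promised toy model of that wage dynamic. It is only a sketch: every number in it is an invented assumption for illustration, not a figure from Hanson. Labor supply grows by copying whenever the prevailing wage exceeds an upload's running cost, and the wage is a diminishing marginal product of labor.

    # Toy model of the copying-until-subsistence argument.
    # All parameters are invented assumptions, for illustration only.

    HUMAN_SUBSISTENCE = 10.0   # annual cost to keep a biological human alive
    UPLOAD_SUBSISTENCE = 0.01  # annual cost to keep one upload running
    A = 1000.0                 # productivity scale of the economy

    def wage(labor):
        """Marginal product of labor when total output is A * labor**0.5."""
        return 0.5 * A * labor ** -0.5

    labor = 1.0
    while wage(labor) > UPLOAD_SUBSISTENCE:
        labor *= 2  # copying is cheap, so supply doubles while it is profitable

    print(f"equilibrium labor force: {labor:.3g}")
    print(f"equilibrium wage: {wage(labor):.4f}")
    print(f"humans viable at this wage? {wage(labor) >= HUMAN_SUBSISTENCE}")

Under any parameters with this shape, the loop stops only once the wage has fallen to the upload's running cost, far below human subsistence; that is the whole argument in miniature.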

Expanders beating Non-expanders

The Nash equilibrium for controlling matter in the universe is to devote everything you control to acquiring more matter to control. When you're up against an opponent like that, it will have more resources than you, with which to destroy you and repurpose your matter.

Groups which encounter planets and turn as much as they can into probes to do more colonization will wind up reaching and controlling more planets than groups that don't.

Replies from: Vladimir_M, None
comment by Vladimir_M · 2011-03-28T04:48:37.548Z · LW(p) · GW(p)

I'd say that's a good summary. To complete the grimness of the picture, the vast swarms of uploads toiling for the absolute minimum subsistence would be massively annihilated whenever they became even slightly obsolete or otherwise a suboptimal way to use the hardware on which they are running, and a recession in an upload economy would have an effect similar to a bad harvest producing a cataclysmic famine among Malthusian farmers. As Hanson put it, "When life is cheap, death is cheap as well."

On top of all that, to make things even more ghastly from the perspective of LW ideals, Hanson has made the shrewd observation that in order to make their subsistence more bearable, their behavior more productive and cooperative, and the acceptance of their eventual demise easier, uploads may well end up having their minds indoctrinated with religion and ideology, not trained in LW-style epistemic rationality (beyond what's necessary for their main task, of course).

Replies from: atucker, Eneasz
comment by atucker · 2011-03-28T14:11:59.079Z · LW(p) · GW(p)

So the universe gets tiled with a sea of barely surviving people similar to humans, who are optimized towards whatever makes them most likely to remain productive in their dreary existence.

So like, the opposite of everyone becoming happy and rich.

Replies from: Eneasz
comment by Eneasz · 2011-03-28T20:15:10.791Z · LW(p) · GW(p)

Well, in Hanson's words: Poor Folks Do Smile

Our ancestors were designed with pleasure and pain to motivate them in a near subsistence world. ... Our descendants will be similarly adapted to find joy and meaning in their near subsistence lives.

Replies from: atucker
comment by atucker · 2011-03-28T21:02:48.868Z · LW(p) · GW(p)

I guess it was a stretch to say that it's not like everyone becoming happy.

I don't think that uploads would require nearly as much matter to lead a happy life. Basically right now if I want to have a nice, warm, comfortable place to sleep and a stomach full of nutritious food, I need to rearrange lots of stuff to physically construct those.

Contrast that with an upload, who can simply have a computer stimulate his emulated neurons in such a way as to make him believe he has those things.

I think it's likely that uploads will need a simulated environment anyway, and I doubt if a pleasant one is harder to simulate than an unpleasant one.

For that reason, I personally think that an upload is more likely to be living in a state of sensory darkness and deprivation (which I would find pretty terrifying) from having no stimulation than in an unpleasant simulation for not being able to afford a nicer one.

comment by Eneasz · 2011-03-28T20:18:36.682Z · LW(p) · GW(p)

uploads may well end up having their minds indoctrinated with religion and ideology, not trained in LW-style epistemic rationality

IIRC, he says that religion and ideology are symptoms of modern-day wealth/excess, and future folk won't be able to afford non-adaptive/non-correct beliefs. He calls our current position in history "The Dreamtime".

we live in the brief but important “dreamtime” when delusions drove history. Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialed to the max.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-03-28T21:39:06.299Z · LW(p) · GW(p)

IIRC, he says that religion and ideology are symptoms of modern-day wealth/excess, and future folk won't be able to afford non-adaptive/non-correct beliefs. He calls our current position in history "The Dreamtime".

Well, it is possible that he has said inconsistent things at different times, but in the posts I linked in my above comment, he argues (in my opinion plausibly) that the social mechanisms of control and coordination for ems may well end up being based on similar (epistemically) irrational beliefs as in historical human societies, i.e. religion, ideology, strict custom, etc. ("Onward Christian robots!," as he put it.)

[Edit - forgot to add: ] And of course, adaptive and correct beliefs are not always one and the same, and it's a huge fallacy to argue as if they were.

comment by [deleted] · 2011-03-28T18:38:50.885Z · LW(p) · GW(p)

Basically, the argument goes that uploads will be able to replace large swathes of normal humans who work on cognitive tasks, such as lawyers, engineers, writers, academics, programmers, etc.

This resembles the "Luddite fallacy", which was debunked by experience. That is to say: had the Luddites been right that the majority of the workforce would be replaced by a much more productive minority working the labor-saving machines (compare: humans replaced by much more productive uploads), we would already be living in something like a Hansonian upload dystopia, which we are not.

What instead happened was that the labor force stayed the same and production greatly expanded, and labor reaped a large part of the benefit of the expansion.

Extending what actually happened in the Luddite scenario to the upload scenario, we might expect the amount of work to be done to expand to fully accommodate the number of people (human and upload) available to do it.

What of the fact that uploads need much less to survive? Won't this mean that they will be willing to work for much less, and therefore drive their human competitors out of business? Well, we in the US already earn far more than we need to barely survive, which is very little. So in a sense we are already modeling the upload scenario with respect to the low requirement to survive. So it is not obvious that uploads will end up working for upload-subsistence-level.

As long as there is incentive to copy, people will copy, and supply will increase until wages go down to upload subsistence level in a wide variety of fields.

But demand will also increase. If you increase the total number of people, it's true that you are increasing the total number of potential producers (sellers), but you are also increasing the total number of consumers (buyers) by the exact same amount. You are simply expanding the population. And we have already seen the effects of this. The population of the US expanded enormously over the past 200 years, and wages have not gone down.

Am I forgetting the low subsistence level of uploads? Won't the heavy competition for jobs reduce salaries to subsistence level? Well, this is what might have been predicted for the same reason in the Luddite scenario, and it turned out not to be the case. Here is an alternative suggestion: if the number of workers increases, say, by a factor of 1000, so that the effective population balloons from 5 billion to 5 trillion, then the work will also increase by that same factor, so that the amount of work done by each person remains the same and the standard of living enjoyed by each person remains the same.

But let's assume that the following is true:

Since the subsistence level of uploads is much cheaper than that of natural humans, maintaining a biological body is going to be ridiculously expensive, and beyond the reach of most people. Unable to support themselves, many people die, leaving mostly the uploads.

Indulge me and let's suppose a billion (non-upload) humans agree to trade only with each other and not with uploads (don't worry, I know about the instability of such arrangements and I'll relax this restriction soon enough; I just want to set up the scenario). In that case, they can continue surviving with an economy just like the pre-upload economy, in which people were, after all, surviving and doing quite well. Now let's relax the restriction. The humans start trading freely with uploads. We are now in the "free trade" versus "protectionism" scenario, and economists have plenty to say about that, mostly in favor of free trade as being to the mutual benefit of both populations. Vladimir has repeatedly made the point that comparative advantage is not all it's cracked up to be - but neither is it nothing. It is probable that free trade will put some sectors of the human economy largely out of business (though couldn't the workers just move to a different sector?), but this was never denied by economists arguing for free trade: putting certain sectors largely out of business in one country is, after all, exactly what is entailed by that country's population focusing on areas where it has a comparative advantage. What free trade does not do is put the entire economy out of business with everybody starving to death.

By the way, if possible and if it is necessary to survive, I intend to become an upload. Here is my plan for increasing my own productive power beyond that of a single upload, thus increasing my standard of living. I duplicate myself many times, a thousandfold, but with constraints. The thousand copies will remain in existence for, say, an hour of subjective time, during which they will work, and then all will be deleted except for one randomly selected copy. This will not be much like death for the 999 copies; it will be much more like losing memories, memories which are likely to be lost anyway through normal forgetfulness. (See Derek Parfit, Reasons and Persons, Part 3, for a full discussion of personal identity with which I essentially agree.) If I repeat this a few times, I will build up an intuitive expectation of survival and a willingness to keep doing it. Moreover, it might be possible to merge some of my memories, minimizing the loss of significant memories.
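
A minimal sketch of this fork-and-prune scheme: the 1000 copies and the one subjective hour come from the plan above, the number of rounds is an arbitrary choice, and only the bookkeeping (not the uploading) is modeled.

    # Sketch of the fork-work-prune plan: fork N copies, let each work an
    # hour, keep one at random. Only the accounting is modeled here.
    import random

    COPIES = 1000
    ROUNDS = 10

    work_hours_done = 0
    lineage = []  # which copy's memories survive each round

    for _ in range(ROUNDS):
        work_hours_done += COPIES                 # each copy works one subjective hour
        lineage.append(random.randrange(COPIES))  # one random copy is kept

    remembered_hours = ROUNDS  # the surviving line remembers one hour per round
    print(f"{work_hours_done} hours of work, {remembered_hours} remembered")
    print(f"work per remembered hour: {work_hours_done / remembered_hours:.0f}x")
    print("surviving branch each round:", lineage)

The 1000x ratio of work done to memories retained is the point: almost all the labor is performed by branches that are then forgotten.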

I am not saying that Hanson is wrong. I am just pointing out areas of the argument which seem to me to be incomplete. By the way, it was my understanding that Hanson's dystopia is independent of whether we upload or not. Rather, it is the very nature of life to expand, expand, expand, until a Malthusian limit is reached. Now, that argument is quite a bit stronger. But I am here dealing specifically with the upload scenario.

Replies from: Vladimir_M, jimrandomh, Eneasz
comment by Vladimir_M · 2011-03-28T20:01:25.106Z · LW(p) · GW(p)

Constant:

This resembles the "Luddite fallacy", which was debunked by experience... Extending what actually happened in the Luddite scenario to the upload scenario, we might expect the amount of work to be done to expand to fully accommodate the number of people (human and upload) available to do it.

This is not a correct comparison. None of the technological advances in human history so far have produced machines capable of replacing human labor across the board at much lower cost. Uploads would be a totally unprecedented development in this regard.

The closest historical analogy is what happened to draft horses after motor transport was invented. The amount of work in pulling things has indeed expanded, but it is no longer possible for a draft horse to earn its subsistence, since machines can do the work orders of magnitude cheaper and better.

The economist Nick Rowe wrote an excellent analysis along these lines (see also the very good comment thread):
http://worthwhile.typepad.com/worthwhile_canadian_initi/2011/01/robots-slaves-horses-and-malthus.html

The population of the US expanded enormously over the past 200 years, and wages have not gone down.

That’s because the economic growth and technical progress have been too fast for the slow and fickle human reproduction to catch up. With uploads, in contrast, the population growth necessary to hit the Malthusian limit is possible practically instantaneously -- and there will be incentives in place to make it happen.

As for the remainder of your post, rather than criticizing your reasoning point by point, let me ask you: why didn’t the draft horses benefit from trading with motor transport then, but ended up in slaughterhouses instead? Your entire argument can be reworded as telling an American draft horse circa 1920 that he has no reason to fear displacement by motor vehicles. What is the essential difference when it comes to human labor versus uploads supposed to be?

Replies from: None
comment by [deleted] · 2011-03-28T21:33:32.755Z · LW(p) · GW(p)

None of the technological advances in human history so far have produced machines capable of replacing human labor across the board at much lower cost.

You seem to think that the Luddite fallacy depends on the possibility of substitution not being across the board. I've already answered a similar point by jimrandomh, but I will answer again. Suppose that we have a series of revolutions in one sector after another, in which labor-saving machines greatly increase the productivity of workers within that sector. So, what will happen? First, let's see what will happen in just one sector. According to Wikipedia, the main critique of Luddism is as follows:

The term "Luddite fallacy" has become a concept in neoclassical economics reflecting the belief that labour-saving technologies (i.e., technologies that increase output-per-worker) increase unemployment by reducing demand for labour. Neoclassical economists believe this argument is fallacious because they assert that instead of seeking to keep production constant by employing a smaller and more productive workforce, employers increase production while keeping workforce size constant.

So, what will happen according to this critique is that the workforce size remains constant within the sector that has been affected by the change. Hold your horses - I know what you're going to say, but one thing at a time. Recall, we are trying to predict what will happen if there is a series of tech revolutions in one sector after another eventually covering all sectors. The result, based on the Wikipedia quote, is that all sector employment will remain the same.

Now, what you were going to say is, I think, that when labor saving devices hit a sector, labor shifts to other sectors. Am I right? We have the example of agriculture which seems to show this happening. So we want to know, what happens if labor-saving devices hit all sectors? Let's say they hit them simultaneously. If the solution to the luddite nightmare was that labor would shift to another sector, then the solution here must be that labor would shift entirely out of the economy, there not being any other sectors to shift to - right? But that does not follow.

We can reproduce the same sector-shifting phenomenon with immigration. In the US, immigrants have taken over certain sectors of the economy in many regions. In my region, Spanish- and Portuguese-speaking Latin Americans have taken over many fast food kitchens. The result has been that Americans have shifted into other sectors.

But what if immigrants came to the US and entered into every sector simultaneously? Would they totally displace Americans completely out of the economy? No, they would not. They would simply expand the economy.

So there's a puzzle here. If immigrants enter one sector, American labor shifts away from the sector. But if immigrants enter all sectors, Americans stay in place. What explains this? What explains it is that the reason for the shift is relative wages. Americans shift to sectors where wages are higher. But if wages in all sectors remain the same (which they very well might in an economy which is simply expanding), then there won't be any shifting. But here's another objection: immigrants depress wages in the sectors they enter. Therefore if they entered all sectors simultaneously, they would depress all wages simultaneously, right? But that does not at all follow. Immigrants depress wages in the sector they enter, but they do so only by offering more value for money to the customer - in short, they depress wages in their sector only by boosting effective wages in other sectors. If they enter all sectors simultaneously, the depression and boosting could very well cancel out.

With uploads, in contrast, the population growth necessary to hit the Malthusian limit

Yes, the Malthusian limit. I specifically said this was a strong argument. The arguments that I answered were not Malthusian. It's not because of the Malthusian limit that uploads would supposedly replace humans, but because they are better and cheaper - which is not because of the Malthusian limit. I do have many thoughts on Hanson's Malthusian argument, but they have nothing specifically to do with uploads. I want to postpone discussion about the Malthusian limit for another time. Here I am specifically talking about uploads versus humans at a time before the Malthusian limit is reached.

why didn’t the draft horses benefit from trading with motor transport then, but ended up in slaughterhouses instead? Your entire argument can be reworded as telling an American draft horse circa 1920 that he has no reason to fear displacement by motor vehicles. What is the essential difference when it comes to human labor versus uploads supposed to be?

Every species has a natural niche in which the species is fully able to support itself, and the niche can only support so many members of that species. Draft horses greatly outnumbered their natural niche, but they did not outnumber their artificial niche - the niche created for them by humans who were supporting them in exchange for work. When humans ceased to support them, the draft horses, which greatly outnumbered their natural niche, died out. For all I know, some of today's wild horses are descendants of draft horses.

The niche of a species is roughly determined by the total amount of food that the species is able to produce or obtain. Draft horses obtained food from humans, and could obtain only a small fraction of that food by themselves. Humans do not outnumber their natural niche, because humans make enough food to support themselves.

In order for uploads to lead to a mass dying off of humans, uploads would have to massively reduce the total quantity of food that humans produce. This would require that the uploads take over the land. However, Hanson's upload scenario depends on uploads needing very few resources. Let's take this to the limit and suppose that uploads require zero resources and work for free, demanding absolutely nothing in return. This merely takes to an extreme the very factors that were used to argue that humans will starve to death in the upload scenario. Given uploads that use no resources and work for free, it is not obvious that uploads would take over any agricultural land. So the same amount of food can still be produced.

It had been imagined that uploads would replace doctors, academics, etc. But none of this reduces the amount of food available to humans. And actually it increases the amount of medicine, education, etc. available to humans.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-03-28T22:28:37.841Z · LW(p) · GW(p)

Constant,

What you are observing are the effects of relatively small rates of immigration, small enough that all kinds of complex and non-obvious effects are possible in a dynamic and diversified economy, especially since the skills profile of the immigrants is very different from the native population. However, if you kept adding an unlimited number of immigrants to a country at arbitrarily fast rates, including an unlimited number of immigrants skilled at each imaginable profession, the wages of all kinds of labor would indeed plummet. At some point, they would fall all the way down to subsistence, and if you kept adding extra people beyond that, they would fall even further and there would be mass famine.

Remember, we're not talking about a country that accepts an annual number of immigrants equal to 1%, or 5%, or even 10% or 20% of its population. We're talking about a magical world where the number of people can be increased by orders of magnitude overnight, with readily available skills in any work you can imagine. That is what uploads mean, and there's no way you can extrapolate the comparably infinitesimal trends from ordinary human societies to such extremes.

As for the issues of land, housing, and food production, that would also be fatal for humans. Uploads still require non-zero resources to subsist, and since the marginal cost of copying them is zero as long as there are resources available, they will be multiplied until they fill all the available resources. Now, a human requires a plot of land to produce his food and another plot of land for lodging (future technology may shrink the former drastically, but not to the level of an upload's requirements, and moreover the latter must remain substantial).

Unless the human owns enough land, he must pay the land rent to subsist (directly for the lodging land and through his food bills for the farming land). But the rent of land must be at least as high as the opportunity cost of forsaking the option to fill it up with a vast farm of slaving uploads and reap the profits, which will be many orders of magnitude above what a human can earn. It would be as if presently there existed a creature large enough to fill a whole state and requiring its entire agricultural output to subsist, but incapable of doing more productive work than a single human. How could such a creature support itself?
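
A back-of-envelope version of this rent argument, with every number invented purely for illustration:

    # Back-of-envelope version of the land-rent argument.
    # Every number here is an invented assumption, for illustration only.

    human_wage = 30_000.0          # annual earnings of one human ($)
    uploads_per_plot = 1_000_000   # uploads a server farm on one plot could host
    upload_output = 100.0          # annual output per upload ($)
    upload_running_cost = 90.0     # annual running cost per upload ($)

    # Rent can't fall below what the plot would earn as an upload farm:
    rent_floor = uploads_per_plot * (upload_output - upload_running_cost)
    print(f"floor on annual rent per plot: ${rent_floor:,.0f}")
    print(f"human can afford to live there? {human_wage >= rent_floor}")

With these made-up figures the rent floor is $10,000,000 a year per plot, and no human wage comes close; the conclusion is insensitive to the particular numbers so long as the upload farm out-earns a human by orders of magnitude.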

Replies from: None
comment by [deleted] · 2011-03-28T23:51:11.622Z · LW(p) · GW(p)

However, if you kept adding an unlimited number of immigrants to a country at arbitrarily fast rates, including an unlimited number of immigrants skilled at each imaginable profession, the wages of all kinds of labor would indeed plummet....

That is what uploads mean, and there's no way you can extrapolate the comparably infinitesimal trends from ordinary human societies to such extremes.

On the one hand you express near certainty about what would happen (wages "would indeed plummet"), and on the other hand you caution about extrapolating from the known to the unknown.

My position, as you will recall, is not that Hanson is wrong, but that his argument is incomplete. My position is skeptical, in the sense that I see important gaps in the argument (at least as reproduced here). You are defending Hanson's prediction - the prediction about which I am expressing skepticism. Warnings about extrapolating from the known to the unknown work in favor of skepticism about predictions and against confidence in predictions, and therefore they work in my favor.

Uploads still require non-zero resources to subsist

Indeed they do, but my point is that if you look at the two ends of this spectrum - one end at which they take up the same amount of resources as humans, and the other end in which they take up nothing, at both ends there is no clear reason to believe that humans will die off. Now, this does not necessarily mean that something funny won't happen in between, but it is very common that if A causes B, then more of A causes more of B; so the fact that taking A to an extreme does not obviously cause any more B should at least make a person who reasoned that A caused B start to suspect that maybe they missed something.

Imagining that the uploads take zero resources and charge zero for their services is unrealistic, granted - about as unrealistic as imagining that you are traveling along at the speed of light and trying to imagine what you observe. Unrealistic, yes, but not necessarily useless. It's inherently hard to think about most things, and so as an assist - a dangerous assist granted - it is useful to consider cases which are simpler to think about, as extremes often are.

You are tremendously confident in a certain prediction. I am not confident. I am objecting, pointing out why certain supposed extrapolations do not really follow, because the larger picture matters - the larger picture being what you call "all kinds of complex and non-obvious effects", which you continue to neglect, and which you argue do not matter if the increase is sufficiently fast - as if increasing the speed of the transition would somehow magically enhance the effects that you happen to have considered while negating the effects that I have pointed out. Which is not the case. If an upload replaces a human at some task because the upload does it better for less, then the customer immediately benefits. So the speed of that neglected effect (benefit to customer) is precisely as fast as the speed of the considered effect (harm to competitor). Speed up one by a million times, and the other also speeds up by a million times, because they are flip sides of precisely the same occurrence.

But the rent of land must be at least as high as the opportunity cost of filling it up with swarms of slaving uploads and reaping the profits, which will be many orders of magnitude above what a human can earn. It would be as if presently there existed a creature large enough to fill a whole state and requiring its entire agricultural output to subsist, but incapable of doing more productive work than a single human.

To say that one quantity would be much larger than another does not mean that the second quantity would be absolutely low. The first quantity could be absolutely very high.

We already have a kind of land use similar to what you are describing: skyscrapers. These allow an enormous number of people to occupy a minuscule square footage. So, where is the mass starvation? Do you think that the American economy would be enhanced by blowing up skyscrapers full of people? Or do you think that the American economy would be harmed? I think the latter.

But rent would definitely be lowered in NYC if all of its buildings were blown up. So, yeah, rent is high because of the high concentration of minds. But lowering the rent that way would not be a net benefit to humanity. I don't think we would be benefited by lowering rents in NYC by means of blowing up the buildings with the people in them. So, why would we necessarily be benefited by blowing up a square yard of land with trillions of minds on it? And if we would not be benefited by their destruction, then we would not be harmed by their introduction.

How could such a creature support itself?

That scenario imagines a creature with a certain absolute size and a certain absolute productivity. Given that absolute size and that absolute productivity, the creature cannot support itself. But given only that a human is much less productive than a trillion minds in a box, we cannot draw any conclusions about how well the human can support himself.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-03-29T00:17:25.156Z · LW(p) · GW(p)

On the one hand you express near certainty about what would happen (wages "would indeed plummet"), and on the other hand you caution about extrapolating from the known to the unknown.

I don't caution about extrapolating from the known to the unknown in this case -- on the contrary. The economic effects of the (relatively) low rates of migration and population growth in today's world are unclear, complicated, and controversial, since these phenomena are intertwined with many others of similar magnitudes. In contrast, the economic effects of the upload scenario (or the infinite immigration in the thought experiment) are much clearer, since these feature a few simple effects that are strong enough to dominate everything else.

[I]f you look at the two ends of this spectrum - one end at which [robots/uploads] take up the same amount of resources as humans, and the other end in which they take up nothing, at both ends there is no clear reason to believe that humans will die off.

In the first case, a straightforward analysis leads to the classic Malthusian scenario, i.e. both human and robot wages are reduced to the barest subsistence, which happens to be the same for both. (This assumes robots can be multiplied cheaply and rapidly.)

The second case is more interesting. Suppose we find a way to summon magical ghosts which will do any work doable by humans for nothing in return. Now all labor becomes free, like air. The cost of capital becomes equal to the cost of land (which in economic parlance also includes other natural resources) necessary to produce it -- once you have that, you can just summon the ghosts to make whatever you want out of it. Your fate as a human depends on whether you own enough land to enable you to subsist with the help of ghosts. If not, you're screwed, since you can't sell your labor to afford the land rent to lodge and feed yourself, and even the ghosts can't summon land out of thin air.

The realistic upload scenarios are somewhere between these two grim possibilities.

We already have a kind of land use similar to what you are describing: skyscrapers. These allow an enormous number of people to occupy a minuscule square footage. So, where is the mass starvation?

However, Manhattan is situated right next to a vast and much less densely populated continent from which it's cheap to bring stuff, so that food prices in Manhattan reflect the farming land rent in these neighboring places, not Manhattan itself. If the land rents in the whole world were as high as in Manhattan, you bet there would be mass starvation. (And with uploads, it's hyper-Manhattan everywhere.)

Replies from: None
comment by [deleted] · 2011-03-29T02:00:05.611Z · LW(p) · GW(p)

In contrast, the economic effects of the upload scenario (or the infinite immigration in the thought experiment) are much clearer, since these feature a few simple effects that are strong enough to dominate everything else.

The only effect that I discern that dominates everything else involves hitting the Malthusian limit - which I have already allowed is a strong argument (though not, I think, decisive - but I'm putting off that discussion for some other time). The other elements of the argument look to me like a question of ignoring the unseen, of assuming that the non-obvious is trivial.

In the first case, a straightforward analysis leads to the classic Malthusian scenario, i.e. both human and robot wages are reduced to the barest subsistence

Again my response to mention of the Malthusian scenario is that I want to put off discussion of that. However, the first case as I intended it (without cheap duplication) was essentially what we have now, which is not a Malthusian scenario. I assumed all costs are the same as for humans, including duplication. So, we would simply have two kinds of human, a flesh one and a silicon/metal one (say).

Cheap duplication is the key factor, not low resource use or high productivity, because duplication is, of course, the mechanism by which a population reaches the Malthusian limit. Low resource use and high productivity don't have any clear effect one way or another, because consider the following two scenarios:

1) You have a trillion minds in a cube (hence: low resource use and high productivity).

2) You have a cube-shaped portal to another world, and on that other world there are a trillion minds.

The scenarios are (from your point of view) effectively identical. But the second scenario is just the international trade scenario. Free trade is usually better, draft horses notwithstanding.

Suppose that I were living in Manhattan and there were no Japan in the world. Then one day, I find a box, and inside that box is Japan (in fact the box is a portal to Japan, which is on some other world). So now the population of Manhattan is half the (previous) population of the US, because of the people in the box. The net economic impact is positive, for the same reason that the impact of trade with Japan is positive. Rent goes up in Manhattan a bit, because of the advantage of being near the portal to Japan. But the higher rent is necessarily counterbalanced by the higher advantage of being near the portal, since that is the reason for the higher rent, so that the net effect is not obviously either positive or negative (I could argue that it is actually positive). Notice that at this point there is not necessarily any displacement of Americans out of the economy, even though there are 150 million minds in a box. Displacement doesn't even begin at this point, even though the population of the box is comparable to the population of the US (i.e. half).

If this is about right, then everything happens at or near the Malthusian limit. That's what we need to look out for. Not merely the existence of masses of uploads, so long as the Malthusian limit remains far.

The cost of capital becomes equal to the cost of land (which in economic parlance also includes other natural resources) necessary to produce it

Let's specify the scenario more explicitly. We assume the ghosts are completely friendly, innumerable, will do anything we want for free, but need powered bodies to do it (they have only the minutest ability to direct physical events). I think this comes closest to the upload scenario. (If we assume the ghosts have significant psychokinetic power then the scenario is I think very different from the upload scenario). The ghosts are essentially Indian subcontractors, only much cheaper (free) and much more numerous (infinite).

Immediately I think we can see that there are fairly severe bottlenecks on the ability of uploads, sorry, ghosts to direct significant physical activity. There may be infinitely many ghosts, but there are at any given time only so many powered bodies for them to direct. Alongside these powered bodies directed by external ghosts, there are powered bodies directed by internal ghosts - namely, human bodies, which have their own ghosts. There is no upper limit on the mental work that ghosts can do, but there is a severe limit on the physical work that ghosts can do no matter how many ghosts there are. So we would have an economy which was essentially all a mind-work economy, with only a minuscule fraction (zero percent, considering the infinity of minds) of the population (human or ghost) doing any physical work.

Anyway, for there to be any Malthusian result, it seems to me that it would have to involve competition for resources between human bodies and robot bodies, not between humans and ghosts directly. But I wanted to discuss events prior to hitting any Malthusian limit.

So, all a person has to do to get a ghost to help him is to build a robot with a ghost interface and supply the robot with energy. One more specification - we suppose that ghosts will help whoever owns the bodies (that easily takes care of the decision about who they help).

In principle, once a person owns a number of ghost-directed robot bodies, the bodies can do all the work required to keep themselves (and him) alive and might furthermore be able to increase their own number (by buying raw materials on the market and constructing another body, which can then be inhabited by a new ghost).

For a long time, until the Malthusian limit is reached, it's not obvious that this would significantly affect the employability of humans. Some humans would create robots and get their robots to work for them, but not all humans would have robots, and those humans would have to trade with each other as usual. And even the humans with robots would have to trade with the wider economy to get raw materials (just as slave plantation owners did), and therefore probably trade with robotless humans. After a long time, a very long time, a Malthusian limit might be reached, but the unemployability of people before that happens seems to me to be greatly exaggerated.

However, Manhattan is situated right next to a vast and much less densely populated continent from which it's cheap to bring stuff, so that food prices in Manhattan reflect the farming land rent in these neighboring places, not Manhattan itself. If the land rents in the whole world were as high as in Manhattan, you bet there would be mass starvation.

You are again assuming that the Malthusian limit is already reached. You have relied over and over on the Malthusian argument, which in my original comment - the one that you objected to - I had already acknowledged as strong (and as not specific to uploads), and I had already said that I was not critiquing it (yet).

Initially, long before the Malthusian limit is reached, it makes sense to situate the uploads in a highly populated area, like Manhattan (a cousin of mine explained that companies are buying warehouses near Wall Street and filling them with computers, because light speed is a limiting factor; it's no good to have the trading computers situated far from Wall Street). And the effect of placing the uploads in Manhattan should be much like the effect of turning a city into an international port - which raises the local rents high only to the extent that it is made more worthwhile to be close to the port, so that the net effect of the raised rents is not obviously negative (in fact I would argue positive). Far from Manhattan rents would not be much affected, and meanwhile people would benefit to some degree, just as they would increased trade from a port.

It would be a long time before the whole world turned into one large city.

Adding one new upload box is a bit like adding one new port. Imagine that every day somebody opens a new port to a new Japan on a new planet. What's the effect? Well, suppose that there are already five ports open to five Japans within a ten-mile radius, and somebody opens a new port to a new Japan right next door. You ask me, this has the aroma of diminishing marginal returns about it. The port owner tries to profit from trade to a Japan via his port, but the nearness of the other ports (and the existence of hundreds or thousands of ports further away) means that he can't charge monopolistic prices. The amount of profit that a person can make from his port to a new Japan rapidly approaches the cost of setting up the port, possibly long before the countryside is completely covered with ports to Japans, and beyond that point there is no net profit to building yet another port to yet another Japan. Since that happens long before ports to Japans completely cover the landscape, there is still much land left over for people to live on.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-03-29T02:47:16.486Z · LW(p) · GW(p)

Cheap duplication is the key factor, not low resource use or high productivity, because duplication is, of course, the mechanism by which a population reaches the Malthusian limit.

Low resource use is by itself not a problem. If suddenly half of humanity gained the magic ability to subsist on far fewer resources, that wouldn't cause wages to drop, ceteris paribus. In principle, it wouldn't even have to have any visible consequences, at least in places where everyone's labor can earn wages well above subsistence, so there's no need to ever test the limit.

High productivity is a mixed bag. If suddenly half of humanity magically became much more productive, it would benefit the rest by making some things cheaper (basically all stuff that can be mass-produced), but it would also hurt them by bidding up the price of zero-sum goods (most notably status and land). The net effect would depend on the concrete scenario.

Cheap duplication is an express ticket to a Malthusian equilibrium. Now, the point is that in the Malthusian equilibrium, you are definitely worse off if there is other labor that is far more productive and/or capable of subsisting on less resources, because this will push your wage below your subsistence. This is why the ordinary human Malthusian situation means dire but (usually) survivable poverty, but in the robot/upload Malthusian situation humans are kaput.

Suppose that I were living in Manhattan and there were no Japan in the world. Then one day, I find a box, and inside that box is Japan (in fact the box is a portal to Japan which is on some other world). [...]

The effect of the box depends on how much you have to pay the minds in the box for their services. The problem in your example is that it fails to distinguish clearly between two scenarios:

  • The box is a portal to another rich country with its own rich endowment of land and capital and accordingly high wages, so you have to trade expensively for the labor of these folks. This won't (in general) drop the wages on the U.S. side.

  • The box contains millions of uploads willing to work for their subsistence wage of a few cents a year. In this case, the U.S. wages of people competing with them will drop significantly, and if the number of uploads is large enough, the wages will plummet asymptotically down to the upload subsistence level.

The problem with your subsequent "port to Japan" analogy is similar. If Japan is in the business of selling dirt-cheap labor that directly competes with yours, then this is certainly very bad news for you if you sell labor for a living. If it's a high-wage country in its own right, everything is great.

Regarding the ghosts, I should have been more precise about my assumptions, which were that ghosts can do any intellectual or physical labor that humans do nowadays, but they can't conjure land and resources out of nothing. So you're screwed if you don't own enough land that you can make the ghosts eke out sufficient food and lodging out of it, because your labor is worth zero, and even capital is worth only as much as the land rent opportunity cost that goes into making it.

This is very different from the upload scenario only if you assume that as the price of mental labor falls to near-zero, the price of physical labor remains high because machines adequate to replace human labor are expensive. This however seems very unlikely to me -- what are these tasks that couldn't be cheaply automated once uploads are available to control the machinery?

But I wanted to discuss events prior to hitting any Malthusian limit.

The whole point is that with uploads the Malthusian limit (and that's the nasty upload-subsistence one) is reached in the blink of an eye.

Replies from: None
comment by [deleted] · 2011-03-29T02:55:52.005Z · LW(p) · GW(p)

The whole point is that with uploads the Malthusian limit (and that's the nasty upload-subsistence one) is reached in the blink of an eye.

Since this is the whole point, then rather than prolong the rest of the exchange I will eventually consider the Malthusian limit, whether there is any defense against it and if so what it is, and what it would really come to. However, later. Maybe much later.

comment by jimrandomh · 2011-03-28T19:14:51.305Z · LW(p) · GW(p)

Basically, the argument goes that uploads will be able to replace large swathes of normal humans who work on cognitive tasks, such as lawyers, engineers, writers, academics, programmers, etc.

This resembles the "Luddite fallacy", which was debunked by experience. That is to say: had the Luddites been right that the majority of the workforce would be replaced by a much more productive minority working the labor-saving machines (compare: humans replaced by much more productive uploads), we would already be living in something like a Hansonian upload dystopia, which we are not.

No, these are not the same. Labor-saving machines replace humans with a smaller number of humans, at some ratio. If the economy grows by that same ratio, then the demand for humans is back where it started. Labor-saving machines also apply only to some domains but not others, so the tasks which machines can't do become limiting and expand. But with uploads, there is no such ratio; no operators are needed, and there are no domains of things uploads can't do. So growing the economy further does not make humans valuable again.

Replies from: None
comment by [deleted] · 2011-03-28T19:44:39.866Z · LW(p) · GW(p)

No, these are not the same.

It does not matter whether they are the same or not - what matters is whether the differences change the conclusion.

Labor-saving machines replace humans with a smaller number of humans, at some ratio. If the economy grows by that same ratio, then the demand for humans is back where it started.

Why does this change the conclusion? You need to explain how this makes the conclusion any different. You have an "if" there. I can propose an equivalent "if" in my scenario - and I did, which is that the total economy, both supply and demand, expands along with the population (where population = human + upload). Remember, the uploads are potential demanders, not just suppliers.

Labor-saving machines also apply only to some domains but not others, so the tasks which machines can't do become limiting and expand. But with uploads, there is no such ratio; no operators are needed, and there are no domains of things uploads can't do. So growing the economy further does not make humans valuable again.

Again, why does this change the conclusion? You say that there are no domains of things that uploads can't do. But you could say the same thing of new babies. There are no things that new babies can't do. So, suppose that our population is increased a hundredfold by new babies. Now we have 99 new-population for each old-population. And they can do everything the old-population can do. So, is the old-population no longer valuable? No - the economy simply expands a hundredfold. The 99 new-population compete for the exact same jobs that the old-population were doing, sure, but they are also 99 new customers. The new people buying stuff exactly matches the new people selling stuff. So the old-population remains in business.

comment by Eneasz · 2011-03-28T20:24:28.757Z · LW(p) · GW(p)

The population of the US expanded enormously over the past 200 years, and wages have not gone down.

Real wages for most Americans have gone down over the past few decades.

Admittedly this is speculated to be due to returning to a more "natural" two-class society rather than population changes.

Replies from: None
comment by [deleted] · 2011-03-28T21:42:14.269Z · LW(p) · GW(p)

There is no way for me to answer that without getting deeply into politics.

comment by jimrandomh · 2011-03-28T02:22:33.042Z · LW(p) · GW(p)

If you take a sane and trustworthy human brain, and damage it by randomly perturbing its chemistry or adding a tumor, you sometimes end up with a dangerously insane human. Uploading is a bigger change than that.

Replies from: DanielLC
comment by DanielLC · 2011-03-29T17:05:09.482Z · LW(p) · GW(p)

True, but an uploaded human won't go foom on its own. It won't be trivial to fix him or her, but you have plenty of time, and tools far more powerful than any biological drugs.

Replies from: jimrandomh
comment by jimrandomh · 2011-03-29T17:19:18.487Z · LW(p) · GW(p)

Are you sure about that? Uploaded humans get the power to copy and modify themselves, which can probably be leveraged into fooming somehow.

Replies from: DanielLC
comment by DanielLC · 2011-04-11T01:52:25.981Z · LW(p) · GW(p)

Uploaded humans get the power to copy and modify themselves

They can be given those powers. They don't inherently have them. Just because they're running on a computer doesn't mean that they have a terminal.

comment by Kaj_Sotala · 2011-03-28T08:32:38.700Z · LW(p) · GW(p)

Right now, the problem is that UFAI seems easier to program than FAI, so people will probably stumble upon UFAI first.

Create a considerable number of uploads, and what changes? Not much. UFAI is still easier to program than FAI; you've just increased the speed at which this may happen. Yes, this might eliminate part of the subjective speed advantage of any AIs. But it would still leave open the possibility of e.g. algorithmic enhancements leading to an increased subjective speed. And you've given the AIs a rich virtual world in which they could exist even better than on the Internet, full of human brains that can be hacked into directly.

Now certainly, if we could pick the people who became uploads, and police them to make sure they were only developing FAI... but this seems to require that the decision-makers across the world were convinced of an UFAI-triggered Singularity being a Serious Bad Thing. They'd also need to be convinced of it pretty quickly, and to take all the appropriate safeguards to limit the potential damage the uploads of their country could do. They'd need to do this at a time when there might be an upload arms race going on between various countries. All of this seems to me rather unlikely, and it seems more likely that such a scenario would only bring UFAI closer.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-03-28T15:34:12.647Z · LW(p) · GW(p)

Right now, the problem is that UFAI seems easier to program than FAI, so people will probably stumble upon UFAI first. Create a considerable number of uploads, and what changes? Not much.

Well, that's part of the problem. Another part is that many people -- including AI researchers -- don't take the threat of UFAI seriously.

After all, there are plenty of situations where the dangerous thing is easier than the safe thing, and where we still manage to some degree or another to enforce not doing the dangerous thing. It's just that most of those cases are for dangers we are scared of. (Typically more scared than is justified, which has its own problems.)

And in that context, it's perhaps worthwhile to think about how uploads might systematically differ from their pre-upload selves. It's not clear to me that we'd evaluate risks the same way, so it's not clear to me that "not much" would change.

For example, even assuming a perfect upload (1), I would expect the experience of being uploaded to radically alter the degree to which I expect everything to go on being like it has been for the last couple of thousand years.

Which might lead an uploaded human to apply a different reference class from the same human pre-upload to calculating their priors for UFAI being an existential threat, since the "well, computer science has never been an existential threat before, so it probably isn't one now either" prior won't apply as strongly.

Then again, it might not.

===

(1) All this being said, I mostly think the whole idea of a "perfect upload" is an untenable oversimplification. In reality I expect there will be huge differences between pre-upload and post-upload personalities, and that this will cause a lot of metaphysical hand-wringing about whether we've "really preserved identity," and ultimately we will just come to accept uploading as one of those events that changes the way people think (much like we do now about, say, suddenly becoming wealthy, or being abused by a trusted authority figure, or any number of other important life events), without being especially troubled by the implications of that for individual identity.

comment by Manfred · 2011-03-27T23:18:09.346Z · LW(p) · GW(p)

In order to run a brain without first understanding AI, you have to simulate the brain as a physical object.

This is difficult; the brain is complicated (example). Currently the IBM SyNAPSE project can simulate a lot of nodes in a network that behave sort of like neurons (still 5 orders of magnitude away from even running a realtime network with as many "neurons" as a human brain, never mind one as complex), but if these tricky physical interactions have to be simulated as well, the problem grows combinatorially. So what looks like 5 orders of magnitude is more like... lots. We can probably take some shortcuts, but even subtle changes to the brain can produce things like schizophrenia, so I'm reluctant to estimate. In the hardest case, where you have to treat each neuron as affecting every other neuron via electric fields... what's the factorial of (# of neurons in the human brain)?

Replies from: JoshuaZ, paulfchristiano, XiXiDu
comment by JoshuaZ · 2011-03-28T00:12:37.579Z · LW(p) · GW(p)

what's the factorial of (# of neurons in the human brain)?

You have on the order of 10^11 neurons. We can use Stirling's formula, which is a good, quick approximation for n!, to get around 10^(11*10^11 - 4*10^10 + 6)
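
For anyone who wants to check that figure, here's a minimal Python sketch (assuming a round 10^11 neurons) that computes the same quantity via the log-gamma function rather than Stirling's formula:

```python
# Sanity check of the Stirling estimate above, assuming n = 10^11 neurons.
# math.lgamma(n + 1) returns ln(n!), so dividing by ln(10) gives log10(n!).
import math

n = 1e11  # order-of-magnitude human neuron count (an assumption)
log10_factorial = math.lgamma(n + 1) / math.log(10)
print(f"log10(n!) is about {log10_factorial:.4e}")
# Prints ~1.0566e+12, i.e. (10^11)! is roughly 10^(1.06 * 10^12),
# matching the 11*10^11 - 4*10^10 + 6 exponent above.
```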

Note that there's growing evidence that glial cells play a role in neural interaction. Thus, a total for all neurons is not necessarily an upper bound. However, at the same time, it seems that electric field interactions aren't that important (humans react pretty normally in the presence of strong electromagnetic fields, unless the fields are of specific types, so the interaction can't be that sensitive). Moreover, we know that killing a few neurons doesn't drastically alter personality, which is a strong argument against such complicated interactions.

Replies from: RobinZ
comment by RobinZ · 2011-03-28T03:18:16.443Z · LW(p) · GW(p)

Not a strong argument - it is known that the brain has a fair bit of redundancy, as demonstrated by the way parts of a damaged brain can be trained to perform tasks that those parts do not handle in healthy brains.

comment by paulfchristiano · 2011-03-27T23:50:55.601Z · LW(p) · GW(p)

This is difficult

So is AI. If I had to bet, I would give very good odds (70%? Incredibly arbitrary guess) for the hypothesis: "Understanding how a brain works well enough to build something with basically the same behavior is easier (society will do it first) than designing a completely foreign AI."

Notice, for example, that if our current understanding of physics is correct, the amount of time needed to simulate a brain is probably (# of neurons in the brain) * (time required to simulate a neuron in sufficient detail). Nature never deals with complexities like (# of neurons in the brain)!.
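
As a rough illustration of what that linear estimate means in practice, here is a sketch with made-up placeholder numbers (both constants are assumptions, not measurements):

```python
# Linear scaling: total cost ~ (# neurons) x (cost per neuron per timestep).
# Both constants below are illustrative assumptions, not measured figures.
N_NEURONS = 1e11                # order-of-magnitude human neuron count
SECONDS_PER_NEURON_STEP = 1e-6  # assumed wall-clock cost per neuron per step

seconds_per_step = N_NEURONS * SECONDS_PER_NEURON_STEP
print(f"~{seconds_per_step:.0e} seconds per simulated timestep")
# ~1e5 seconds (about a day) per timestep on a single machine under these
# assumptions: slow, but it grows linearly with n rather than like n!.
```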

Replies from: Manfred
comment by Manfred · 2011-03-28T00:54:38.280Z · LW(p) · GW(p)

Note that I went on to talk about how difficult it would be, which with a Moore's law progression of computing power gives a timescale of a century to millennia, using our current simulations as a yardstick.

I don't think "nature never deals with exponential complexities" is a good enough reason to expect we won't see them in simulating the brain. It's a bit dubious to start with (linear complexity isn't true of planets, so why should it be true of neurons?), and porting the brain to the von Neumann architecture can introduce plenty of things nature never intended. Obviously the timescale cuts off when we have nano-scale engineering good enough to build a brain and not have to port it anywhere, but given the requirements for that, I don't think it will change the probable lower bound of centuries.
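
To make the Moore's-law arithmetic concrete, here's a back-of-the-envelope sketch; the 18-month doubling time and the gap sizes are assumptions, and nothing guarantees the trend holds over these horizons:

```python
# Years for compute to grow by a given number of orders of magnitude,
# assuming a constant 18-month doubling time (an assumption, not a law).
import math

DOUBLING_TIME_YEARS = 1.5

def years_to_close(orders_of_magnitude: float) -> float:
    """Years until available compute has grown by 10^orders_of_magnitude."""
    return orders_of_magnitude * math.log2(10) * DOUBLING_TIME_YEARS

for gap in (5, 20, 100):
    print(f"{gap:3d} orders of magnitude: ~{years_to_close(gap):.0f} years")
# 5 -> ~25 years; 20 -> ~100 years; 100 -> ~500 years. A raw hardware gap
# of 5 orders closes quickly; the "century to millennia" range corresponds
# to the much larger gaps implied by detailed physical simulation.
```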

Replies from: DanielLC
comment by DanielLC · 2011-03-29T17:16:01.581Z · LW(p) · GW(p)

which with a Moore's law progression of computing power gives a timescale of a century to millennia

Are you saying Moore's law will keep working for centuries or millennia? You can only make transistors so small.

Also, the capital cost has been increasing exponentially.

Replies from: Manfred
comment by Manfred · 2011-03-29T17:39:10.161Z · LW(p) · GW(p)

Definitely not, but it's reasonable in the near future and probably an upper bound in the farther future.

comment by XiXiDu · 2011-03-28T14:53:01.860Z · LW(p) · GW(p)

In order to run a brain without first understanding AI, you have to simulate the brain as a physical object.

What reasons are there to believe that we can understand intelligence without understanding the brain first? AIXI is to narrow AI as a universal Turing machine is to a modern Intel chip. To produce a modern Intel CPU you need a US$2.5 billion chip factory. To produce something like IBM Watson you need a company with a revenue of US$99.87 billion and 426,751 employees to support it. What reasons do you have to believe that developing artificial general intelligence capable of explosive recursive self-improvement will take orders of magnitude fewer resources than figuring out how the brain works? After all, the human brain is the only example of an efficient general intelligence that we have.

Replies from: Manfred, timtyler
comment by Manfred · 2011-03-28T18:33:46.270Z · LW(p) · GW(p)

Because there aren't any indications that general intelligence is so narrow a category that we have to copy the brain, so the question is "which is faster - normal AI research starting now, or modeling the brain starting later?" Once the brain is understood to some high degree, you get a cheat sheet for most of the decisions of normal AI research when basing an intelligence off it, but you still have to implement it computationally, which will be harder than normal AI research. So I think there's a good chance, though I'm not certain, that normal AI research will be able to make good on its head start and create a self-improving AI first. Both will be faster than simulating a specific human brain, which is what I said would take orders of magnitude more resources.

Replies from: torekp, timtyler
comment by torekp · 2011-04-03T14:00:21.641Z · LW(p) · GW(p)

Another consideration favoring normal AI over whole brain emulation is that evolution finds local optima. It may be possible to exceed the brain's effectiveness or efficiency at some intellectual tasks by using a radically different architecture.

comment by timtyler · 2011-03-28T21:42:41.663Z · LW(p) · GW(p)

Yes, that is about the correct answer to this question. We can see that emulations of scanned brains won't come first, since they require more advanced technology and understanding to develop. It's the same situation as with scanning birds, broadly speaking.

comment by timtyler · 2011-03-28T21:44:28.580Z · LW(p) · GW(p)

AIXI is to narrow AI as a universal Turing machine is to a modern Intel chip.

I am not sure what you were going for here - but FWIW, AIXI is pretty general.

comment by Pavitra · 2011-03-27T23:16:18.792Z · LW(p) · GW(p)

Given the history of sociopathic humans, it seems to me that unfriendly upload self-modifications are significantly more likely than unfriendly AGI to produce the kind of dystopia that only results from an almost-Friendly takeover.

It also seems likely that even a rogue upload would still be at significant risk of being eaten by a proper AGI. Modifying a human in such a way as to become a full-strength singularity seems equivalent to the problem of building a singularity AGI from scratch, with an additional requirement to understand a certain amount about the human brain and mind.

Replies from: paulfchristiano
comment by paulfchristiano · 2011-03-27T23:39:01.342Z · LW(p) · GW(p)

Modifying a human in such a way as to become a full-strength singularity seems equivalent to the problem of building a singularity AGI from scratch

The question is whether a human solves that problem or leaves it to an upload (or it never gets solved).

Replies from: Pavitra
comment by Pavitra · 2011-03-28T00:46:02.477Z · LW(p) · GW(p)

True. And, based on no math whatsoever, I would guess that we're more likely to get FAI if we make uploads than if we don't make uploads.

comment by XiXiDu · 2011-03-28T13:54:49.592Z · LW(p) · GW(p)

If I can find humans who are properly motivated, then I can produce uploads who are also motivated to work on the design of FAI.

It might be much easier to clone Yudkowsky a hundred times within the next 10 years, make them all read the sequences at some point, and have each one focus on a different FAI-related problem. By ~2040 we could have a hundred Yudkowskys working on FAI.

Why that route might be better than uploading:

  • It is feasible with current technology.
  • We already know that Yudkowsky is friendly and works.
  • Cloning humans poses no direct existential risks (at most indirect ones).
Replies from: timtyler, Vaniver, atucker, timtyler
comment by timtyler · 2011-03-29T00:22:02.803Z · LW(p) · GW(p)

We already know that Yudkowsky is friendly and works.

Election promises.

comment by Vaniver · 2011-03-28T23:22:26.945Z · LW(p) · GW(p)

We already know that Yudkowsky is friendly and works.

Provably friendly?

comment by atucker · 2011-03-31T13:21:29.952Z · LW(p) · GW(p)

Overall I like this idea -- it's at the very least amusing, and it's much harder to think of ways in which it's dangerous.

We already know that Yudkowsky is friendly and works.

We know that the Yudkowsky who experienced the life he actually went through is friendly; I'm not so sure about 100 Yudkowskys raised differently. Never having met him, I won't assume that there's no possible way he could go wrong.

comment by timtyler · 2011-03-29T00:21:24.922Z · LW(p) · GW(p)

By ~2040 we could have a hundred Yudkowskys working on FAI.

Probably too late, IMO. Eyeballing my graph, I have maybe 15% probability mass that far out.

comment by Nick_Tarleton · 2011-03-28T01:17:48.396Z · LW(p) · GW(p)

I have encountered the argument that safe brain uploads are as hard as friendly AI. In particular, this is offered as justification for focusing on the development of FAI rather than spending energy trying to make sure WBE (or an alternative based on stronger understanding of the brain) comes first. I don't yet understand/believe these arguments.

I don't believe the literal "as hard as" claim, or have the impression that such a strong claim is common.

I have not seen a careful discussion of these issues anywhere, although I suspect plenty have occurred. My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?

The latter is usually pursued without safety in mind at all; supporting that seems like a bad idea. As far as I know, only SIAI and the Future of Humanity Institute have studied safe uploading, and no one is both doing actual neuroscience and taking safety into account.

comment by endoself · 2011-03-28T02:06:36.653Z · LW(p) · GW(p)

I have encountered the argument that safe brain uploads are as hard as friendly AI.

Where? Does it address the possibility of brain uploads or their safety? The former could be about as hard, but the latter seems much easier.

comment by XiXiDu · 2011-03-28T14:04:56.466Z · LW(p) · GW(p)

There is another possibility. Why not work on brain implants and BCIs (brain–computer interfaces)? These could be used to augment and add various capabilities, and should be much easier to achieve than uploading. People are already working on it:

An iPlant is a brain implant that is technically no different from today's deep brain stimulation implants, but which has not yet been developed for human use. Fully implemented, the implant would electronically regulate monoamines and the reward system in the brain, thus giving its user increased control over his or her motivation, mood, learning and creativity.

Replies from: timtyler
comment by timtyler · 2011-03-28T21:49:16.702Z · LW(p) · GW(p)

I think the short answer is that implants make little sense in the short term. There's a section on this in Redesigning Humans, and I cover it in Against cyborgs. Around the same time this kind of thing becomes plausible, the human brain's sell-by date comes along. Fyborgs, rather than cyborgs.

comment by Kaj_Sotala · 2011-03-28T08:35:22.591Z · LW(p) · GW(p)

There are three factors suggesting the safety of an upload-initiated singularity. First, uploads always run as fast as the available computing substrate. It is less likely for an upload to accidentally stumble upon (rather than design) AI, because computers never get subjectively faster. Second, there is hope of controlling the nature of uploads; if rational, intelligent uploads can be responsible for most upload output, then we should expect the probability of a friendly singularity to be correspondingly higher.

Was the "three" a typo, or did you forget to write out the third?

comment by timtyler · 2011-03-29T00:29:01.122Z · LW(p) · GW(p)

Is this the crux of the argument against uploads?

I am sceptical about the supposed safety virtues of uploads, but IMO, the most obvious case against uploads is that they are an irrelevance: they will come too late to make any difference.