Utilitarianism: WBE (uploading) > FAI

post by Curiouskid · 2011-12-05T23:41:44.722Z · LW · GW · Legacy · 43 comments

If you were a utilitarian, then why would you want to risk creating an AGI that had the potential to be an existential risk, when you could eliminate all suffering with the advent of WBE (whole brain emulation) and hence virtual reality (or digital alteration of your source code) and hence utopia? Wouldn't you want to try to prevent AI research and just promote WBE research? Or is it that AGI is more likely to come before WBE and so we should focus our efforts on making sure that the AGI is friendly? Or maybe uploading isn't possible for technological or philosophical reasons (substrate dependence)? 
Is there a link to a discussion on this that I'm missing out on?

43 comments

Comments sorted by top scores.

comment by Vladimir_M · 2011-12-06T01:19:34.865Z · LW(p) · GW(p)

You say (emphasis mine):

If you were a utilitarian, then why would you want to risk creating an AGI that had the potential to be an existential risk, when you could eliminate all suffering with the advent of WBE (whole brain emulation) and hence virtual reality (or digital alteration of your source code) and hence utopia?

That's an enormous non sequitur. The resources necessary for maintaining a utopian virtual reality for a WBE may indeed be infinitesimal compared to those necessary for keeping a human happy. However, the ease of multiplying WBEs is so great that it would rapidly lead to a Malthusian equilibrium, no matter how small the cost of subsistence per WBE might be.

For an in-depth treatment of this issue, see Robin Hanson's writings on the economics of WBEs. (Just google for "uploads" and "ems" in the archives of Overcoming Bias and Hanson's academic website.)

Replies from: Curiouskid
comment by Curiouskid · 2011-12-06T02:00:06.598Z · LW(p) · GW(p)

I'll look into it. What is the motivation for these uploads to multiply? I can understand the human desire to do so. But even if uploads cannot directly change their source code, it seems pretty likely that they could change their utility function to something that is a little more logical (utilitarian). If they don't have the desire to copy themselves indefinitely (something which humans basically have due to our evolutionary history), doesn't this lower the probability of a population explosion of uploads?

Replies from: Vladimir_M
comment by Vladimir_M · 2011-12-06T02:37:51.061Z · LW(p) · GW(p)

Clearly, there's the immediate incentive to multiply uploads as cheap labor. Then there's the fact that in the long run (possibly not even that long by our present standards), sheer natural selection will favor philoprogenitive inclinations, until it hits the Malthusian wall.

Replies from: Curiouskid
comment by Curiouskid · 2011-12-06T02:46:07.021Z · LW(p) · GW(p)

Cheap labor towards what end? Has the motivation of future uploads been addressed by Hanson? I think the true rejection is the fact that there's an evolutionary advantage to mass replication. If there's ever a scarcity of resources, there could be a war or something, and the side with fewer but smarter uploads would win and take the computing power from the mass replicators.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-12-06T03:00:55.624Z · LW(p) · GW(p)

Cheap labor towards what end? Has the motivation of future uploads been addressed by Hanson?

He foresees a continuation of the property-based economy. (Which, if we discard naive and vague utopian thinking, is in fact an optimistic assumption given the realistic alternatives.)

Whether the uploads will themselves be property (i.e. slaves) or freely competing on the labor market, the obvious incentives will lead to them being multiplied until the marginal product of an upload is equal to the cost of the server space it occupies. (Which is to say, bare subsistence, thus leading to a Malthusian equilibrium.)
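
A toy numerical sketch of that equilibrium (every figure here is an arbitrary assumption, chosen only for illustration): copies keep being added while one more upload earns more than the server space it occupies costs, and the population settles where the two are equal.

```python
# Toy Malthusian-equilibrium sketch. All figures are made-up assumptions,
# chosen purely to illustrate the argument, not estimates of anything real.

def marginal_product(n, base=10.0, crowding=0.01):
    """Hypothetical output of one additional upload when n already exist;
    it declines as the population grows and work gets crowded."""
    return base / (1.0 + crowding * n)

SERVER_COST = 0.05  # assumed cost of the server space one upload occupies

n = 0
while marginal_product(n) > SERVER_COST:  # copying still pays for itself
    n += 1

print(f"Copying stops at roughly n = {n:,} uploads")
print(f"Marginal product there = {marginal_product(n):.3f}, i.e. bare subsistence")
```

The exact numbers don't matter; any declining marginal product combined with cheap copying pins the equilibrium at subsistence.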

I think the true rejection is the fact that there's an evolutionary advantage to mass replication. If there's ever a scarcity of resources, there could be a war or something, and the side with fewer but smarter uploads would win and take the computing power from the mass replicators.

This sounds like an arbitrary story. What basis do you have for assuming such things would happen? And how would the community of "fewer but smarter" uploads avoid falling into its own tragedy of the commons where a subset of them defects by reproducing?

Replies from: Curiouskid
comment by Curiouskid · 2011-12-06T21:34:40.491Z · LW(p) · GW(p)

If it sounds arbitrary, it's because I'm just speculating (which it seems most predictions about the FAR future are anyway). Not to say that all speculations are of equal merit.

My point isn't necessarily about a war that could happen between uploads. I don't assume that it will happen; just that it's a possibility.

In evolution, it's not the species that reproduces the most that survives. It's the species with the most surviving members. Computational capacity concentrated in the hands of the few would enable greater coordination if it ever came to a fight over computational substrate. Although I don't see what the motivation for this sort of war would be.

With uploads being nearly immortal, incredibly intelligent compared to modern humans, and able to digitally alter their hedonic set-points, I don't see how anything even moderately similar to modern society could persist after the invention of WBE. Why would an upload want to own a home or fight over property? They can just alter their hedonic set-point. Why would a WBE want to go to war? They aren't going to gain any more happiness; they would just be needlessly endangering their lives. Why would they want to "work" hard? They don't really need to pay for dentists, food, college, homes. What sort of work is being done in this economy? What are the motivations of the uploads to do this work? What are the motivations of the "employers" (if there still are employers)? The fact that you could alter your set-point, learn at an exponential rate, and live for what seems like an eternity makes me think that uploads will be nothing like humans today. But this is just my speculation, and I'm open to other ones.

comment by Vladimir_Nesov · 2011-12-06T08:08:07.335Z · LW(p) · GW(p)

If you were a utilitarian, then why would you want to risk creating an AGI that had the potential to be an existential risk, when you could eliminate all suffering with the advent of WBE...

You are not the only agent in the world, so these considerations have no impact. Others will work towards creating UFAIs out of ignorance, and as WBEs they will be doing the same, only faster.

Replies from: Curiouskid
comment by Curiouskid · 2011-12-06T21:08:04.953Z · LW(p) · GW(p)

That's why I wrote this:

Or is it that AGI is more likely to come before WBE and so we should focus our efforts on making sure that the AGI is friendly?

Isn't it strange that the people smart enough to build AGI would be stupid enough not to even TRY to make sure it's friendly? I'm not saying that strange things don't happen, but to me it seems like they would have a lower probability. I mean, one of the goals of SIAI is to make sure that anybody who would have the means to create AGI would know about the risks involved. But who are the people who are likely to develop AGI? That seems to be a key question that I haven't seen discussed in very many places. If we can identify those people, then SIAI could make sure that they are informed about the risks. Also, if we can identify who those people are, we could try to tell them to hold off until WBE comes along.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-06T21:23:44.329Z · LW(p) · GW(p)

This doesn't work reliably enough. You need just one failure, and actually convincing people (as opposed even to eliciting an ostensible admission of having been convinced) is really difficult. A serious complication is that it's not possible to "make AGI Friendly": most AGI designs can't be fixed without essentially discarding everything, so people won't be moved deeply enough to kill their mind baby; they would instead raise defenses against offending arguments, failing to understand the point and coming up with rationalizations claiming that whatever they are doing already happens to be Friendly (perhaps with minor modifications). Just look at Goertzel (see my comment).

Replies from: Curiouskid
comment by Curiouskid · 2011-12-06T21:41:16.173Z · LW(p) · GW(p)

Good point. Do you know if SIAI is planning on trying to build the first AGI? Isn't the only other option to try to persuade others?

Also, I don't really know too much about the specifics of AGI designs. Where could I learn more? Can you back up the claim that "most AGI designs can't be fixed without essentially discarding everything"?

comment by [deleted] · 2011-12-05T23:53:08.077Z · LW(p) · GW(p)

There's no guarantee that an em doesn't elevate itself into becoming unfriendly, is there?

Replies from: Curiouskid
comment by Curiouskid · 2011-12-06T00:40:43.368Z · LW(p) · GW(p)

Good point. Then again, I don't think I'd care if humanity on a biological substrate died out (which is a relatively harsh way of putting it). I prefer saying we leave our inferior substrates behind. So even if an upload became unfriendly, it wouldn't pose an existential risk to post-humans, because it would still preserve itself (not so sure about biological ones).

Replies from: J_Taylor
comment by J_Taylor · 2011-12-06T06:11:04.825Z · LW(p) · GW(p)

An unfriendly AI would probably just kill us. An unfriendly em? A human wrote The 120 Days of Sodom.

Replies from: Curiouskid
comment by Curiouskid · 2011-12-06T21:38:12.148Z · LW(p) · GW(p)

Well. That's not an existential risk, but it would be bad if we had a sadistic upload in charge. But I think that if we had enough knowledge of neuroscience to create WBE, then we should be able to eliminate the pathologies of the mind that create deranged lunatics, sadists, and psychopaths. Who would want to be like that anyway, when the alternative is to live in a digitally created state of bliss? You could still be part of what you consider to be "reality", so that you wouldn't feel bad if you were in a "fake" virtual reality.

Replies from: J_Taylor
comment by J_Taylor · 2011-12-06T23:22:45.406Z · LW(p) · GW(p)

First, I believe the creation of em-hell is worse than human extinction. Second, I have no idea how neurologically different sociopaths are.


Who would want to be like that anyway, when the alternative is to live in a digitally created state of bliss?

At least part of me prefers friendship with real people to being wireheaded into pure bliss. Another part of me values winning zero-sum games against real people. Although no part of me values torturing innocent people, I can certainly understand why a monster-em would prefer ruling over em-hell to being wireheaded.

Our monster-em could perhaps endorse this:

http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/

Replies from: Curiouskid
comment by Curiouskid · 2011-12-07T00:27:59.261Z · LW(p) · GW(p)

I agree with the point about em-hell. But I don't think it's very likely, because I think that you could screen against sociopaths being uploaded.

The full quote:

Who would want to be like that anyway, when the alternative is to live in a digitally created state of bliss? You could still be part of what you consider to be "reality", so that you wouldn't feel bad if you were in a "fake" virtual reality.

You could still have real friends. What zero-sum games do you have in mind? Surely anything you find enjoyable now pales in comparison to what's possible and likely with WBE?

So, our monster-em would be an existential risk IF all these conditions are met:

  1. it gets through a screening process.
  2. it prefers staying psychopathic to enjoying em-bliss.
  3. it somehow gains power to torture not only a few ems, but all the ems.
  4. it prefers to torture real people/ems rather than torturing things that are like highly realistic videogame simulations that aren't conscious.
  5. No other ems are able/willing to stop it.

Doesn't that seem unlikely?

Replies from: J_Taylor
comment by J_Taylor · 2011-12-07T01:16:37.671Z · LW(p) · GW(p)

First, I value friendships with real people. If I were an em, I would probably value friendships with real ems and not unconscious simulations.

If I valued torturing real people, em-version of me would probably value torturing real ems and not unconscious simulations.

Not all humans have pleasure as their summum bonum. Not all humans would want em-bliss.

Second, you do not know what this screening process would entail and whether it would be possible to fool it. You also do not know how unfriendly neurotypical ems could become.

Third, yes. Em-hell is unlikely. However, if it is possible for a monster-em to arise and gain power, Pascalian reasoning begins to apply.

Fourth, em-hell and em-heaven are both fantasies. Other hypothetical futures probably deserve more focus.

comment by timtyler · 2011-12-06T10:44:13.934Z · LW(p) · GW(p)

Or is it that AGI is more likely to come before WBE

That's correct, yes.

Replies from: Logos01
comment by Logos01 · 2011-12-06T12:08:33.321Z · LW(p) · GW(p)

Or is it that AGI is more likely to come before WBE

That's correct, yes.

I've seen this belief espoused many times. I can't say that I entirely agree with it. Rather -- I don't think there's sufficient justification for the belief. WBE can be achieved while remaining agnostic about the fundamental function of the various structures in the brain, with regard to how they relate to cognition. AGI requires theoretical, not observational, breakthroughs -- and those are far harder to predict. I've seen 'serious' predictions which peg early WBE around thirty years from now.

I am not at all confident that fully-synthetic AGI will be achieved by then.

Replies from: timtyler
comment by timtyler · 2011-12-06T14:30:00.913Z · LW(p) · GW(p)

I've seen 'serious' predictions which peg early WBE around thirty years from now. I am not at all confident that fully-synthetic AGI will be achieved by then.

Well, you probably should be, if you think that is a reasonable WBE timescale - since it looks much easier.

WBE takes bioinspiration to an extreme level. Engineers just don't use bioinspiration that much.

Perhaps also check out some other people's estimates of the time to superintelligence.

The WBE advocates mostly seem to consist of neuroscientists who are struggling to stay relevant.

Replies from: Logos01
comment by Logos01 · 2011-12-06T15:03:44.761Z · LW(p) · GW(p)

Your links are of depressingly low quality. I've seen the whole brain emulation roadmap report from 2008 floating around here on LW more than once, and it is a far more robust document than what you've presented. I suggest you familiarize yourself with it. (It's worth noting that none of its assumptions have been falsified, unlike your Lotus link; and it is an actual scientifically conducted report, as opposed to either your SEED Magazine piece or your Lotus page.)

That being said:

WBE takes bioinspiration to an extreme level. Engineers just don't use bioinspiration that much.

No, it doesn't. It takes modeling to an extreme level. WBE is more likely to me for a simple reason: we can implement a brain without understanding how it works. To achieve human-equivalent AGI requires that we know what we are doing. And despite six decades' worth of research on that topic, I cannot as of yet see any discernible indications that we are significantly closer to mastering general intelligence than we were when we began the effort. Certainly, we have no theoretical mechanism for describing how consciousness or non-conscious general intelligence operates.

We'd need that before we could begin to go any further. Now, yes -- having a working emulated brain would allow us to use our current methods of study to attempt to discern what is or is not necessary for cognition and/or problem solving, through brute-force reverse-engineering. And it is possible that the process of figuring out how to emulate a brain will enable us to do that beforehand. I am more than willing to adjust my assessments of probability on this in the light of updated information.

But according to the information as yet available -- AGI is distantly remote. WBE is not.

Replies from: timtyler
comment by timtyler · 2011-12-06T15:45:27.405Z · LW(p) · GW(p)

"Unspecified false assumptions" seems too weak a charge to respond to. That page refers to a bunch of estimates and surveys relating to the estimated time to superintelligence by a range of individuals and groups.

WBE is more likely to me for a simple reason: we can implement a brain without understanding how it works. To achieve human-equivalent AGI requires that we know what we are doing.

If we can build a model of a worm brain, we can probably scale it up a billion times without any deep understanding of how it works. That's just one type of shortcut to superintelligence on the path to WBE. In practice there are lots of similar shortcuts - and taking any one of them skips WBE, making it redundant.

You simply don't need to understand how an adult brain works in order to build something with superior functionality. We did not need to understand how birds worked to make a flying machine. We did not need to understand how fish worked to build a submarine. Brains are unlikely to be very much different in that respect.

And despite six decades' worth of research on that topic, I cannot as of yet see any discernable indications that we are significantly closer to mastering general intelligence than we were when we began the effort.

So: you can't see much progress. However, there evidently has been progress - now we have Watson, Siri, W.A., Netflix, Google and other systems doing real work - which is a big deal. Machine intelligence is on the commercial slippery slope that will lead to superintelligence - while whole brain emulation simply doesn't work and so has no commercial applications. Its flagship "products" are silly PR stunts.

The Whole Brain Emulation Roadmap is silent about the issue - but its figures generally support the contention that WBE is going to arrive too slowly to have a significant impact.

according to the information as yet available -- AGI is distantly remote. WBE is not.

That's just a baseless fantasy, IMHO.

Replies from: Logos01
comment by Logos01 · 2011-12-06T16:04:22.990Z · LW(p) · GW(p)

If we can build a model of a worm brain, we can probably scale it up a billion times without any deep understanding of how it works. That's just one type of shortcut to superintelligence on the path to WBE.

Ten million dogs cannot contemplate what Shakespeare meant when he said that a rose, by any other name, would still smell as sweet. Even a billion dogs could not do this. Nor could 3^^^3 nematodes.

This belief is just plain unintelligent.

You simply don't need to understand how an adult brain works in order to build something with superior functionality.

... I cannot comprehend how one could come to this belief.

We did not need to understand how birds worked to make a flying machine.

No, but we did need to know how flight works to build a flying machine. We don't need to know how thought works to make a WBE happen. We only need to know how emulations work -- and we already know this. What we don't know is how cognition / intelligence works.

So: you can't see much progress. However there evidentlly has been progress - now we have Watson, Siri, W.A., Netflix, Google and other systems doing real work - which is a big deal.

Do you understand the conceptual difference between narrow intelligence and general intelligence? AI researchers gave up decades ago on the notion of narrow AI yielding general AI.

So yes. I can't see any progress at all to speak of. And yes, I know of Watson, Siri, Google, etc., etc..

Replies from: wedrifid, timtyler, MixedNuts
comment by wedrifid · 2011-12-07T16:03:52.696Z · LW(p) · GW(p)

Ten million dogs cannot contemplate what Shakespeare meant when he said that a rose, by any other name, would still smell as sweet.

Of all the metaphors you could have chosen, you picked the one that dogs are closest to contemplating.

Replies from: Logos01
comment by Logos01 · 2011-12-07T17:15:41.910Z · LW(p) · GW(p)

And yet the difference in the ability of a collection of dogs to contemplate it is so negligible that it could be ten million dogs or a number of dogs with ten million zeroes. It still wouldn't happen.

comment by timtyler · 2011-12-06T16:20:49.550Z · LW(p) · GW(p)

Even a billion dogs could not do this. Nor could 3^^^3 nematodes. This belief is just plain unintelligent.

That was a straw man, though. The idea was to scale up a small brain into a big brain - not to put lots of small brains together.

You simply don't need to understand how an adult brain works in order to build something with superior functionality.

... I cannot comprehend how one could come to this belief.

Right - so: this has already been done in many domains - e.g. chess. Engineers will just mop up the remaining domains without bothering with the daft and unnecessary task of reverse engineering the human brain.

No, but we did need to know how flight works to build a flying machine. [...] What we don't know is how cognition / intelligence works.

Well, we do know what the equivalent of "lift" is. It's inductive inference. See: Inductive inference is like lift. We can already generate the equivalent of lift. We just don't yet know how to get a lot of it in one place.

Do you understand the conceptual difference between narrow intelligence and general intelligence? AI researchers gave up decades ago on the notion of narrow AI yielding general AI.

No, they didn't. A few researchers did that, in an attempt to distinguish themselves from the mainstream.

So yes. I can't see any progress at all to speak of. And yes, I know of Watson, Siri, Google, etc., etc..

So: I think progress is happening. It looks something like: this and this. Machines already make most stock market trades, can translate languages and do speech recognition, and bots have conquered manufacturing and are busy invading retail outlets, banks, offices and call centres. One wonders what it would take for you to classify something as progress towards machine intelligence.

Replies from: roystgnr
comment by roystgnr · 2011-12-07T17:32:44.093Z · LW(p) · GW(p)

I can "scale up" a threaded program by giving more processors for the threads to run on, but this doesn't actually improve the program output (apart from rounding error and nondeterministic effects), it just makes the output faster. I can "scale up" an approximation algorithm that has a variable discretization size N, and that actually improves the output... but how do you adjust "N" in a worm brain?

Replies from: timtyler
comment by timtyler · 2011-12-07T18:49:46.662Z · LW(p) · GW(p)

I can "scale up" a threaded program by giving more processors for the threads to run on, but this doesn't actually improve the program output (apart from rounding error and nondeterministic effects), it just makes the output faster.

Sure. Computers don't always behave like brains do. Indeed, they are mostly designed to compensate for brain weaknesses - to be strong where we are weak.

I can "scale up" an approximation algorithm that has a variable discretization size N, and that actually improves the output... but how do you adjust "N" in a worm brain?

We know that nature can easily scale brains up - because it did so with chimpanzees. Scaling brains up may not be trivial - but it's probably much easier than building one in the first place. Once we can build a brain, we will probably be able to make another one that is bigger.

Replies from: roystgnr
comment by roystgnr · 2011-12-08T04:29:38.257Z · LW(p) · GW(p)

Once we can build a brain, we will certainly be able to make another one that is bigger, but that won't make it better. "Given this arrangement of neurons, what firing patterns will develop" is almost a completely different task than "given this problem to solve, what arrangement of neurons will best solve it", which itself is merely a footnote to the task of "wait, what are our definitions of 'problem' and 'best' again?"

Nature scaled chimpanzee brains up by creating billions of them and running them through millions of years of challenging environments; that's many orders of magnitude more difficult than building a single brain, and the result is merely expected to be whatever works best in the testing environments, which may or may not resemble what the creators of those environments want or expect.

Replies from: timtyler
comment by timtyler · 2011-12-08T10:37:15.239Z · LW(p) · GW(p)

Once we can build a brain, we will certainly be able to make another one that is bigger, but that won't make it better.

It is very likely that it would, IMHO.

Nature scaled chimpanzee brains up by creating billions of them and running them through millions of years of challenging environments; that's many orders of magnitude more difficult than building a single brain [...]

Nature had already done the R+D for building a partly-resizable brain, though. Turning a chimp brain into a human brain was mostly a case of turning a few knobs relating to brain development, and a few more relating to pelvis morphology. There is no good reason for thinking that resizing brains is terribly difficult for nature to do - at least up to a point.

Replies from: roystgnr
comment by roystgnr · 2011-12-09T03:22:33.506Z · LW(p) · GW(p)

Could you define how we make a brain "bigger"? Do we replace every one neuron with 2, then connect them up the same way? Without a specific definition there's nothing but handwaving here, and it's my contention that finding the specific definition is the difficult part.

But more shockingly: do you really have evidence that the last six million years of human evolution was "turning a few knobs"? If so, then I would very much like to hear it. If not, then we seem to be operating under such divergent epistemologies that I'm not sure what else I can productively say here.

Replies from: timtyler
comment by timtyler · 2011-12-09T12:45:41.136Z · LW(p) · GW(p)

Could you define how we make a brain "bigger"? Do we replace every one neuron with 2, then connect them up the same way? Without a specific definition there's nothing but handwaving here, and it's my contention that finding the specific definition is the difficult part.

That would be model-specific. Since we don't actually have the model under discussion yet, it is hard to go into details - but most NN models have the number of neurons as a variable.
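
For what it's worth, a minimal sketch of what I mean (a throwaway toy network of my own, not any actual WBE model): the neuron count is literally one constructor argument.

```python
# Toy network illustrating that "brain size" is just a parameter in most NN models.
# (Placeholder model, not a claim about any actual emulation architecture.)
import numpy as np

class TinyNet:
    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        # Random, untrained weights; the scaling keeps activations in a sane range.
        self.w1 = rng.standard_normal((n_inputs, n_hidden)) / np.sqrt(n_inputs)
        self.w2 = rng.standard_normal((n_hidden, n_outputs)) / np.sqrt(n_hidden)

    def forward(self, x):
        hidden = np.tanh(x @ self.w1)  # the resizable hidden layer
        return hidden @ self.w2

small = TinyNet(n_inputs=10, n_hidden=100, n_outputs=2)
big = TinyNet(n_inputs=10, n_hidden=100_000, n_outputs=2)  # "bigger brain": one knob

x = np.ones(10)
print(small.forward(x).shape, big.forward(x).shape)  # same interface, bigger middle
```

Whether turning that knob up buys better performance, rather than just a bigger matrix, is of course exactly the question you're raising.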

But more shockingly: do you really have evidence that the last six million years of human evolution was "turning a few knobs"?

I didn't claim that the last six million years of human evolution was "turning a few knobs". I referred explicitly to changes in the brain.

I was mostly referring to the evidence from genetics that humans are chimpanzees with a relatively small number of functional genetic changes. Plus the relatively short time involved. However, it is true that there are still some thirty-five million SNPs.

There's other evidence that nature's brains are relatively easy to dynamically resize. Dwarfism, gigantism and other growth disorders show that nature can relatively easily make human-scale brains of variable size - at least up to a point. Even kids illustrate that point pretty well. It does not require a lot of evolutionary R+D to make a brain bigger or smaller - that R+D was done by evolution long ago - and what we now have is resizable brains.

Replies from: roystgnr
comment by roystgnr · 2011-12-09T20:09:38.891Z · LW(p) · GW(p)

If all we wanted was to triple human brain size just like Nature can, we'd be breeding more whales. And if tripling brain size inherently tripled intelligence, we'd be asking the whales for their opinions afterwards.

If increasing brain size instead merely provides enough raw matter for other changes to eventually mold into improved intelligence, then it doesn't just matter how hard the size increase is, it also matters how hard the other changes are. And at least in nature's case, I reiterate that the improvement process was several orders of magnitude harder than the one-brain process. We might be able to do better than nature, but then we're no longer talking about "it was easy in nature, so it will be easy for us too", we're talking about "it was hard in nature, so it might be hard for us too".

Nature does seem to be able to scale brains down without the millions of years of evolution it took to scale them up, but that at least makes perfect sense as a pre-evolved characteristic. Accidents and poor nutrition are ubiquitous, so there's a clear selection pressure for brains to develop to be robust enough that restricted growth or damage can still leave a functional result. Is there any similarly strong evolutionary pressure for brains to develop in such a way that opportunities for increased growth produce a better-functioning result? If so, it may not have been enough pressure; supernormal growth opportunities do seem to exist but aren't necessarily positive.

(edited to fix broken link)

Replies from: timtyler
comment by timtyler · 2011-12-09T21:00:57.359Z · LW(p) · GW(p)

And at least in nature's case, I reiterate that the improvement process was several orders of magnitude harder than the one-brain process.

If we take "time" as a proxy for "difficulty" we have:

  • Origin of life: 3500 MYA.
  • Origin of brains: 600 MYA.
  • Origin of chimp-human split: 7 MYA.

According to those figures, scaling brains up a few thousand times was much easier than making one in the first place. Scaling one up by a factor of 3 was much, much easier.

As for modern big human brains, the main examples I know of arise from gigantism and head binding.

comment by MixedNuts · 2011-12-06T16:42:28.063Z · LW(p) · GW(p)

If you have arbitrarily large computing power, you can just brute-force all problems. 3^^^3 nematodes could probably play chess.

Replies from: Logos01
comment by Logos01 · 2011-12-06T16:49:17.660Z · LW(p) · GW(p)

... "An arbitrarily large population of nematodes" are not a turing-complete computational substrate.

comment by Manfred · 2011-12-06T02:36:54.888Z · LW(p) · GW(p)

Your post may not go far enough.

I think that if you were a utilitarian of this sort, you'd want to take uploaded minds and reprogram them so they were super fulfilled all the time by their own standards, even if they were just sitting in a box for eternity. According to that view, making a FAI would be a HUGE missed opportunity, since it wouldn't do that.

Replies from: Curiouskid
comment by Curiouskid · 2011-12-06T21:18:05.442Z · LW(p) · GW(p)

How did it not go far enough? What would you like me to add?

even if they were just sitting in a box for eternity.

They could be super fulfilled doing other things as well. Some people (I think EY is included in this group) wouldn't want to just sit in a box for eternity. However, they could still be super fulfilled by altering their hedonic set-point digitally.

According to that view, making a FAI would be a HUGE missed opportunity, since it wouldn't do that.

There were too many pronouns for me to understand what you were talking about. Which view? And what wouldn't the FAI do? Generally when I hear "missed opportunity" I think of something that you didn't do but should have done. So I don't really understand how making an FAI is a missed opportunity.

Replies from: Manfred
comment by Manfred · 2011-12-06T21:58:15.497Z · LW(p) · GW(p)

even if they were just sitting in a box for eternity.

They could be super fulfilled doing other things as well.

Right, but if they can be just as fulfilled in the box, and it allows more humans to be in this simulated utopia, why not stack 'em in boxes?

If this sounds bad, it's because it's horrible from a utility-maximizing standpoint. Utility maximizers use their own standards when evaluating the future, not the standards of the beings of the time. And so when you hear "stack 'em in boxes," the utility maximizer part of you goes "that sounds like an awful life." But that's pretty provincial - according to the people in the boxes, they're having the most wonderful life possible.

According to that view, making a FAI would be a HUGE missed opportunity, since it wouldn't do that.

There were too many pronouns for me to understand what you were talking about.

If an AI filled the universe with computers simulating happiest people in boxes, I'd label it as not FAI. But from the standpoint of cosmopolitan, "whatever floats your boat" utilitarianism, it would be a smashing success, maybe the greatest success possible. And so the greatest success comes from not making an FAI.

Replies from: Curiouskid
comment by Curiouskid · 2011-12-06T22:05:51.878Z · LW(p) · GW(p)

True, and it is a great possible explanation of the Fermi paradox as well. All the advanced alien civilizations could just be stacked in boxes with no desire to expand. On the other hand, it seems more conducive to survival to want to expand through the universe, and then stack in boxes. Also, it hardly seems objective to say that we should maximize the number of ems possible. That sounds like Kantian ethics of valuing people in themselves. Even if you agreed with Kant (I personally Kant stand him :D), you might not value uploads in themselves.

Replies from: Manfred
comment by Manfred · 2011-12-07T00:14:10.206Z · LW(p) · GW(p)

That sounds like Kantian ethics of valuing people in themselves

Or just increasing the number of people if they have positive utility.

comment by JoshuaZ · 2011-12-06T00:14:24.744Z · LW(p) · GW(p)

I don't consider myself in a virtual reality situation to be the same as myself exploring the actual universe. If one values truth then being an upload might be appealing, but being an upload in a fantasy universe is much less so.

Replies from: amcknight
comment by amcknight · 2011-12-06T07:28:13.749Z · LW(p) · GW(p)

There isn't much of a difference as long as your interactions are with other minds and there is some consistency. We can create much more interesting worlds than the one we have now. Why not explore a virtual world?