Will the ems save us from the robots?
post by Stuart_Armstrong · 2011-11-24T19:23:56.148Z · LW · GW · Legacy · 43 comments
At the FHI, we are currently working on a project around whole brain emulations (WBE), or uploads. One important question is whether getting to whole brain emulations first would make subsequent AGI creation
- more or less likely to happen,
- more or less likely to be survivable.
If you have any opinions or ideas on this, please submit them here. No need to present an organised overall argument; we'll be doing that. What would help most is any unusual suggestion we might not have thought of for how WBE would affect AGI.
EDIT: Many thanks to everyone who suggested ideas here; they've been taken under consideration.
43 comments
comment by Giles · 2011-11-24T21:36:17.126Z · LW(p) · GW(p)
A few random thoughts on ems... (None of this is normative - I'm just listing things I could imagine being done, not saying they're a good or a bad idea).
After the creation of the first em...
A lot seems to hinge on whether em technology:
- can be contained within research institutes that have strong ethical & safety guidelines, good information security, and an interest in FAI research, or
- can't be contained and gets released into the economic jungle
Bear in mind the general difficulties of information security, as well as the AI box experiment, which may apply to augmented ems.
Ems can hack themselves (or each other):
- Brain-computer interfacing seems much easier if you're an em
- Create a bunch of copies of yourself, create fast brain-to-brain interfaces and with a bit of training you have a supermind
- Grow more neocortex; simulate non-Euclidean space and grow into configurations which are way larger than what would fit into your original head
- Research ways to run on cheaper hardware while maintaining subjective identity or economic productivity
- Can restore from backups if something goes wrong
With those kinds of possibilities, it seems that ems could drift quite a long way into mind design space quite quickly, to the point where it might make more sense to view them as AGIs than as ems.
In the released-into-the-jungle scenario, the ems which are successful may be the ones which abandon their original human values in favor of whatever's successful in an economic or Darwinian sense. Risk of a single mind pattern (or a small number of them) coming to dominate.
Someone has to look after the bioconservatives, i.e. people who don't want their mind uploaded.
Em technology can allow enormous amounts of undetectable suffering to be created.
Ems -> enormous economic growth -> new unforeseeable technologies -> x-risks aplenty.
I can't think of a scenario where ems would make AGI creation less likely to happen (except if they trigger some (other) global catastrophe first). Maybe "we already have ems so why bother researching AI?" It seems weak - ems are hardware-expensive and anyway people will research AGI just because they can.
Robin Hanson's vision of em-world seems to include old-fashioned humans surviving by owning capital. I'm not sure this is feasible (requires property rights to be respected and value not to be destroyed by inflation).
Before the creation of the first em...
The appearance of a serious WBE project might trigger public outcry, laws etc. for better or for worse. Once people understand what an em is they are likely to be creeped out by it. Such laws might be general enough to retard all AI research; AI/AGI/em projects could be driven underground (but maybe only when there's a strong business case for them); libertarians will become more interested in AI research.
WBE research is likely to feed into AGI research. AGI researchers will be very interested in information about how the human brain works. (There's even the possibility of a WBE vs AGI "arms race").
↑ comment by Curiouskid · 2011-11-27T06:24:25.700Z · LW(p) · GW(p)
Maybe "we already have ems so why bother researching AI?" It seems weak - ems are hardware-expensive and anyway people will research AGI just because they can.
From a utilitarian point of view, once you can create a VR, why would you want to create an FAI? On the other hand, if you create an FAI, wouldn't you still want to create WBE and VR? If you had WBE and VR, would you want to create more existential risks by making an AGI? The motive would have to be non-utilitarian. "Because they can" seems about right.
Also, regarding the bioconservatives objecting and libertarians becoming more interested: a seastead would be the perfect retreat for WBE researchers.
comment by lukeprog · 2011-11-24T20:40:53.376Z · LW(p) · GW(p)
Shulman & Salamon, Whole brain emulation as a platform for creating safe AGI.
Shulman, Whole brain emulation and the evolution of superorganisms
The stuff Carl & Nick have been passing back and forth recently. (I suspect you've seen this, Stuart?)
The post-Summit2011 workshop report I wish I had more time to write.
↑ comment by Kaj_Sotala · 2011-11-25T10:41:33.814Z · LW(p) · GW(p)
Also (shameless plug) Sotala & Valpola, Coalescing Minds: Brain uploading-related group mind scenarios.
I would expect exocortex-based approaches to uploading (as discussed in the paper) to come earlier than uploads based on destructive scanning: exocortexes could be installed in living humans, and are a natural outgrowth of various brain prosthetic technologies that are already being developed for medical purposes. There's going to be stigma, but far less stigma than with the thought of cutting apart a dead person's brain and scanning it into a purely VR environment. Indeed, while destructive uploading is a sharp and discrete transition, with exocortexes the border between a baseline human and an upload might become rather fuzzy.
This might relatively quickly lead to various upload/AGI-hybrids that could initially outperform both "pure" uploads and AGIs. Of course, they'd still be bottlenecked by a human cognitive architecture, so eventually an AGI would outperform them. There are also the various communication and co-operation advantages that exocortex-enabled mind coalescence gives you, which might help in trying to detect and control at least early AGIs. But it still seems worth looking at.
comment by Vladimir_Nesov · 2011-11-24T20:16:19.387Z · LW(p) · GW(p)
A WBE singleton or a faster-than-realtime research project conducted by WBEs seem to be the only feasible scenarios for gaining more time for solving FAI before a global catastrophe or significant value drift. On the other hand, WBE tech released into the wild would probably quickly generate a humane-value-indifferent AGI (either the old-fashioned way or via value drift of modified uploads or upload-based processes).
(A half-point between that and a FAI is running an AGI that implements a human-conducted FAI research project specified as an abstract computation.)
comment by CSalmon · 2011-12-07T12:55:40.380Z · LW(p) · GW(p)
Time-accelerated research with forking seems like the only safe (sane) thing one could do with WBEs. The human brain breaks easily if anything too fancy is attempted. Not to mention that, unlike UFAIs which tile the universe with paperclips or something equally inane, insane neuromorphs might do something even worse instead, a la "I Have No Mouth and I Must Scream". The problem of using fiction as evidence is evident, but since most fictional UFAIs are basically human minds with a thin covering of what the author thinks AIs work like, I think the fail isn't so overwhelmingly strong with this one; unlike a "true" AGI, a badly designed neuromorph might very well feel resentment towards a low tribal status, for example. The risks of having a negative utilitarian or a deep ecologist as an emulation whose mind might be severely affected by unknown factors are something Captain Obvious would be very enthusiastic about.
Even simple time acceleration with reasonable sensory input might have unforeseen problems when the rest of the world runs in slow motion. Multiply it if there is only one mind instead of several that can still have some kind of a social life with each other. Modify the brain to remove the need for human interaction and you're back in the previous paragraph.
Now, considering the obvious advantages of copying the best AI researchers and running them multiple times faster than real-time it would be quite reasonable to expect AGI to follow quickly after accelerated WBE. This could mean that WBE might not be very widespread; the organization that had the first fast EMs would probably focus on stacking supercomputing power to take over the future light cone instead of just releasing the tech to take over the IT / human enhancement sector.
If the organization is sane it could significantly increase the chance of FAI, as the responsible researchers would have both a speed and a numbers advantage over the non-WBE scenario. On the other hand, if the first emulators don't focus on creating human-compatible AGI, it is the chance of Fail that would grow massively.
comment by shokwave · 2011-11-25T02:12:56.288Z · LW(p) · GW(p)
It seems that ems are much harder to make friendly than a general AI. That is, some of what we have to fear from unfriendly AIs is present in powerful WBEs too, and you can't just build a WBE provably friendly to start with; you have to constrain it or teach it to be friendly (both of which are considered dangerous methods of getting to friendliness).
↑ comment by roystgnr · 2011-11-26T04:50:21.243Z · LW(p) · GW(p)
I'm afraid I don't recall who I'm (poorly) paraphrasing here, but:
Why would we expect emulated humans to be any Friendlier than a de novo AGI? At least no computer program has tried to maliciously take over the world yet; humans have been trying to pull that off for millennia!
↑ comment by Stuart_Armstrong · 2011-11-27T17:20:58.353Z · LW(p) · GW(p)
The case for WBE over AGI, simply put: the chance of getting a nice AI is vanishingly small. The chance of getting an evil AI is vanishingly small. The whole danger is in the huge "lethally indifferent" zone.
WBEs are more likely to be nice, more likely to be evil, and less likely to be lethally indifferent. Since evil and lethally indifferent have similar consequences (and a lot of evil would be better than indifferent), this makes WBE better than AGI.
↑ comment by faul_sname · 2011-11-26T06:04:20.034Z · LW(p) · GW(p)
Usually, said humans want to take over a world that still contains people.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-11T19:45:44.772Z · LW(p) · GW(p)
That's individual humans. The percentage of the population that's trying to take over the world maliciously at any given time is very low.
↑ comment by Curiouskid · 2011-11-27T06:40:15.487Z · LW(p) · GW(p)
What is the psychological motivation of a WBE upload? The people who try to take over the world are psychopaths, and we would be able to alter their brain structures to remove those psychopathic elements.
comment by gwern · 2011-11-24T19:46:03.598Z · LW(p) · GW(p)
Hasn't Eliezer argued at length against ems being safer than AGIs? You should probably look up what he's already written.
↑ comment by CarlShulman · 2011-11-24T22:08:51.744Z · LW(p) · GW(p)
Thinking that high-fidelity WBE, magically dropped in our laps, would be a big gain is quite different from thinking that pushing WBE development will make us safer. Many people who have considered these questions buy the first claim, but not the second, since the neuroscience needed for WBE can enable AGI first ("airplanes before ornithopters," etc).
Eliezer has argued that:
1) High-fidelity emulations of specific people give better odds of avoiding existential risk than a distribution over "other AI, Friendly or not."
2) If you push forward the enabling neuroscience and neuroimaging for brain emulation you're more likely to get brain-inspired AI or low-fidelity emulations first, which are unlikely to be safe, and a lot worse than high-fidelity emulations or Friendly AI.
3) Pushing forward the enabling technologies of WBE, in accelerating timelines, leaves less time for safety efforts to grow and work before AI, or for better information-gathering on which path to push.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-06T20:01:43.573Z · LW(p) · GW(p)
If you push forward the enabling neuroscience and neuroimaging for brain emulation you're more likely to get brain-inspired AI or low-fidelity emulations first, which are unlikely to be safe, and a lot worse than high-fidelity emulations or Friendly AI.
What about pushing on neuroscience and neuroimaging hard enough so that when there is enough computing power to do brain-inspired AI or low-fidelity emulations, the technology for high-fidelity emulations will already be available, so people will have little reason to do brain-inspired AI or low-fidelity emulations (especially if we heavily publicize the risks)?
Or what if we push on neuroimaging alone hard enough so that when neuron simulation technology advances far enough to do brain emulations, high-fidelity brain scans will already be readily available and people won't be tempted to use low-fidelity scans?
How hard have FHI/SIAI people thought about these issues? (Edit: Not a rhetorical question, it's hard to tell from the outside.)
↑ comment by CarlShulman · 2012-03-06T20:27:38.840Z · LW(p) · GW(p)
What about pushing on neuroscience and neuroimaging hard enough so that when there is enough computing power to do brain-inspired AI or low-fidelity emulations,
I would think that brain-inspired AI would use less hardware (taking advantage of differences between the brain and the digital/serial computer environment, along with our software and machine learning knowledge).
the technology for high-fidelity emulations will already be available, so people will have little reason to do brain-inspired AI or low-fidelity emulations (especially if we heavily publicize the risks).
Different relative weightings of imaging, comp neurosci, and hardware would seem to give different probability distributions over brain-inspired AI, low-fi WBE, and hi-fi WBE, but I don't see a likely track that goes in the direction of "probably WBE" without a huge (non-competitive) willingness to hold back on the part of future developers.
Or what if we push on neuroimaging alone hard enough so that when neuron simulation technology advances far enough to do brain emulations, high-fidelity brain scans will already be readily available and people won't be tempted to use low-fidelity scans.
Of the three, neuroimaging seems most attractive to push (to me, Robin might say it's the worst because of more abrupt/unequal transitions), but that doesn't mean one should push any of them.
How hard have FHI/SIAI people thought about these issues? (Edit: Not a rhetorical question, it's hard to tell from the outside.)
A number of person-months, but not person-years.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-08T21:03:06.893Z · LW(p) · GW(p)
I would think that brain-inspired AI would use less hardware (taking advantage of differences between the brain and the digital/serial computer environment, along with our software and machine learning knowledge).
Good point, but brain-inspired AI may not be feasible (within a relevant time frame), because simulating a bunch of neurons may not get you to human-level general intelligence without either detailed information from a brain scan or an impractically huge amount of trial and error. It seems to me that P(unfriendly de novo AI is feasible | FAI is feasible) is near 1, whereas P(neuromorphic AI is feasible | hi-fi WBE is feasible) is maybe 0.5. Has this been considered?
Of the three, neuroimaging seems most attractive to push (to me, Robin would say it's the worst because of more abrupt/unequal transitions), but that doesn't mean one should push any of them.
Why not? SIAI is already pushing on decision theory (e.g., by supporting research associates who mainly work on decision theory). What's the rationale for pushing decision theory but not neuroimaging?
I guess both of us think abrupt/unequal transitions are better than Robin's Malthusian scenario, but I'm not sure why pushing neuroimaging will tend to lead to more abrupt/unequal transitions. I'm curious what the reasoning is.
↑ comment by CarlShulman · 2012-03-08T23:29:38.138Z · LW(p) · GW(p)
P(neuromorphic AI is feasible | hi-fi WBE is feasible) ... Has this been considered?
Yes. You left out lo-fi WBE: insane/inhuman brainlike structures, generic humans, recovered brain-damaged minds, artificial infants, etc. Those paths would lose the chance at using humans with pre-selected, tested, and trained skills and motivations as WBE templates (who could be allowed relatively free rein in an institutional framework of mutual regulation more easily).
Why not? SIAI is already pushing on decision theory (e.g., by supporting research associates who mainly work on decision theory). What's the rationale for pushing decision theory but not neuroimaging?
As I understand it, the thought is that an AI with a problematic decision theory could still work, while an AI that could be trusted with high relative power ought to also have a correct (by our idealized standards, at least) decision theory. Eliezer thinks that, as problems relevant to FAI go, it has among the best ratios of positive effect on FAI vs boost to harmful AI. It is also a problem that can be used to signal technical chops, the possibility of progress, and for potential FAI researchers to practice on.
I guess both of us think abrupt/unequal transitions are better than Robin's Malthusian scenario, but I'm not sure why pushing neuroimaging will tend to lead to more abrupt/unequal transitions.
Well, there are conflicting effects for abruptness and different kinds of inequality. If neuroimaging is solid, with many scanned brains, then when the computational neuroscience is solved one can use existing data rather than embarking on a large industrial brain-slicing and analysis project, during which time players could foresee the future and negotiate. So more room for a sudden ramp-up, or for one group or country getting far ahead. On the other hand, a neuroimaging bottleneck could mean fewer available WBE templates, and so fewer getting to participate in the early population explosion.
Here's Robin's post on the subject, which leaves his views more ambiguous:
Cell modeling – This sort of progress may be more random and harder to predict – a sudden burst of insight is more likely to create an unexpected and sudden em transition. This could induce large disruptive inequality in economic and military power,
Brain scanning – As this is also a relatively gradually advancing tech, it should also make for a more gradual predictable transition. But since it is now a rather small industry, surprise investments could make for more development surprise. Also, since the use of this tech is very lumpy, we may get billions, even trillions, of copies of the first scanned human.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-09T01:00:23.972Z · LW(p) · GW(p)
You left out lo-fi WBE: insane/inhuman brainlike structures, generic humans, recovered brain-damaged minds, artificial infants, etc.
There seems a reasonable chance that none of these will FOOM into a negative Singularity before we get hi-fi WBE (e.g., if lo-fi WBE are not smart/sane enough to hide their insanity from human overseers and quickly improve themselves or build powerful AGI), especially if we push the right techs so that lo-fi WBE and hi-fi WBE arrive at nearly the same time.
As I understand it: An AI with a problematic decision theory could still work, while an AI that could be trusted with high relative power ought to also have a correct (by our idealized standards, at least) decision theory. Eliezer thinks that, as problems relevant to FAI go, it has among the best ratios of positive effect on FAI vs boost to harmful AI. It is also a problem that can be used to signal technical chops, the possibility of progress, and for potential FAI researchers to practice on.
This argument can't be right and complete, since it makes no reference at all to WBE, which has to be an important strategic consideration. You seem to be answering the question "If we had to push for FAI directly, how should we do it?" which is not what I asked.
↑ comment by CarlShulman · 2012-03-09T01:57:05.784Z · LW(p) · GW(p)
especially if we push the right techs so that lo-fi WBE and hi-fi WBE arrive at nearly the same time.
This seems to me likely to be very hard without something like a singleton, or a project with a massive lead over its competitors that can take its time and is willing to do so despite the strangeness and difficulty of the problem, competitive pressures, etc.
A boost to neuroimaging goes into the public tech base, accelerating all WBE projects without any special advantage to the safety-oriented. The thought with decision theory is that the combination of its direct effects, and its role in bringing talented people to work on AI safety, will be much more targeted.
If you were convinced that the growth of the AI risk research community, and a closed FAI research team, were of near-zero value, and that decision theory of the sort people have published is likely to be a major factor for building AGI, the argument would not go through. But I would still rather build fungible resources and analytic capacities in that situation than push neuroimaging forward, given my current state of knowledge.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-11T23:52:15.536Z · LW(p) · GW(p)
A boost to neuroimaging goes into the public tech base, accelerating all WBE projects without any special advantage to the safety-oriented.
I don't understand why you say that. Wouldn't safety-oriented WBE projects have greater requirements for neuroimaging? As I mentioned before, pushing neuroimaging now reduces the likelihood that, by the time cell modeling and computing hardware let us do brain-like simulations, neuroimaging isn't ready for hi-fi scanning, so that the only projects that can proceed will be lo-fi simulations.
The thought with decision theory is that the combination of its direct effects, and its role in bringing talented people to work on AI safety, will be much more targeted.
It may well be highly targeted, but still a bad idea. For example, suppose pushing decision theory raises the probability of FAI by 10x (compared to not pushing decision theory) and the probability of UFAI by 1.1x, but the base probability of FAI is too small for pushing decision theory to be a net benefit. Conversely, pushing neuroimaging may help safety-oriented WBE projects only slightly more than non-safety-oriented ones, but still be worth doing.
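A purely illustrative back-of-the-envelope sketch of this point, with all numbers (the +1/-1 outcome values, base rates, and multipliers) made up for the example rather than taken from the thread:

    # Toy expected-value comparison: an intervention multiplies both the probability of a
    # good outcome (value +1) and of a bad outcome (value -1); all numbers are placeholders.
    def net_effect(p_good, p_bad, good_mult, bad_mult):
        return p_good * (good_mult - 1) - p_bad * (bad_mult - 1)

    # "Targeted" intervention: 10x boost to the good path, 1.1x to the bad one,
    # applied to a tiny base rate of success.
    print(net_effect(p_good=0.001, p_bad=0.10, good_mult=10, bad_mult=1.1))   # ~ -0.001 (net negative)

    # "Untargeted" intervention: only 1.3x boost to the good path, 1.1x to the bad one,
    # but applied to a larger base rate of success.
    print(net_effect(p_good=0.05, p_bad=0.10, good_mult=1.3, bad_mult=1.1))   # ~ +0.005 (net positive)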
But I would still rather build fungible resources and analytic capacities in that situation than push neuroimaging forward, given my current state of knowledge.
I certainly agree with that, but I don't understand why SIAI isn't demanding a similar level of analysis before pushing decision theory.
↑ comment by CarlShulman · 2012-03-12T02:38:08.694Z · LW(p) · GW(p)
I don't understand why you say that. Wouldn't safety-oriented WBE projects have greater requirements for neuroimaging? As I mentioned before, pushing neuroimaging now reduces the likelihood that, by the time cell modeling and computing hardware let us do brain-like simulations, neuroimaging isn't ready for hi-fi scanning, so that the only projects that can proceed will be lo-fi simulations.
In the race to first AI/WBE, developing a technology privately gives the developer a speed advantage, ceteris paribus. The demand for hi-fi WBE rather than lo-fi WBE or brain-inspired AI is a disadvantage, which could be somewhat reduced with varying technological ensembles.
For example, suppose pushing decision theory raises the probability of FAI to 10x (compared to not pushing decision theory), and the probability of UFAI to 1.1x, but the base probability of FAI is too small for pushing decision theory to be a net benefit.
As I said earlier, if you think there is ~0 chance of an FAI research program leading to safe AI, and that decision theory of the sort folk have been working on plays a central role in AI (a 10% bonus would be pretty central), you would come to different conclusions re the tradeoffs on decision theory. Using the Socratic method to reconstruct standard WBE analysis, which I think we are both already familiar with, is a red herring.
I certainly agree with that, but I don't understand why SIAI isn't demanding a similar level of analysis before pushing decision theory.
Most have seemed to think that decision theory is a very small piece of the AGI picture. I suggest further hashing out your reasons for your estimate with the other decision theory folk in the research group and Eliezer.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-12T20:17:51.713Z · LW(p) · GW(p)
Using the Socratic method to reconstruct standard WBE analysis, which I think we are both already familiar with, is a red herring.
Is the standard WBE analysis written up anywhere? By that phrase do you mean to include the "number of person-months" of work by FHI/SIAI that you mentioned earlier? I really am uncertain how far FHI/SIAI has pushed the analysis in these areas, and my questions were meant to be my attempt to figure that out. But it does seem like most of our disagreement is over decision theory rather than WBE, so let's move the focus there.
Most have seemed to think that decision theory is a very small piece of the AGI picture.
I also think that's most likely the case, but there's a significant chance that it isn't. I have not heard a strong argument why decision theory must be a very small piece of the AGI picture (and I did bring up this question on the decision theory mailing list), and in my state of ignorance it doesn't seem crazy to think that maybe with the right decision theory and just a few other key pieces of technology, AGI would be possible.
As for thinking there is ~0 chance of an FAI research program leading to safe AI, my reasoning is that with FAI we're dealing with seemingly impossible problems like ethics and consciousness, as well as numerous other philosophical problems that aren't quite thousands of years old, but still look quite hard. What are the chances all these problems get solved in a few decades, barring IA and WBE? If we do solve them, we still have to integrate the solutions into an AGI design, verify its correctness, avoid Friendliness-impacting implementation bugs, and do all of that before other AGI projects take off.
It's the social consequences that I'm most unsure about. It seems like if SIAI can keep "ownership" over the decision theory ideas and use it to preach AI risk, then that would be beneficial, but it could also be the case that the ideas take on a life of their own and we just end up having more people go into decision theory because they see it as a fruitful place to get interesting technical results.
↑ comment by timtyler · 2012-03-12T23:01:10.902Z · LW(p) · GW(p)
Most have seemed to think that decision theory is a very small piece of the AGI picture.
I also think that's most likely the case, but there's a significant chance that it isn't. I have not heard a strong argument why decision theory must be a very small piece of the AGI picture (and I did bring up this question on the decision theory mailing list), and in my state of ignorance it doesn't seem crazy to think that maybe with the right decision theory and just a few other key pieces of technology, AGI would be possible.
On one hand, machine intelligence is all about making decisions in the face of uncertainty - so from this perspective, decision theory is central.
On the other hand, the basics of decision theory do not look that complicated - you just maximise expected utility. The problems seem to be mostly down to exactly how to do that efficiently.
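For concreteness, a minimal sketch of the "just maximise expected utility" rule over a toy decision problem where the outcome probabilities and utilities are simply handed to the agent; the action names and numbers are invented for the example, and in practice the hard part is obtaining such a table at all, and doing so efficiently:

    # Minimal expected-utility maximiser: each action maps to (probability, utility) outcome pairs.
    toy_problem = {
        "take_umbrella":  [(0.3, 8), (0.7, 9)],   # mildly encumbered whether or not it rains
        "leave_umbrella": [(0.3, 0), (0.7, 10)],  # great if dry, bad if caught in the rain
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(toy_problem, key=lambda a: expected_utility(toy_problem[a]))
    print(best)  # take_umbrella: 0.3*8 + 0.7*9 = 8.7 beats 0.3*0 + 0.7*10 = 7.0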
The idea that safe machine intelligence will be assisted by modifications to decision theory to deal with "esoteric" corner cases seems to be mostly down to Eliezer Yudkowsky. I think it is a curious idea - but I am very happy that it isn't an idea that I am faced with promoting.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-12T23:12:17.517Z · LW(p) · GW(p)
On the other hand, the basics of decision theory do not look that complicated - you just maximise expected utility. The problems seem to be mostly down to exactly how to do that efficiently.
Isn't AIXI a counter-example to that? We could give it unlimited computing power, and it would still screw up badly, in large part due to a broken decision theory, right?
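(For reference, AIXI's action rule, roughly as Hutter states it; notation may differ slightly from his. Expected reward is maximised over every environment program q consistent with the interaction history, weighted by 2^-length(q); the agent is modelled as standing outside the environment it predicts, which is one commonly cited source of the problems alluded to here.)

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[ r_k + \cdots + r_m \bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$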
↑ comment by timtyler · 2012-03-12T23:35:44.749Z · LW(p) · GW(p)
Kinda, yes. Any problem is a decision theory problem - in a sense. However, we can get a long way without the wirehead problem, utility counterfeiting, and machines mining their own brains causing problems.
From the perspective of ordinary development these don't look like urgent issues - we can work on them once we have smarter minds. We need not fear not solving them too much - since if we can't solve these problems our machines won't work and nobody will buy them. It would take security considerations to prioritise these problems at this stage.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-06T20:45:41.320Z · LW(p) · GW(p)
A number of person-months, but not person-years.
Was there something written up on this work? If not, I think it'd be worth spending a couple of days to write up a report or blog post so others who want to think about these problems don't have to start from scratch.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-11T19:56:29.631Z · LW(p) · GW(p)
Of the three, neuroimaging seems most attractive to push (to me, Robin might say it's the worst because of more abrupt/unequal transitions), but that doesn't mean one should push any of them.
It looks to me as though Robin would prefer computing power to mature last. Neuroimaging research now could help bring that about.
http://www.overcomingbias.com/2009/11/bad-emulation-advance.html
↑ comment by jsteinhardt · 2011-11-24T20:02:07.283Z · LW(p) · GW(p)
Ems seem quite likely to be safer than AGIs, since they start out sharing values with humans. They also decrease the likelihood of a singleton.
Uploads in particular mean that current humans can run on a digital substrate, thereby ameliorating one of the principal causes of power imbalance between AGIs and humans.
↑ comment by Logos01 · 2011-11-24T21:43:31.291Z · LW(p) · GW(p)
Safer for whom? I am not particularly convinced that a whole-brain emulation wouldn't still be a human being, even if under alien circumstances compared to those of us alive today.
↑ comment by fubarobfusco · 2011-11-25T19:35:25.791Z · LW(p) · GW(p)
Safer for everyone else. Humans aren't Friendly.
↑ comment by Logos01 · 2011-11-25T21:32:45.422Z · LW(p) · GW(p)
Fair enough. But then, I am of the opinion that so long as the cultural/psychological inheritor of humanity can itself be reliably deemed "human", I'm not much concerned about what happens to meatspace humanity -- at least, as compared to other forms of concerns. Would it suck for me to be converted to computronium by our evil WBE overlords? Sure. But at least those overlords would be human.
↑ comment by [deleted] · 2011-11-29T00:59:31.797Z · LW(p) · GW(p)
I'm not much concerned about what happens to meatspace humanity -- at least, as compared to other forms of concerns. Would it suck for me to be converted to computronium by our evil WBE overlords? Sure. But at least those overlords would be human.
He may be a murderous despot, but he's your murderous despot, eh?
comment by Curiouskid · 2011-11-27T06:31:13.760Z · LW(p) · GW(p)
I've posted a similar thread here. I focus a bit more on the implications for somebody wanting to accelerate the future.
- If WBE and the creation of VR are the ultimate utilitarian goal, then why create AGI? It just adds existential risk with no utilitarian benefit.
- More likely. FAI researchers can be uploaded and think quicker/better. The AGI that is designed will benefit from the knowledge needed to create WBE.
comment by timtyler · 2011-11-25T18:07:46.675Z · LW(p) · GW(p)
One important question is if getting to whole brain emulations first would make subsequent AGI creation 1. more or less likely to happen, 2. more or less likely to be survivable.
Only important if there's much chance of whole brain emulations coming "first" - which still looks vanishingly unlikely to me. Think: aeroplane, not bird emulation - or you go too far off the rails.
comment by Gedusa · 2011-11-24T22:59:37.723Z · LW(p) · GW(p)
Maybe some kinds of ems could tell us how likely Oracle/AI-in-a-box scenarios were to be successful? We could see if ems of very intelligent people run at very high speeds could convince a dedicated gatekeeper to let them out of the box. It would at least give us some mild evidence for or against AIs-in-boxes being feasible.
And maybe we could use certain ems as gatekeepers - the AI wouldn't have a speed advantage anymore, and we could try to make alterations to the em to make it less likely to let the AI out.
Minor bad incidents involving ems might make people more cautious about full-blown AGI (unlikely, but I might as well mention it).
comment by Manfred · 2011-11-24T20:28:13.368Z · LW(p) · GW(p)
It does seem like there are all sorts of experiments you could do with a brain in a computer that would get you closer to AI, and the ems would also be the researchers. Are there any counterbalancing factors that could lead to brains in computers making AGI development less likely? The only ones I can think of would be tiny.
comment by lavalamp · 2011-11-24T20:17:10.731Z · LW(p) · GW(p)
Once we have ems, it's certain that enough computer power would be available for AGI.
My speculation is that the sort of people who would be first in line to be uploaded would also tend to be the sort of people who, given enough computer power and time, might be able to brute-force some sort of AGI. So, if somehow we start making ems before we get FAI, my guess is that AGI will be forthcoming quite shortly, and it will be unfriendly.
But I have a hard time seeing how we can actually get that much computer power without some fool having already made some sort of UFAI.
(All this is under the assumption that ems will not be terribly efficient at first because we'll be running them without actually understanding them - thus simulating perhaps 10x-100x+ more stuff than we actually need to.)
So, I say more likely to #1, and less likely to #2.
comment by turchin · 2012-01-03T12:56:24.706Z · LW(p) · GW(p)
Some possible benefits of an emulated brain (EMB):
1) It does not have a framework of goals expressed as a hierarchy with one supergoal. This makes it orders of magnitude safer, because one of the major problems of friendly artificial intelligence is that it finds a wrong way to reach its supergoal.
2) An EMB understands many human subtleties that are obvious to anyone but would have to be explained to an AI. Moreover, a complete list of these subtleties is not known beforehand. That is, it understands what we mean. Bad example: a robot is asked to remove all round objects from the room, and tears off a man's head.
3) We can choose the human who will be scanned, and thus know in advance their moral and ethical qualities and their attitude to the problem of FAI.
4) We can also scan a group of people known to have high moral and ethical responsibility but representing different points of view. A good example of this is a jury: a collective intelligence of ordinary people. I think it is ideal to have three to five scanned people who represent different types of person (a male mathematician, a woman with children, a religious philosopher, a businessman, a child) and therefore have maximum diversity and unity in mapping human values. I do not believe that a group of IT people in a basement will be able to properly develop a system of values for all mankind without missing something important. I would also test beforehand whether these five people are capable of effective reciprocal communication before they are scanned - that is, whether they form a collective intelligence.
5) The process of creating and disseminating the World AI will be most secure if it is held under the auspices of a World State, which would stop other AI projects for a while (so that there is no war of AI projects among themselves) and which agrees to hand control to our AI after its successful creation. Then our AI will not have the initial task of conquering the world; such a task would make it more aggressive and geared toward destruction. Any AI from a basement has to conquer the world first.
6) It is virtually the only way that I or any of us can get a "useful immortality" - that is, immortality in which one uses and develops one's best qualities for the benefit of mankind, rather than immortality which one gets as a present from God and then does not know what to do with.