Should We Shred Whole-Brain Emulation?

post by SteveG · 2015-07-09T10:02:23.620Z · LW · GW · Legacy · 38 comments

I am opening this thread to test the hypothesis that superintelligence is plausible, but that Whole-Brain Emulations would most likely become obsolete before they were even possible.

Further, given the ability to do so, entities that were close to being Whole-Brain Emulations would rapidly choose to stop being Whole-Brain Emulations and move on to become something else.

I'll let people fire back with discussion and references before presenting more evidence.  My hope is to turn this thread into something publishable in the end.


comment by Stuart_Armstrong · 2015-07-09T10:27:17.340Z · LW(p) · GW(p)

The key paper: Whole Brain Emulation: A Roadmap

Replies from: SteveG
comment by SteveG · 2015-07-11T02:57:32.352Z · LW(p) · GW(p)

There is a tremendous amount of good material in here, thanks...

The thing that I would like to see added is a perspective on how changeable, or malleable, WBEs would be, once created.

One of the main reasons I am challenging WBEs is that I think brain emulations would be very easy to alter and improve along the axes of defined performance metrics.

If they are highly malleable, those who wish to use them to generate productivity would instead use improved (neuromorphic) versions. Additionally, a WBE which had some control over its own make-up would consider improving itself.

I believe that we can show that WBEs would be very readily improvable and changeable. Once they are improved and changed enough, they are no longer WBEs and are instead neuromorphic entities.

comment by AABoyles · 2015-07-09T15:53:05.577Z · LW(p) · GW(p)

I am opening this thread to test the hypothesis that superintelligence is plausible, but that Whole-Brain Emulations would most likely become obsolete before they were even possible.

I'm not sure what you're claiming here. Are you hypothesizing that a path to superintelligence which requires WBE will likely be slower than a path which does not? Or something else, like that brain-based computation with good APIs will hold a relative advantage over WBE indefinitely?

Further, given the ability to do so, entities that were close to being Whole-Brain Emulations would rapidly choose to stop being Whole-Brain Emulations and move on to become something else.

Again, this could be clearer. Are you implying that a WBE in the process of being constructed will opt not to be completed before beginning to self-improve (i.e. become a neuromorph)?

Replies from: SteveG, SteveG
comment by SteveG · 2015-07-09T17:13:47.033Z · LW(p) · GW(p)

I wish to see whether we can show that human whole-brain emulations will be essentially neuromorphic in a great many ways.

Almost as soon as they exist, something more effective and productive will become available.

comment by SteveG · 2015-07-09T17:08:45.235Z · LW(p) · GW(p)

The hypothesis is that human Whole-Brain Emulation will not be a recognizable stage in the development of AGI that lasts for any significant amount of time. Also, an "algorithmic economy" of human whole-brain emulations is highly unlikely to be anything but science fiction.

The goal is to examine whether there are some fundamental flaws in the nature of this forecast.

I will lay out the case after more opinions and reading material are available to us...

comment by SteveG · 2015-07-11T00:10:38.385Z · LW(p) · GW(p)

For the next few years and possibly decades, the development of brain emulation technology will occur alongside the development of neuromorphic technology.

Some teams will be primarily focused on achieving extremely accurate renditions of sections of actual brain tissue, as well as increasingly accurate neural maps which are sometimes based on high-throughput scans of actual brain tissue. These teams will wish to base their work on models of individual neurons and glia that are very much like actual cells.

However, Henry Markram, director of Europe's Human Brain Project, has asserted that we need not model anything like the full complexity of gene expression and protein formation in human neurons in order to accurately represent firing patterns. Those pursuing the path toward WBEs will be willing to compromise, to varying degrees, on the level of detail at which individual cells are modeled. Perhaps some will discover ways to measure whether these simplifications produce a statistically significant difference in how the simulated brain reacts to stimuli.
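To make the level-of-detail question concrete, here is a minimal sketch (in Python) of a leaky integrate-and-fire neuron, one of the simplest models that still produces firing patterns. It compresses the cell into a single equation; whether something this coarse is faithful enough is exactly the question above. All parameter values are illustrative, not measured.

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau_m=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_thresh=-50.0, r_m=10.0):
    """Leaky integrate-and-fire neuron. Voltages in mV, current in nA,
    time in ms, membrane resistance in megaohms."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_inj in enumerate(input_current):
        # The membrane leaks toward rest and is driven by injected current.
        v += (-(v - v_rest) + r_m * i_inj) * dt / tau_m
        if v >= v_thresh:              # threshold crossed: record a spike...
            spike_times.append(step * dt)
            v = v_reset                # ...and reset the membrane potential
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant 2 nA input produces a regular spike train over 100 ms:
_, spikes = simulate_lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in 100 ms")
```

A detailed, Hodgkin-Huxley-style multi-compartment model of the same cell would track many coupled state variables per compartment; the open question is which of those details the firing statistics actually depend on.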

Other teams, more concerned with using models of groups of neurons as a calculation tool, may be less concerned with whether they accurately represent individual neurons. "Neural net" technology was not intended to accurately model the brain, and the individual elements in a neural net are nothing like cells.

Nonetheless, these teams will learn everything they can from those who are trying to simulate actual brain function, and some of the same people will work in both sub-disciplines at different points in their careers.

If people from the simulation and human connectome camps develop an understanding of some new aspect of brain function, those who are just trying to find new AI methods to build into software will be able to take advantage of the results. However, they may be able to save many compute cycles by applying abstractions of the newly-realized insight about structure and function to idealized neurons and glia that do not really try to approximate the function of living tissue at all.

We cannot entirely predict whether extremely detailed models of individual cells are necessary for neuromorphic AI. However, I am interested in whatever evidence is available.

comment by SteveG · 2015-07-10T23:51:48.456Z · LW(p) · GW(p)

Engineers attempting to improve either a WBE or a piece of neuromorphic tissue would have considerable advantages that are unavailable to medical teams working with actual brains and nerves.

Medical teams who work to repair spinal injuries are able to stimulate nerve fibers and trace the nerves into the brain. However, a far vaster set of experimental tools would be available to WBE or neuromorphic engineers.

These engineers would be able to write programs which cause any specific neuron or group of neurons to fire at any time. They would be able to select the firing pattern for each, and the relative timing within a group of neurons.

They would be able to configure neurotransmitter output at will, and, importantly, they would also be able to set the number of neurotransmitter receptors on cell surfaces.

Altering the concentration of cell surface receptors would, for example, allow the neuromorphic tissue engineers to greatly influence what stimuli are pleasurable. They would be able to set patterns for these cell surface receptors which never occur in the natural course of gene expression in the brain.
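As a sketch of what that toolkit might look like in practice: the `emulation` object and every method on it below are hypothetical (no such API exists), but they make the contrast with wet-lab neuroscience concrete. In software, firing times and receptor densities become ordinary parameters.

```python
def reward_rewiring_experiment(emulation):
    """Sketch: re-wire which stimuli are rewarding in an emulated circuit.
    The emulation object and all of its methods are hypothetical."""
    # Fire an arbitrary group of neurons with chosen relative timing.
    for i, neuron_id in enumerate([4410, 4411, 4412]):
        emulation.schedule_spike(neuron_id, at_ms=10.0 + 0.5 * i)

    # Set transmitter release and receptor counts directly, including
    # receptor densities that never occur under natural gene expression.
    emulation.set_transmitter_release(4410, transmitter="dopamine",
                                      vesicles_per_event=500)
    emulation.set_receptor_density(9001, receptor="D2", per_um2=120.0)

    # Advance the simulation and read out the complete state, something
    # no medical team working with living tissue can do.
    emulation.advance(ms=200.0)
    return emulation.snapshot()
```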

We have already done a fair amount of mapping of the neural basis of pain and pleasure. Within the next ten years, we will also have results from the NIH's Human Connectome Project. Neuromorphic tissue engineers will begin their work with vast resources of data on the generation of pain and pleasure, and on the purpose and use of these sensations.

If they had either a WBE, or a differently-configured piece of neural tissue available to them, seemingly they would have a strong ability to re-wire what causes pain and pleasure in order to suit their needs.

Such techniques alone could allow a WBE to cross the line from an accurate representation of a human mind to something fundamentally different.

comment by SteveG · 2015-07-10T23:35:36.975Z · LW(p) · GW(p)

Apparently, an advantage of creating a thread with a controversial and heterodox first entry is that for a time you get to write all of it yourself! :) That's OK, because I have a fair amount of brain dumping to do on this subject.

comment by SteveG · 2015-07-10T23:18:56.179Z · LW(p) · GW(p)

The future of AI will come out very differently if sections of neural tissue cannot be made to function usefully, separately from a WBE.

Similarly, the future of AI will come out very differently if removing parts of the brain from an emulation causes the brain to become non-functional.

We know from studies of stroke and other forms of brain damage that brain function does not immediately degrade if a small section of brain is injured. Therefore, removing sections from a WBE might reduce the functionality of the WBE, but would not eliminate it entirely.

There is no precedent for adding sections of brain matter to an existing brain. If such operations were performed on a WBE, however, the process would be very different from operating on actual brain tissue.

Replies from: SteveG
comment by SteveG · 2015-07-10T23:33:43.147Z · LW(p) · GW(p)

Brain grafts are a very difficult proposition in actual brain tissue today.

One of the key reasons, however, will begin to become a non-factor: tissue rejection. We can now grow neurons in the lab that have the same genetic code as yours or mine (I have actually done this). One method is to turn induced pluripotent stem cells (iPSCs), which may have been created from your own skin cells, into nerve cells.

I grew a small plate of such cells. I did not try to distinguish which among them were neurons and which were glia. I am not sure how far along we are toward growing a complete neural column, or a section of brain.

Assuming the neurons were grown, however, installation would still be very difficult. I am not willing to say impossible, but we have some challenges.

The balance and configuration of the glia would be difficult to control. Blood flow through both large vessels and capillaries would have to be restored to the added section.

Another important issue is that neurons in the brain have long axons. The "white matter" of the brain contains portions of these axons that string from brain region to brain region. It is a tangled net. Replacement neurons might have to be literally "woven in" to this net. The axons that are already there are sometimes bundled, which could help, but they are stuck together. It is not like stripping a large wire and seeing many filaments pop out.

Physically "weaving" new neurons into the brain is a lot more challenging than weaving them into a WBE or a piece of neuromorphic tissue.

At any given point in the early history of neuromorphic engineering, there will be a greater or lesser understanding of the relationship between structure and function. However, using WBEs and neuromorphic tissue in experiments to elicit function from structure would be very inexpensive. Tens of thousands, or even billions, of experiments could be run with a single set of macros.
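Here is a minimal sketch of what I mean by "a single set of macros": one loop that sweeps lesion experiments across regions and severities on cheap copies of the tissue. The helpers (`load_tissue`, `lesion`, `score_function`) are hypothetical placeholders, as are the region names.

```python
from itertools import product

def structure_function_sweep(load_tissue, lesion, score_function):
    """One 'macro': map how function degrades as structure is altered.
    All three arguments are hypothetical placeholder functions."""
    regions = ["V1", "CA3", "dentate_gyrus"]    # illustrative region names
    fractions = [0.01, 0.05, 0.10, 0.25, 0.50]  # share of cells removed
    results = {}
    for region, fraction in product(regions, fractions):
        tissue = load_tissue()             # a fresh copy costs almost nothing
        lesion(tissue, region, fraction)   # alter the structure...
        results[(region, fraction)] = score_function(tissue)  # ...measure function
    return results
```

Scaling the grid up to billions of runs is a scheduling problem, not a surgical one.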

For this reason, I forecast, with considerable but not complete certainty, that the existence of either WBEs or functional neuromorphic tissue would quickly lead to a much greater understanding of the relationship between structure and function.

Can we be absolutely certain that this understanding would very quickly permit designs of purpose-built brain configurations that improve along the dimensions of particular performance metrics? We should try to build the case for and against that hypothesis. My instinct is to believe that these experiments would facilitate mind design, but people could present other evidence.

comment by SteveG · 2015-07-10T14:04:15.684Z · LW(p) · GW(p)

AABoyles also begins to address another important and much-discussed question:

Can the emulation interface with:

-Sensory inputs unavailable to the human brain?

-Reasoning, calculation, and memory modules, and other minds, in more direct ways?

That is, can it do so directly, rather than inputting data into computers and observing their outputs as we do today?

Replies from: SteveG
comment by SteveG · 2015-07-10T19:04:24.065Z · LW(p) · GW(p)

Additionally, at what point does such a combination cease to be more like a human mind-computer interface and instead require re-classification as a neuromorphic or otherwise novel entity?

Replies from: SteveG, SteveG
comment by SteveG · 2015-07-10T19:18:57.411Z · LW(p) · GW(p)

A human WBE could have a very high-speed link, either with conventional computers running algorithms which the WBE triggers regularly, or with other WBEs.

If these links were sufficiently fast and robust, then we would do best to analyze the cognitive capacity of the system of the WBE and the links taken together, rather than thinking of them as separate units.

At a certain point, linking a WBE to many other software tools creates an enhanced system which is very different from a human mind. Whether we call the combined system neuromorphic or just highly enhanced is a question of definitions. However, the combined system could develop to the point where it is very different from an ordinary person, or a team of people, who can call on a powerful computer to calculate a result.
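A toy illustration of this "one system" view, with everything here hypothetical: the emulation delegates a computation over its link, and the round trip is fast enough that drawing a boundary between the WBE and the solver becomes arbitrary.

```python
import time

class ConventionalSolver:
    """Stand-in for any algorithm the WBE triggers over its link."""
    def prime_factors(self, n):
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

class LinkedWBE:
    """Hypothetical WBE whose 'thoughts' can call external tools."""
    def __init__(self, solver):
        self.solver = solver   # the high-speed link

    def think_about(self, n):
        start = time.perf_counter()
        answer = self.solver.prime_factors(n)  # delegated, not computed "neurally"
        latency_ms = (time.perf_counter() - start) * 1000
        return answer, latency_ms

answer, latency_ms = LinkedWBE(ConventionalSolver()).think_about(5040)
print(answer, f"({latency_ms:.3f} ms)")   # [2, 2, 2, 2, 3, 3, 5, 7]
```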

Replies from: SteveG
comment by SteveG · 2015-07-10T19:20:56.027Z · LW(p) · GW(p)

Even without extending the definition of neuromorphic, a WBE with a high-speed link to algorithms is clearly neuromorphic once significant portions of the neural simulation components are altered or removed.

Replies from: SteveG
comment by SteveG · 2015-07-10T19:23:48.889Z · LW(p) · GW(p)

If we are able to conclude that alteration or removal of part of the WBE would be desirable for the purposes of the emulation's controllers, then we should conclude that WBE technology in a sense flows into neuromorphic technology, and is not separate from it in a fundamental way.

comment by SteveG · 2015-07-10T19:12:59.372Z · LW(p) · GW(p)

Today, people are able to input data into calculating machines through speech and gestures, including drawing and typing.

Additionally, machines can gather biomarker data produced by the person. We can also issue a simple command to transmit a large block of previously prepared data.

These input mechanisms have certain potential disadvantages:

-They are somewhat inaccurate.

-They are slow. (Although triggering a larger file transmission makes up for the speed deficit under many circumstances.)

We can receive more information through sensory input than we can transmit out.

comment by SteveG · 2015-07-10T13:56:58.928Z · LW(p) · GW(p)

Just laying some more groundwork... One distinction the discussion requires:

Who is in control of the components and the environment of the emulation?

Possibilities:

-An outside entity, attempting to gain economic or other value by using the emulation to complete information processing tasks. (I'll call this "The Boss.")

-The emulation is not "given a job"; the environment was established to maintain it so that outsiders can observe it scientifically.

-The emulation is not given a job, but the environment was created by outsiders as a platform for experimentation on emulations.

-Perhaps the emulation was created as an "upload" of a person, or as their designed child or progeny.

-The emulation has a greater or lesser degree of control over its own environment or composition.

Example of lesser degree of control: It can decide to select some of the content it sees and listens to.

Example of greater degree of control: It can directly alter one of its emotions by "twisting a knob."

Replies from: SteveG, SteveG, SteveG
comment by SteveG · 2015-07-11T03:33:11.346Z · LW(p) · GW(p)

Uploads, and those creating a WBE-like entity as progeny, most likely would prefer to add improvements to a greater or lesser extent, rather than maintain complete fidelity.

Some people may argue that WBEs should lead as natural an existence as possible, one very much like an ordinary human life.

On the assumption that these people value their uploads or progeny, however, some aspects of life experience would be edited out. For example, what would motivate one of these creators to pass their WBEs through an unpleasant end-of-life experience, like vascular dementia?

The emulated lives of uploads and progeny would, to a greater or lesser extent, be edited. We could try to reason more about that.

Replies from: SteveG
comment by SteveG · 2015-07-11T03:42:59.192Z · LW(p) · GW(p)

Would uploads avoid self-improvement? If we are going to address this question, we should first consider the plausibility and importance of the whole upload concept.

Given the power and relatively young age of some Silicon Valley executives who seem to see uploading as part of their future, we might want to check to see whether the pursuit of uploading would have any side-effects.

If we believe that uploads are malleable and improvable, then the technology to create uploads would also permit the creation of more powerful minds, with all the consequences.

comment by SteveG · 2015-07-11T03:18:53.447Z · LW(p) · GW(p)

Suppose that emulations will be created to study how the brains of flesh-and-blood people work in general, or to study and forecast how a particular living person will react to stimuli.

This is a reasonable application of high-fidelity whole-brain emulation. To use such emulations to forecast behavior, though, the emulation would have to be "run" on a multi-dimensional distribution of possible future sets of environmental stimuli. The variation in these distributions grows combinatorially, so even tens of thousands of runs would only provide some information about what the person is likely to do next.
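A quick worked example of that combinatorial growth, with purely illustrative numbers:

```python
# d stimulus dimensions with k plausible values each require k**d runs
# for exhaustive coverage.
dimensions, values_per_dimension = 12, 5
total_runs = values_per_dimension ** dimensions
print(f"{total_runs:,} runs for exhaustive coverage")   # 244,140,625

# Even 10,000 runs sample only a sliver of that space:
print(f"coverage: {10_000 / total_runs:.6%}")           # about 0.004%
```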

Such WBEs would be only one tool in a toolbox to predict human behavior. However, they would be useful for that purpose. Your WBE could be fed many possible future lives, allowing you to make better choices about your future in the physical world, if using WBEs in that manner was considered ethical.

People on this site generally seem to agree, though, that using a high-fidelity WBE as a guinea pig to test out life scenarios is ethically problematic. If these life scenarios were biased in favor of delivering positive outcomes to the WBEs, maybe we would not have as much of a problem with that. Perhaps the interaction of two WBEs could be observed over many scenarios, allowing people to better choose companions.

WBEs could end up being used for this purpose, ethical or not. Again, though, I suspect that more data about people's reactions could be gained if modified WBEs were used in some of the tests.

It's worth exploring, but high-performance neuromorphic or algorithmic minds would still be the better choice for actually controlling physical conditions.

comment by SteveG · 2015-07-10T14:14:02.108Z · LW(p) · GW(p)

If the emulation is controlled by "The Boss," what incentives does "The Boss" have?

-to increase the emulation's throughput and efficiency

-to increase the emulation's focus on the task that generates value

-to avoid activities by regulators, protesters, or other outsiders which could cause work stoppages

Replies from: SteveG, SteveG, SteveG
comment by SteveG · 2015-07-10T14:31:19.040Z · LW(p) · GW(p)

These objectives are easier to achieve if "The Boss" considerably alters a malleable emulation.

Such an altered emulation is now neuromorphic.

Thus: if one or more "Bosses" are constructing a workforce, these "Bosses" will prefer neuromorphic components over whole-brain emulations.

Thus, if emulations are sufficiently malleable, there is no economy of whole-brain emulations: There is an economy of neuromorphic computing resources.

Replies from: SteveG
comment by SteveG · 2015-07-10T14:41:41.020Z · LW(p) · GW(p)

So, if we can establish that progress in emulation technology will quickly result in functional, malleable products, then for the most part future productivity will be generated by purpose-built neuromorphic computing resources rather than by human-like WBEs.

Replies from: SteveG
comment by SteveG · 2015-07-10T15:46:29.956Z · LW(p) · GW(p)

Unless, prior to the emergence of neuromorphic AI, forms of AI that do not include neurologically-inspired elements become more dominant.

comment by SteveG · 2015-07-10T14:25:55.132Z · LW(p) · GW(p)

If the technology is available, "The Boss" will prefer that its work force have high-speed connections to other computing resources. "The Boss" will also prefer that its work force have high-speed connections to whatever sensory input is relevant to the task.

comment by SteveG · 2015-07-10T14:22:22.941Z · LW(p) · GW(p)

"The Boss" can get more done if it can create new workers, and turn them on and off at will, without ethical or regulatory constraints.

If the technology is available, "The Boss" will prefer to employ cognitive capacity which has no personhood, and to which it has no ethical obligation.

comment by SteveG · 2015-07-10T13:44:35.713Z · LW(p) · GW(p)

Another aspect of malleability: how much can the structure and activity of the brain be influenced using means that we presently consider external or environmental? These influences would include sensory inputs and inputs through the blood.

Replies from: SteveG
comment by SteveG · 2015-07-10T13:49:23.932Z · LW(p) · GW(p)

Seemingly, controlling sensory inputs and the blood supply would permit a vast degree of control over the activity of the brain.

comment by SteveG · 2015-07-10T13:41:01.721Z · LW(p) · GW(p)

One aspect of malleability: At a specific point in the forecast timeline, how easy or difficult is it to create an emulation with a replacement subsystem or component that is functional, but functions differently? Does the emulation continue to work if these sub-systems are replaced or altered to a lesser or greater degree?

Replies from: SteveG
comment by SteveG · 2015-07-10T13:53:19.616Z · LW(p) · GW(p)

Obviously, replacing neural components with others could create an emulation which diverges from the human mind, becoming more and more neuromorphic.

comment by SteveG · 2015-07-10T13:32:57.254Z · LW(p) · GW(p)

One aspect of what I call "fidelity" is the degree to which the emulation incorporates various aspects of neurophysiology.

For example, the emulation might or might not incorporate:

-Good models of fluid flow within the brain, and between the brain, the blood and the cerebrospinal fluid.

-Good models of the components of the blood itself, and how these components would influence brain activity.

comment by SteveG · 2015-07-10T13:27:27.733Z · LW(p) · GW(p)

Future timelines could assume that a great deal of additional knowledge about the structure and function of the brain's components is accumulated before functional WBEs are developed.

Or, perhaps scanning technology improves rapidly, allowing for higher and higher levels of fidelity, but our knowledge of how the brain actually works does not advance as rapidly.

comment by SteveG · 2015-07-10T13:21:46.097Z · LW(p) · GW(p)

We could imagine a timeline where extremely high-fidelity emulations that function are created without functional, low-fidelity emulations being created first.

Or, we could imagine a stepwise process where lower-fidelity emulations that function are created first, then these are "improved" to the point that they represent the workings of the human mind more and more accurately.

comment by SteveG · 2015-07-10T13:12:05.388Z · LW(p) · GW(p)

Another distinction:

Many people have worked to reason about the level of "fidelity" of WBEs. That is to say, how near is a WBE to being an accurate representation of a human brain? What does it leave in, and what does it leave out?

comment by SteveG · 2015-07-10T13:09:16.792Z · LW(p) · GW(p)

In order to analyze the future of brain emulations further, I want to begin to add some distinctions:

The level of "malleability" of an emulation represents the degree to which its progress through time can be influenced by specific attempts to change it or control its environment.

This distinction needs to be made more precise, and people can comment on that here.

comment by chaosmage · 2015-07-09T11:41:43.223Z · LW(p) · GW(p)

I presume you mean whole-brain emulations of humans? Even if these can only arise under circumstances that make them obsolete, this is not necessarily true for emulations of simpler nervous systems.

Replies from: SteveG
comment by SteveG · 2015-07-09T12:00:40.949Z · LW(p) · GW(p)

I am keen to explore WBEs of other animals, but let's focus on humans and their plausible successors for the moment.

There are complications well worth considering, of course... an animal mind could be economically productive, and a human-animal chimera emulation might seem to be one plausible successor... evidence for that forecast could also be developed...

So many questions...