Brain-Brain communication

post by Jordan · 2011-12-09T17:05:47.957Z · 22 comments

A pair of conjoined twins share a direct neural connection, and there is evidence that each girl can sense what the other is sensing:

http://www.nytimes.com/2011/05/29/magazine/could-conjoined-twins-share-a-mind.html?pagewanted=all

This suggests two things:

* High-bandwidth brain-computer interfaces (BCIs) ought to be possible (no surprise, but it's good to have strong evidence)

* The brain is a general-purpose machine. It doesn't have specific modules for 'Left Hand', 'Right Hand', etc. Rather, it takes in information and makes sense out of it. It does this even when the setup is haphazard (as the connection between the twins' brains must be). On the other hand, we know the brain *does* have specific modules (such as the visual cortex among many others), which makes an interesting dichotomy.

I predict that the main hindrance to high-functioning BCI will be getting sufficient bandwidth, not figuring out how to decode/encode signals properly.

22 comments

Comments sorted by top scores.

comment by jacob_cannell · 2011-12-10T00:26:00.062Z

> On the other hand, we know the brain does have specific modules (such as the visual cortex among many others), which makes an interesting dichotomy.

Actually, all that I've read about the visual cortex leads me to conclude that it is just as generic as any other patch of cortex. It becomes visually specific only as a result of being fed visually specific information. In a congenitally blind person, the same patch of cortex will learn entirely different pattern-processing functions.

Replies from: Kaj_Sotala, wedrifid
comment by Kaj_Sotala · 2011-12-10T10:37:13.427Z

That's what we argued in our brain-to-brain communication paper:

> 3.2.1. A general cortical algorithm. An adult human brain consists of several areas which are to varying degrees specialized to process different types of information. The functional specialization is correlated with the anatomical differences of different cortical areas. Although there are obvious differences between areas, most cortical areas share many functional and anatomical traits. There has been considerable debate on whether cortical microcircuits are diverse or canonical [Buxhoeveden & Casanova, 2002; Nelson, 2002], but we argue that these differences should be considered as variations of the same underlying cortical algorithm rather than different algorithms. This is because most cortical areas seem to have the capability of processing any type of information. The differences seem to be a matter of optimization to a specific type of information, rather than a different underlying principle.

> The cortical areas do lose much of their plasticity during maturation. For instance, it is possible to lose one's ability to see colors if a specific visual cortical area responsible for color vision is damaged. However, this reflects learning and specialization during the lifespan of the brain rather than innate algorithmic differences between different cortical areas. Plenty of evidence supports the idea that the different cortical areas can process any spatio-temporal patterns.

> For instance, the cortical area which normally receives auditory information and develops into the auditory cortex will develop visual representations if the axons carrying auditory information are surgically replaced by axons carrying visual information from the eyes [Newton & Sur, 2004]. The experiments were carried out with young kittens, but a somewhat similar sensory substitution is seen even in adult humans: relaying visual information through a tactile display mounted on the tongue will result in visual perception [Vuillerme & Cuisinier, 2008]. What first feels like tickling on the tongue will start feeling like seeing. In other words, the experience of seeing is not in the visual cortex but in the structure of the incoming information.

> Another example of the mammalian brain's ability to process any type of information is the development of trichromatic vision in mice that, like mammalian ancestors, normally have dichromatic vision [Jacobs et al., 2007]. All it takes for a mouse to develop primate-like color vision is the addition of a gene encoding the photopigment which evolved in primates. The cortex is able to adapt to this new source of information from the retina and can make sense of it. Finally, even adult cortical areas can be surprisingly adaptive as long as the changes happen slowly enough [Feuillet et al., 2007].

comment by wedrifid · 2011-12-10T02:38:04.483Z

That matches my understanding. It's the parts of the brain that are not the cortex that tend to have specific functions.

comment by J_Taylor · 2011-12-10T00:52:47.037Z

> David Carmel, a cognitive neuroscientist at New York University, suggested that even when the girls deliver right answers, the phenomenon could be explained by something other than a neural bridge. “If they’re really close, through minute movements that one makes — maybe a typical movement her sister cannot see, but can feel — the other sister intuits the association. Maybe she associates her sister’s reaction with a robin they once liked, not a turkey.” The connection then might be scientifically mundane, but a marvel nonetheless to the casual observer.

Epistemic caution recommended.

comment by Kaj_Sotala · 2011-12-10T13:09:15.046Z

> It doesn't have specific modules for 'Left Hand', 'Right Hand', etc. Rather, it takes in information and makes sense out of it. It does this even when the setup is haphazard (as the connection between the twins' brains must be). On the other hand, we know the brain does have specific modules (such as the visual cortex among many others), which makes an interesting dichotomy.

This depends on how you interpret the term "module". One could say that once the brain starts to receive a specific type of information, it begins to form a module for that type of information.

Note that the notions of "modularity" and "adapts to environmental inputs" are not mutually exclusive in any way. As an analogy, consider embryo development. An embryo starts out as just a single cell, which then divides into two, the two of which divide into four, and so on. Gradually the cells begin to specialize in various directions, their development guided by the chemical cues released by the surrounding cells. The cells in the developing embryo respond very strongly to environmental inputs in the form of chemical cues from the other cells. In fact, without those cues, the cells would never find their right form. If those environmental cues direct the cells' development in the right direction, it will lead to the development of a highly modularized system of organs with a heart, liver, lungs, and so on. If the environmental cues are disrupted, the embryo will not develop correctly.

Now consider the brain. As with other organs, we start off with a pretty unspecialized and general system. Over time, various parts of it grow increasingly specialized as a result of external inputs. Here external inputs are to be understood both as sense data coming from outside the brain, and as the data that the surrounding parts of the brain are feeding the developing part. If the part receives the inputs that it has evolved to receive, then there's no reason why it couldn't develop increasingly specialized modules in response to that input. On the other hand, if it doesn't receive the right inputs during the right parts of its development, the cues needed to push it in a specific direction will be missing. As a result, it might never develop that functionality.

Obviously, the kinds of environmental inputs that a brain's development should be expected to depend on are the ones that have recurred most consistently during our evolution.

All of that being said, it should be obvious that "the brain takes in information and makes sense out of it" does not imply "the brain doesn't have specific modules for 'Left Hand', 'Right Hand', etc". In individuals who have developed in an ordinary fashion, without receiving extra neural inputs from a conjoined twin, the brain might have developed specific modules for moving various parts of the body. In individuals who have unexpectedly had a neural link to another brain, different kinds of modules may have developed, as the neural development was driven by different inputs.

Replies from: Jordan
comment by Jordan · 2011-12-13T02:30:08.269Z

Very interesting. It appears my own model of the brain included a false dichotomy.

If modules are not genetically hardwired, but rather develop as they adapt to specific stimuli, then we should expect infants to have more homogeneous brains. Is that the case?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-12-13T06:09:38.613Z

I would presume so, but I haven't read any research on the question.

comment by TwistingFingers · 2011-12-09T18:48:12.785Z

Let the flensing begin...

comment by Normal_Anomaly · 2011-12-10T00:44:39.333Z

That's amazing, and a pleasant surprise that they're living such a normal life. I really want to read a follow-up article when the girls are old enough to give detailed descriptions of their introspection. I'm aiming for a career in neuroscience, hopefully touching on BCI or uploading, so this is especially interesting.

comment by hamnox · 2011-12-10T19:21:45.064Z

I am most curious as to what would happen if you just went and set up a basic link between two people: pick up and send signals between brains with something like a cross between one of those nifty move-a-mouse-with-your-brain gizmos (I'm sure it has a more sophisticated name I'm not aware of) and a hearing implant. Has anyone actually tried that before? I have no idea if it actually works like that. I hope it does, because that sounds like a crazy idea I'd actually want to volunteer for. Low bandwidth seems better than no bandwidth at all.

... That makes me wonder if, more within the realm of definite possibility, anyone's ever thought of making a Bluetooth-enabled hearing implant. Maybe you could stick your "ear" somewhere else and eavesdrop. Or have multiple ears. Could your brain learn how to interpret direct 5-point surround sound?

EDIT: OKAY, just read up some and realized that is DEFINITELY not how hearing implants work. I thought they were more brain-direct. Ah well, a cochlear mp3 player would still be cool in mono :-)

Replies from: Incorrect
comment by Incorrect · 2011-12-10T22:21:42.588Z

If only humans could somehow transmit information to each other by making and receiving sounds…

The interesting part of this isn't that two brains are communicating; it's that they are doing it along unusual interface boundaries.

Replies from: hamnox
comment by hamnox · 2011-12-11T00:38:32.268Z

Communication by Controlled Atmospheric Vibrations? Nonsense! Preposterous! What kind of backwards society could function on something like that?

Imagine a world of beings communicating by air vibrations... I mean really--EVERYTHING causes air vibrations, how would you sort out actual communications from everything else? Imagine the kind of brain specialization you'd need just to make sense of anything! And how easily blocked! You could be separated from communication with all your hivemates by the flimsiest of barriers! It sends me into a panic just thinking about it. (shudder)

((Yeah, take joy in the merely real. I get where you're coming from, but you can't deny that there's a significant difference in the directness of the communication. So much information gets lost in playing telephone.))

comment by Shmi (shminux) · 2011-12-09T19:17:28.568Z

Why do you think there is a bandwidth problem? Neural transmission is notoriously slow (10 m/s linear speed, much slower if not myelinated, about 1 ms per pulse, mostly FM-encoded). Surely 1 kHz and 1 ms latency per fiber is not that bad?

Replies from: saturn
comment by saturn · 2011-12-09T21:13:14.528Z

A brain-computer interface would need to be connected to millions of neurons in order to match the amount of information a computer screen can display, and the technology to do that doesn't exist yet as far as I know.
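As a rough back-of-envelope sketch of that claim (all figures here are illustrative assumptions: a 1080p display, an optimistic ~1 kHz firing rate, and a crude one bit per spike):

```python
# Back-of-envelope: display throughput vs. per-fiber neural throughput.
# All figures are rough assumptions for illustration, not measurements.

display_bits_per_sec = 1920 * 1080 * 24 * 60  # 1080p, 24-bit color, 60 Hz
fiber_spikes_per_sec = 1000                   # ~1 kHz max firing rate (optimistic)
bits_per_spike = 1                            # crude rate-coding assumption

fiber_bits_per_sec = fiber_spikes_per_sec * bits_per_spike
fibers_needed = display_bits_per_sec / fiber_bits_per_sec

print(f"Display: {display_bits_per_sec / 1e9:.1f} Gbit/s")             # ~3.0 Gbit/s
print(f"Fibers needed to match it: {fibers_needed / 1e6:.0f} million")  # ~3 million
```

Even with these generous per-fiber numbers, matching a single screen takes connections to millions of neurons.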

Replies from: Kaj_Sotala, jacob_cannell
comment by Kaj_Sotala · 2011-12-10T10:32:26.624Z

I took a brief look at the current state of such connections for our Coalescing Minds paper:

> 3.1. Direct brain-to-brain connections. The easiest approach seems to be to connect human brains directly, in much the same way as the two brain hemispheres are connected. The corpus callosum, which connects the hemispheres, comprises 200–250 million axons crossing from one hemisphere to the other. It seems likely that to coalesce minds, the number of connections should be of a similar order of magnitude, probably at least millions.

> The technology exists today for creating hundreds of connections: e.g. Hochberg et al. [2006] used a 96-microelectrode array which allowed a human to control devices and a robotic hand by thought alone. Cochlear implants generally stimulate the auditory nerve with 16-22 electrodes, and allow many recipients to understand speech in everyday environments without needing visual cues [Peterson et al. 2010]. Various visual neuroprostheses are currently under development. Optic nerve stimulation has allowed subjects to recognize simple patterns and to localize and discriminate objects. Retinal implants provide better results, but rely on existing residual cells in the retina [Ong & da Cruz, 2011]. Some cortical prostheses have also recently been implanted in subjects [Normann et al. 2009]. We are still likely to be below the threshold of coalescing minds by several orders of magnitude. Nevertheless, the question is merely one of scaling up and improving current techniques.
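To make the "several orders of magnitude" concrete, a quick sketch using the figures quoted above (the comparison itself is just arithmetic):

```python
import math

# Connection counts from the quoted paper; the gap is plain arithmetic.
corpus_callosum_axons = 200e6  # lower end of the 200-250 million range
current_tech = {
    "96-microelectrode array [Hochberg et al. 2006]": 96,
    "cochlear implant (16-22 electrodes)": 22,
}

for name, channels in current_tech.items():
    gap = math.log10(corpus_callosum_axons / channels)
    print(f"{name}: ~{gap:.1f} orders of magnitude short")
# 96-microelectrode array: ~6.3 orders of magnitude short
# cochlear implant:        ~7.0 orders of magnitude short
```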

comment by jacob_cannell · 2011-12-10T00:23:29.003Z

Yes, and even if it were possible, hooking up all those wires to the visual cortex wouldn't have much, if any, advantage over just using a traditional display. A computer monitor is a cybernetic connection; it just makes a virtual connection using light passing through the retina instead of a bunch of messy wires.

Of course there are other types of connections a BCI could potentially enable that would have real uses, but a better visual display is not one of them.

Replies from: wedrifid, DanielVarga, saturn
comment by wedrifid · 2011-12-10T02:42:59.287Z

> A computer monitor is a cybernetic connection

No, it isn't. It is a form of human-computer interface. And a spade is a spade.

Replies from: Cog
comment by Cog · 2011-12-10T20:39:49.703Z

In terms of passing information to the brain, yes, it is. It excites neurons in a specific pattern in such a way as to form certain connections in the brain. It does this through cells in the retina, and the information does pass through a specific set of filters before it reaches the cortex, but I don't think that is an important distinction. To give an example, one of the things a friend of mine is working on in the lab next door is inserting a channelrhodopsin gene into the visual cortex of monkeys. Channelrhodopsin is a light-gated channel protein (originally from algae) that causes cells expressing it to fire in response to light. By inserting it into other neural tissue, we can cause specific cells to fire by shining light of specific frequencies onto them. It's cool stuff, and I would put money on it becoming a dominant form of BCI in the mid term, at least for getting information into the brain.

The reason I bring this up is that it uses exactly the same mechanism that the retina uses; it just bypasses a few of the neural filtering mechanisms. Filters are incredibly useful, and while, in the future, we may want some of our connections to go directly into the cortex, we also might want to take advantage of some of those systems. To call one a cybernetic interface, and not the other, seems arbitrary.

Yes, this does mean that every human-computer interface is, in the end, a brain-computer interface with just an extra informational filter in between. It also means that every human interaction is brain-to-brain, again with just extra filters in place. I'm OK with that. I also find the idea very aesthetically pleasing, but that has no weight on whether it is true. When we talk about communication, cybernetics, and interfaces, it may be useful to distinguish between what filters are in place and how those affect the signal, but everything is eventually (interface/brain)-to-brain.

[edited for typo]

Replies from: wedrifid
comment by wedrifid · 2011-12-10T21:06:11.064Z

> In terms of passing information to the brain, yes, it is.

No, it isn't.

We all know how computer monitors work. We know, roughly speaking, that the information from the computer ends up being processed in the visual cortex. But we can still tell the difference between a computer monitor and a Matrix headjack.

And DON'T EVEN GET ME STARTED on people who think Wikipedia is an "Artificial Intelligence", that the invention of LSD was a "Singularity", or that corporations are "superintelligent"!

Replies from: Cog
comment by Cog · 2011-12-10T22:33:22.535Z · LW(p) · GW(p)

Could you give a definition of cybernetics that does not include both? Cybernetics, as a word, has two different meanings. The first is the study of the structure of regulatory systems. This, in regard to electronics, is where I believe it got its second meaning, which is much fuzzier. Most of us have an image of a Neuromancer-style biomechanical ninja when we hear it, but have nothing in the way of a set definition. In fact, it appears normative, referring to something that is futuristic. This, of course, changes. Well-designed mechanical legs that let you run faster than an Olympic sprinter would easily have been called cybernetics in the '60s. Now, because that's here, my impression is that people are more hesitant to call it that.

Do we draw the cybernetic/non-cybernetic line at something that physically touches neural tissue? Or projects light on it? Or induces changes in it with magnetic stimulation? Does it have to interface with neurons, or do glia count too? Muscle cells? Rods and cones? If we have a device that controls hormones in the blood, is that cybernetic? I understand your point about not overgeneralizing, and I tried to include that in my response. Cybernetics, if it is to mean anything and not become an ambiguous rube/blegg as we discover more, has to be thought of as heavily related to information processing in the brain. Filters are incredibly important. In an information processing system, they are almost everything. But in terms of getting information into the brain, the difference between a cortical brainjack and a monitor is what type of filters are in the way. Those filters can be broken down into incredibly complex systems that we can and should distinguish, but that's the proper conceptual framework with which to look at the problem.

comment by DanielVarga · 2011-12-10T22:49:07.936Z

Maybe the cheapest way to achieve high-bandwidth brain-to-brain communication is to create an artificial body part that can output neural data in a visual format with very high bandwidth (like a cross between a chameleon's skin and a computer monitor).

comment by saturn · 2011-12-10T22:35:26.404Z

I was trying to answer your question about "bandwidth". The problem is the total amount of information that can be sent to or from the brain per unit time. The usefulness of BCI will be rather limited until it comes within the ballpark of what present computer interfaces (or even interfaces of the 1980s) can do in terms of information throughput.
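For a sense of scale, a rough sketch with ballpark figures (the electrode-array number assumes ~1 kHz per channel at one bit per spike, which is generous):

```python
# Ballpark information throughput of different interfaces, in Mbit/s.
# All numbers are illustrative assumptions, not measurements.
interfaces = {
    "1980s home computer display (320x200, 8-bit, 60 Hz)": 320 * 200 * 8 * 60,
    "modern display (1920x1080, 24-bit, 60 Hz)": 1920 * 1080 * 24 * 60,
    "96-electrode array (~1 kHz, 1 bit per spike)": 96 * 1000,
}

for name, bits_per_sec in interfaces.items():
    print(f"{name}: {bits_per_sec / 1e6:.2f} Mbit/s")
# 1980s display:    30.72 Mbit/s
# modern display: 2985.98 Mbit/s
# electrode array:    0.10 Mbit/s
```

On these assumptions, even a decades-old display outpaces a state-of-the-art electrode array by a couple of orders of magnitude.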