[LINK] IBM simulate a "brain" with 500 billion neurons and 100 trillion synapses
post by drnickbone · 2012-11-21T22:23:26.193Z · LW · GW · Legacy · 23 comments
Recent article in The New Yorker:
http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-brain-simulation-compass.html
Here is the research report from IBM, with the simple title "10^14":
http://www.modha.org/blog/SC12/RJ10502.pdf
It's nothing like a real brain simulation, of course, but illustrates that hardware to do this is getting very close.
There is likely to be quite a long overhang between the hardware and the software...
Comments sorted by top scores.
comment by Stabilizer · 2012-11-22T01:55:29.571Z · LW(p) · GW(p)
I think important caveats need to be kept in mind. From the New Yorker article:
I.B.M.’s Compass has more neurons than any system previously built, but it still doesn’t do anything with all those neurons. The short report published on the new system is full of vital statistics—how many neurons, how fast they run—but there’s not a single experiment to test the system’s cognitive capacities. It’s sort of like having the biggest set of Lego blocks in town without a clue of what to make out of them. The real art is not in buying the Legos but in knowing how to put them together. Until we have a deeper understanding of the brain, giant arrays of idealized neurons will tell us less than we might have hoped.
Replies from: gwern
↑ comment by gwern · 2012-11-22T03:14:04.304Z · LW(p) · GW(p)
The full paper might be useful: http://conferences.computer.org/sc/2012/papers/1000a085.pdf
Replies from: drnickbone, John_Maxwell_IV
↑ comment by drnickbone · 2012-11-22T07:40:58.190Z · LW(p) · GW(p)
Thanks for this. The latest research report, "10^14", already appears to be a significant update on that paper.
IBM now report roughly eight times as many simulated neurons and synapses, while the slow-down has gone from ~400x real-time to ~1500x real time. That works out at a factor > 2 in hardware improvement within a matter of months. They are using a custom hardware architecture and presumably there are still a lot of optimisations to be made. It can't be very long before this can run in real time.
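The factor-of-two estimate follows from simple back-of-the-envelope arithmetic. A sketch using the rounded figures quoted above (the variable names and exact values are illustrative, not taken from either report):

```python
# Rough check of the hardware-improvement factor between the two IBM reports.
# Figures are the approximate ones quoted in the thread; illustration only.

neuron_ratio = 8.0      # "roughly eight times as many simulated neurons"
slowdown_old = 400.0    # earlier paper: ~400x slower than real time
slowdown_new = 1500.0   # "10^14" report: ~1500x slower than real time

# Effective throughput scales with model size and inversely with slowdown.
improvement = neuron_ratio * slowdown_old / slowdown_new
print(improvement)      # → ~2.13, i.e. a factor > 2 within months
```

Throughput here is taken as model size divided by slowdown, which is a simplification: it ignores synapse counts and any change in per-neuron model complexity between the two runs.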
As said in other comments, nobody knows how to program this yet...
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-11-22T04:46:13.058Z · LW(p) · GW(p)
The paper seems to indicate that the sim is running 388 times slower than real-time. I guess we're not 100% there hardware-wise yet? Good.
Replies from: RomeoStevens
↑ comment by RomeoStevens · 2012-11-22T05:02:18.025Z · LW(p) · GW(p)
This is worryingly fast, IMO.
comment by mapnoterritory · 2012-11-22T09:19:01.160Z · LW(p) · GW(p)
I've actually never heard of non-von Neumann architectures. Does anybody have a tip for a good source on this? Especially on how this relates to biological brain architectures? Thank you!
Replies from: noen
↑ comment by noen · 2012-11-23T17:11:33.884Z · LW(p) · GW(p)
Parallelism changes absolutely nothing other than speed of execution.
Strong AI is refuted because syntax is insufficient for semantics. Allowing the syntax to execute in parallel will not alter this because the refutation of strong AI attacks the logical basis for the strong AI hypothesis itself. If you are trying to build a television with tinker-toys it does not improve your chances to substitute higher quality tinker-toys for the older wooden ones. You will still never get a functional TV.
They do not actually have a physical non-von Neumann architecture. They are simulating a brain on simulated neurosynaptic cores on a simulated non-von Neumann architecture on a Blue Gene/Q supercomputer, which consists of 64-bit PowerPC A2 processors connected in a toroidal network. No wonder it's slow.
They are trying to reach "True North" and believe they are headed in the right direction, but they do not know whether the Compass they have built actually measures what they believe it measures, nor whether True North, once reached, will do what they want it to do. They do not even know how the thing they want to reproduce does what it does: they believe faster computers will make up for not knowing how actual minds arise out of actual brains, how those brains are constructed, or how the neurons from which real brains are built actually function in real life.
But they're published. So... you know... there's that.
If you cannot simulate round worms, do not know how neurons actually work and do not even know how memories are stored in natural brains you are in no danger of building Colossus.
People are highly susceptible to magical thinking. When the telegraph was invented, people thought the mind was like the telegraph because... magic is why. Building more and faster wires and better telegraph stations and connecting them in advanced topologies will not change the fact that you are living in a fantasy world.
Replies from: loup-vaillant, khafra, JoshuaZ
↑ comment by loup-vaillant · 2012-11-23T21:42:36.676Z · LW(p) · GW(p)
Strong AI is refuted because syntax is insufficient for semantics.
Where the heck does that come from? What do you mean by "strong AI is refuted", "syntax is insufficient for semantics", and how does the former follow from the latter?
Replies from: noen
↑ comment by noen · 2012-11-24T05:08:34.182Z · LW(p) · GW(p)
"What do you mean by "strong AI is refuted""
The strong AI hypothesis is that consciousness is the software running on the hardware of the brain. Therefore one does not need to know or understand how brains actually work in order to construct a living conscious mind. On this view, any system that implements the right computer program with the right inputs and outputs has cognition in exactly the same literal sense that human beings have understanding, thought and memory. It was the belief of strong AI proponents such as Marvin Minsky at MIT and others that they were literally creating minds when writing their programs. They felt no need to stoop so low as to poke around in actual brains and get their hands dirty.
Computers are syntactical machines. The programs they execute are pure syntax and have no semantic content. Meaning is assigned; it is not intrinsic to symbolic logic. That is its strength. Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it must follow that programs are not by themselves constitutive of, nor sufficient for, minds. The strong AI hypothesis is false.
Which means that IBM is wasting time, energy and money. But.... perhaps their efforts will result in spin off technology so not all is lost.
Replies from: Emile, fubarobfusco, loup-vaillant
↑ comment by Emile · 2012-11-24T15:12:39.702Z · LW(p) · GW(p)
Computers are syntactical machines. The programs they execute are pure syntax and have no semantic content. Meaning is assigned; it is not intrinsic to symbolic logic. That is its strength. Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it must follow that programs are not by themselves constitutive of, nor sufficient for, minds.
How would one determine whether a given device/system has this "semantic content"? What kind of evidence should one look at? Inner structure? Only inputs and outputs? Something else?
↑ comment by fubarobfusco · 2012-11-24T05:48:53.994Z · LW(p) · GW(p)
What on earth is "semantic content"?
Replies from: noen
↑ comment by loup-vaillant · 2012-11-24T08:30:44.333Z · LW(p) · GW(p)
I second fubarobfusco. While you could say programs are pure syntax, they are executed on real machines and have real effects. If those capabilities don't count as semantic content, I don't know what does.
So, I still don't know what makes you so sure consciousness is impossible on an emulator. (Leaving aside the fact that using "strong AI" to talk about consciousness, instead of capabilities, is a bit strange.)
Replies from: noen
↑ comment by noen · 2012-11-24T14:50:55.971Z · LW(p) · GW(p)
That is correct, you don't know what semantic content is.
"I still don't know what makes you so sure consciousness is impossible on an emulator."
For the same reason that I know simulated fire will not burn anything. In order for us to create an artificial mind, which certainly must be possible, we must duplicate the causal relations that exist in real consciousnesses.
Let us imagine that you go to your doctor and he says, "Your heart is shot. We need to replace it. Lucky for you, we have a miniature supercomputer we can stick into your chest that can simulate the pumping action of a real heart down to the atomic level. Every atom, every material, every gasket of a real pump is precisely emulated to an arbitrary degree of accuracy."
"Sign here."
Do you sign the consent form?
Simulation is not duplication. In order to duplicate the causal effects of real-world processes, it is not enough to represent them in symbolic notation, which is all a program is. To duplicate the action of a lever on a mass it is not enough to represent that action to yourself on paper or in a computer. You have to actually build a physical lever in the physical world.
In order to duplicate conscious minds, which certainly must be due to the activity of real brains, you must duplicate those causal relations that allow real brains to give rise to the real world physical phenomenon we call consciousness. A representation of a brain is no more a real brain than a representation of a pump will ever pump a single drop of fluid.
None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won't be done on a von Neumann machine. It will be done by creating real-world objects that have the same causal functions that real-world neurons or other structures in real brains do.
How could it be any other way?
Replies from: Emile, loup-vaillant
↑ comment by Emile · 2012-11-24T15:10:24.904Z · LW(p) · GW(p)
the real world physical phenomenon we call consciousness
I don't know what you mean by "physical" here - for any other "physical phenomenon" - light, heat, magnetism, momentum, etc. - I could imagine a device that measures / detects it. I have no idea how one would go about making a device that detects the presence of consciousness.
In fact, I don't see anything "consciousness" has in common with light, heat, magnetism, friction etc. that warrants grouping them in the same category. It would be like having a category for "watersnail-eating fish, and Switzerland".
↑ comment by loup-vaillant · 2012-11-25T12:34:38.605Z · LW(p) · GW(p)
While you could say programs are pure syntax, they are executed on real machines and have real effects. If those capabilities don't count as semantic content, I don't know what does.
That is correct, you don't know what semantic content is.
Care to explain?
Meaning.
The words on this page mean things. They are intended to refer to other things.
Oh. and how do you know that?
Meaning is assigned, it is not intrinsic to symbolic logic.
Assigned by us, I suppose? Then what makes us so special?
Anyway, that's not the most important:
None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won't be done on a von Neumann machine.
Of course not: von Neumann machines have limitations that would make them too slow. But even in principle? I have a few questions for you:
- Do you think it is impossible to build a simulation of the human brain on a von Neumann machine, accurate enough to predict the behaviour of an actual brain?
- If it is possible, do you think it is impossible to link such a simulation to reality via an actual humanoid body? (The inputs would be the sensory system of the body, and the outputs would be the various actions performed by the body.)
- If it is possible, do you think the result is conscious? Why not?
↑ comment by khafra · 2012-11-26T16:18:33.903Z · LW(p) · GW(p)
Strong AI is refuted because syntax is insufficient for semantics.
A wild Aristotelian Teleologist appears!
Phrasing claims in the passive voice to lend an air of authority is grating to the educated ear.
Aside from stylistic concerns, though, I believe you're claiming that electronic circuits don't really mean anything. However, I'm not sure whether you're making the testable claim that no arrangement of electronic circuits will ever perform complicated cross-domain optimization better than a human, or the untestable claim that no electronic circuit will ever really be able to think.
↑ comment by JoshuaZ · 2012-11-23T19:02:30.832Z · LW(p) · GW(p)
When the telegraph was invented, people thought the mind was like the telegraph because... magic is why.
Because the telegraph analogy is actually a pretty decent analogy.
Building more and faster wires and better telegraph stations and connecting them in advanced topologies will not change the fact that you are living in a fantasy world.
What makes you think a sufficiently large number of organized telegraph lines won't act like a brain? Note that whether the number may be too large to actually fit on Earth is beside the point.
Replies from: noen
↑ comment by noen · 2012-11-24T05:29:41.542Z · LW(p) · GW(p)
"Because the telegraph analogy is actually a pretty decent analogy."
No, it isn't. Constructing analogies is for poets and fiction writers. Science does not construct analogies. The force on an accelerating mass isn't analogous to F=ma, it IS F=ma. If what you said were true, that neurons are like telegraph stations and their dendrites the wires, then it could not be true that neurons can communicate without a direct connection or "wire" between them. But neurons can communicate without any synaptic connection between them (see: "Neurons Talk Without Synapses"). Therefore the analogy is false.
"What makes you think a sufficiently large number of organized telegraph lines won't act like a brain?"
Because that is an example of magical thinking. It is not based on a functional understanding of the phenomenon. "If I just pour more of chemical A into solution B I will get a bigger and better reaction." We are strongly attracted to thinking like that. It's probably why it took us thousands of years to really get how to do science properly.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-11-24T16:31:40.855Z · LW(p) · GW(p)
No it isn't. Constructing analogies is for poets and fiction writers. Science does not construct analogies. The force on an accelerating mass isn't analogous to F=ma, it IS F=ma. If what you said is true, that neurons are like telegraph stations and their dendrites the wires then it could not be true that neurons can communicate without a direct connection or "wire" between them. Neurons can communicate without any synaptic connection between them (See: "Neurons Talk Without Synapses"). Therefore the analogy is false.
Science uses analogies all the time. For example, prior to the modern quantum mechanical model of the atom one had a variety of other models which were essentially analogies. The fact that analogies break down in some respects shouldn't be surprising: they are analogies not exact copies.
It might be useful to give as an example an analogy that is closely connected to my own thesis work of counting Artin representations. It turns out that this is closely connected to the behavior of the units (that is, elements that have inverses) in certain rings. For example, we can make the ring denoted as Z[2^(1/2)], which is formed by taking 1 and the square root of 2 and then taking all possible finite sums, differences and products of elements. Rings of this sort, where one takes all combinations of 1 with the square root of an integer, have been studied since the late 1700s. Now, it turns out that there are some not so obvious units in Z[2^(1/2)]. I claim that in this ring, 1+2^(1/2) is a unit.
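The unit claim is easy to verify mechanically, since (1 + √2)(−1 + √2) = 2 − 1 = 1. A small sketch (my own illustration, not from the thread) representing a + b√2 as the integer pair (a, b):

```python
# Verify that 1 + sqrt(2) is a unit in the ring Z[sqrt(2)].
# An element a + b*sqrt(2) is stored as the integer pair (a, b); the product
# (a + b√2)(c + d√2) expands to (ac + 2bd) + (ad + bc)√2.

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

u = (1, 1)        # 1 + sqrt(2)
v = (-1, 1)       # -1 + sqrt(2), the candidate inverse

print(mul(u, v))  # → (1, 0), i.e. the element 1, so 1 + sqrt(2) is a unit
```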
It turns out that if instead one takes a ring in the following way: take 1, and take 1/p for some prime p, and then form all products, sums and differences, one gets a ring that behaves in many ways similarly to the quadratic fields, but is much easier to analyze. The analogy breaks down pretty badly in some aspects, but in most ways is pretty good, to the point where large classes of results in one setting translate into almost identical results in the other setting (although the proofs are often different and require much more machinery in the quadratic case). So here we have in math, often seen as one of the most rigorous of disciplines, an analogy that is not just occurring at a pedagogical level but is actively helpful for research.
It is not based on a functional understanding of the phenomenon. "If I just pour more of chemical A into solution B I will get a bigger and better reaction." We are strongly attracted to thinking like that. It's probably why it took us thousands of years to really get how to do science properly.
You appear to be ignoring the bit where I noted "organized". But actually, even without that your statement is wrong. Often we do get critical masses where behavior becomes different on a large scale. Indeed, the term "critical mass" occurs precisely because this occurs with enriched uranium or with plutonium. And there are many other examples. For example, shove enough hydrogen together and you get a star.
comment by ChristianKl · 2012-11-29T14:31:46.925Z · LW(p) · GW(p)
It's nothing like a real brain simulation, of course, but illustrates that hardware to do this is getting very close.
They simulate model neurons. Those model neurons are less complex than the real neurons we have in our heads. The way real neurons change the number of ion channels on their membranes for long-term plasticity is neither fully understood nor easy to simulate.
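For a sense of how simplified such model neurons are, here is a minimal leaky integrate-and-fire point neuron, the textbook class of model used in large-scale simulations of this kind (the parameter values are illustrative, not IBM's; real neurons also adapt their membrane properties over time, which this sketch omits entirely):

```python
# A minimal leaky integrate-and-fire neuron: the membrane potential leaks,
# integrates input current, and emits a spike when it crosses a threshold.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the spike train (0/1 per time step) for a stream of inputs."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current      # leak, then integrate the input
        if v >= threshold:          # fire and reset on threshold crossing
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 1.2]))  # → [0, 0, 1, 0, 1]
```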
comment by noen · 2012-11-23T16:27:52.551Z · LW(p) · GW(p)
We have no idea how neurons actually work.
We have no idea how brains actually work.
We have no idea what consciousness is, how it works, or even if it does exist.
If you do not know how a radio works, or how a transistor works, or what the knobs and dials actually do, and cannot even build a simulation of how one might work, you are in no danger of building the ultimate radio to rule all others.
Having a bad idea does not make you closer to having a good idea.
comment by [deleted] · 2012-11-22T01:47:57.978Z · LW(p) · GW(p)
The Singularity approaches....