Whole Brain Emulation: Looking at Progress on C. elegans
post by jefftk (jkaufman) · 2011-10-29T15:21:09.499Z · LW · GW · Legacy · 85 comments
Being able to treat the pattern of someone's brain as software to be run on a computer, perhaps in parallel or at a large speedup, would have a huge impact, both socially and economically. Robin Hanson thinks it is the most likely route to artificial intelligence. Anders Sandberg and Nick Bostrom of the Future of Humanity Institute put out a roadmap for whole brain emulation in 2008, which covers a huge amount of research in this direction, combined with some scale analysis of the difficulty of various tasks.
Because the human brain is so large, and we are so far from having the technical capacity to scan or emulate it, it's difficult to evaluate progress. Some other organisms, however, have much smaller brains: the nematode C. elegans has only 302 cells in its entire nervous system. It is extremely well studied and well understood, having gone through heavy use as a research animal for decades. Since at least 1986 we've known the full neural connectivity of C. elegans, something that would take decades and a huge amount of work to get for humans. At 302 neurons, simulation has been within our computational capacity for at least that long. With 25 years to work on it, shouldn't we be able to 'upload' a nematode by now?
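For a rough sense of scale (every per-neuron number below is an illustrative assumption, not a measurement from any of the sources here):

```python
# Back-of-the-envelope cost of simulating 302 neurons in real time.
# All of these per-neuron numbers are assumptions for illustration only.
neurons = 302
compartments = 10        # assumed spatial detail per neuron
state_vars = 4           # e.g. voltage plus three channel gates per compartment
flops_per_update = 20    # assumed arithmetic per state variable per step
steps_per_sec = 40_000   # 0.025 ms timestep

total = neurons * compartments * state_vars * flops_per_update * steps_per_sec
print(f"{total:.1e} FLOP/s")  # ~1e10: modest for modern hardware
```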
Reading through the research, there's been some work on modeling subsystems and components, but I only find three projects that have tried to integrate this research into a complete simulation: the University of Oregon's NemaSys (~1997), the Perfect C. elegans Project (~1998), and Hiroshima University's Virtual C. elegans project (~2004). The latter two don't have web pages, but they did put out papers: [1], [2], [3].
Another way to look at this is to list the researchers who seem to have been involved with C. elegans emulation. I find:
- Hiroaki Kitano, Sony [1]
- Shugo Hamahashi, Keio University [1]
- Sean Luke, University of Maryland [1]
- Michiyo Suzuki, Hiroshima University [2][3]
- Takeshi Goto, Hiroshima University [2]
- Toshio Tsuji, Hiroshima University [2][3]
- Hisao Ohtake, Hiroshima University [2]
- Thomas Ferree, University of Oregon [4][5][6][7]
- Ben Marcotte, University of Oregon [5]
- Sean Lockery, University of Oregon [4][5][6][7]
- Thomas Morse, University of Oregon [4]
- Stephen Wicks, University of British Columbia [8]
- Chris Roehrig, University of British Columbia [8]
- Catharine Rankin, University of British Columbia [8]
- Angelo Cangelosi, Rome Institute of Psychology [9]
- Domenico Parisi, Rome Institute of Psychology [9]
This seems like a research area where you have multiple groups working at different universities, trying for a while, and then moving on. None of the simulation projects have gotten very far: their emulations are not complete and have some pieces filled in by guesswork, genetic algorithms, or other artificial sources. I was optimistic about finding successful simulation projects before I started looking, but now that I have come up empty, my estimate of how hard whole brain emulation would be has gone up significantly. While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.
Note: I later reorganized this into a blog post, incorporating some feedback from these comments.
Papers:
[1] The Perfect C. elegans Project: An Initial Report (1998)
[2] A Dynamic Body Model of the Nematode C. elegans With Neural Oscillators (2005)
[3] A model of motor control of the nematode C. elegans with neuronal circuits (2005)
[4] Robust spatial navigation in a robot inspired by C. elegans (1998)
[5] Neural network models of chemotaxis in the nematode C. elegans (1997)
[6] Chemotaxis control by linear recurrent networks (1998)
[7] Computational rules for chemotaxis in the nematode C. elegans (1999)
[8] A Dynamic Network Simulation of the Nematode Tap Withdrawal Circuit: Predictions Concerning Synaptic Function Using Behavioral Criteria (1996)
[9] A Neural Network Model of Caenorhabditis elegans: The Circuit of Touch Sensitivity (1997)
85 comments
Comments sorted by top scores.
comment by slarson · 2011-11-01T18:12:27.375Z · LW(p) · GW(p)
Hi all,
Glad there's excitement on this subject. I'm currently coordinating an open source project whose goal is to do a full simulation of C. elegans (http://openworm.googlecode.com). More on that in a minute.
If you are surveying past C. elegans simulation efforts, you should be sure not to leave out the following:
A Biologically Accurate 3D Model of the Locomotion of Caenorhabditis elegans, Roger Mailler, U. Tulsa http://j.mp/toeAR8
C. elegans Locomotion: An Integrated Approach -- Jordan Boyle, U. Leeds http://j.mp/fqKPEw
Back to Open Worm. We've just published a structural model of all 302 neurons (http://code.google.com/p/openworm/wiki/CElegansNeuroML) represented as NeuroML (http://neuroml.org). NeuroML allows the representation of multi-compartmental models of neurons (http://en.wikipedia.org/wiki/Biological_neuron_models#Compartmental_models). We are using this as a foundation to overlay the C. elegans connectivity graph and then add as much as we can find about the biophysics of the neurons. We believe this represents the first open source attempt to reverse-engineer the C. elegans connectome.
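To unpack "multi-compartmental" for readers: such models split each neuron into electrically coupled segments and integrate each segment's membrane equation. A minimal toy sketch of the idea in Python (passive membrane only, made-up parameters, not OpenWorm's actual model):

```python
import numpy as np

# Two passive compartments (soma, dendrite) joined by an axial conductance.
C, g_leak, E_leak, g_axial, dt = 1.0, 0.1, -65.0, 0.05, 0.1  # toy units

v = np.array([-65.0, -65.0])           # [soma, dendrite] membrane potentials (mV)
i_inj = np.array([0.5, 0.0])           # steady current injected into the soma only
for _ in range(2000):                  # 200 ms of simulated time
    i_leak = g_leak * (E_leak - v)
    i_axial = g_axial * (v[::-1] - v)  # current flowing in from the other compartment
    v += dt * (i_leak + i_axial + i_inj) / C
print(v)  # soma settles near -61 mV, dendrite near -64 mV: attenuated spread
```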
One of the comments mentioned Andrey Palyanov's mechanical model of C. elegans. He is part of our group and is currently focused on moving to a soft-body simulation framework rather than the rigid one they created here: http://www.youtube.com/watch?feature=player_embedded&v=3uV3yTmUlgo
Our first goal is to combine the neuronal model with this physical model in order to go beyond the biophysical realism achieved in previous studies. The physical model will then serve as the "read out" to make sure that the neurons are doing appropriate things.
Our roadmap for the project is available here: http://code.google.com/p/openworm/wiki/Roadmap
We have a mailing list here: http://groups.google.com/group/openworm
We have regular meetings on Google+ Hangout. If you want to help, we can surely find a way to include you. If you are interested, please let us know and we'll loop you in.
Cheers, Stephen
comment by atucker · 2011-10-30T04:06:38.833Z · LW(p) · GW(p)
David Dalrymple is also trying to emulate all of C. elegans, and was at the Singularity Summit.
http://syntheticneurobiology.org/people/display/144/26
↑ comment by davidad · 2011-10-31T09:46:47.229Z · LW(p) · GW(p)
That's me. In short form, my justification for working on such a project where many have failed before me is:
- The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
- What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next (a toy version of this loop is sketched after this list).
- With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
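A toy version of the perturb-and-model loop from the second bullet, with a simple linear network standing in for the worm (all numbers and the model structure here are illustrative; the real statistical framework would be far richer):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 302
W_true = rng.normal(0, 0.1, (n, n))   # hidden "true" dynamics standing in for biology

def run_experiment(stim):
    """Perturb the network and record a noisy response (a stand-in for the
    high-throughput optogenetic read/write rig)."""
    return W_true @ stim + rng.normal(0, 0.01, n)

# Perturb one neuron at a time and fit a model of the dynamics from the responses.
# A real loop would *choose* each next perturbation to be maximally informative.
stims = np.eye(n)
resps = np.array([run_experiment(s) for s in stims])
W_est = np.linalg.lstsq(stims, resps, rcond=None)[0].T
print(np.abs(W_est - W_true).max())   # small (~0.05): dynamics recovered from perturbations
```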
I'm a disciple of Kurzweil, and as such I'm prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.
↑ comment by gwern · 2011-10-31T17:04:10.128Z · LW(p) · GW(p)
In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.
How would you nail those two predictions down into something I could register on PredictionBook.com?
↑ comment by davidad · 2011-10-31T17:43:34.726Z · LW(p) · GW(p)
"A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08." 76% confidence
"A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01." 99.8% confidence
↑ comment by JoshuaZ · 2011-10-31T23:54:28.504Z · LW(p) · GW(p)
I'm curious where you'd estimate 50% chance of it existing and where you'd estimate 90%.
The jump from 76% to 99.8% is to my mind striking for a variety of reasons. Among other concerns, I suspect that many people here would put a greater than 0.2% chance on some sort of extreme civilization-disrupting event in that period. A 0.2% chance of a civilization-disrupting event in an 8-year period is roughly the same as a 2% chance of such an event occurring in the next hundred years, which doesn't look so unreasonable but for the fact that longer-term predictions should have more uncertainty. Overall, a 0.2% chance of disruption seems to be too high, and if your probability model is accurate then one should expect the functional simulation to arrive well before then. But note also that civilization collapsing is not the only thing that could block this sort of event. Events much smaller than a full-on collapse could do it, as could many more mundane issues.
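Spelling out the compounding behind that comparison (a sketch of the arithmetic only, not anyone's actual model):

```python
# A 0.2% chance per independent 8-year window, compounded over 100 years.
p_8yr = 0.002
p_100yr = 1 - (1 - p_8yr) ** (100 / 8)
print(f"{p_100yr:.2%}")  # ~2.47%, matching the "roughly 2%" above
```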
An estimate that high seems likely to be vulnerable to the planning fallacy.
Overall, your estimate seems to be too confident, the 2020 estimate especially so.
↑ comment by davidad · 2011-11-02T02:58:03.440Z · LW(p) · GW(p)
I would put something like a 0.04% chance on a neuroscience disrupting event (including a biology disrupting event, or a science disrupting event, or a civilization disrupting event). I put something like a 0.16% chance on uploading the nematode actually being so hard that it takes 8 years. I totally buy that this estimate is a planning fallacy. Unfortunately, being aware of the planning fallacy does not make it go away.
↑ comment by JoshuaZ · 2011-11-02T03:04:24.607Z · LW(p) · GW(p)
Unfortunately, being aware of the planning fallacy does not make it go away.
True. But there are ways to calibrate for it. It seems that subtracting off 10-15% for technological predictions works well. If one were being more careful, one would probably subtract not a fixed percentage but an amount that became less severe as the probability estimate of the event went up, so that one could still have genuinely high confidence levels. But if one is in doubt, simply reducing the probability until it doesn't look like the planning fallacy is likely is one way to approach things.
↑ comment by gwern · 2011-10-31T18:28:10.130Z · LW(p) · GW(p)
Bleh, I see I was again unclear about what I meant by nailing down - more precisely, how would one judge whatever has been accomplished by 2014/2020 as being 'complete' or 'functional'? Frequently there are edge cases (there's this paper reporting one group's abandoned simulation which seemed complete oh except for this wave pattern didn't show up and they had to simplify that...). But since you were good enough to write them:
↑ comment by davidad · 2011-11-02T03:04:10.478Z · LW(p) · GW(p)
Ah, I see. This is the sort of question that the X Prize Foundation has to wrestle with routinely. It generally takes a few months of work to take even a relatively clear problem statement and boil it down to a purely objective judging procedure. Since I already have an oracle for what it is I want to develop (does it feel satisfying to me?), and I'm not trying to incentivize other people to do it for me, I'm not convinced that I should do said work for the C. elegans upload project. I'm not even particularly interested in formalizing my prediction for futurological purposes since it's probably planning fallacy anyway. However, I'm open to arguments to the contrary.
↑ comment by Mati_Roy (MathieuRoy) · 2020-01-04T02:53:54.671Z · LW(p) · GW(p)
For your information, the above two links were judged as wrong
@davidad, any updates on your work?
↑ comment by gwern · 2011-11-02T03:45:59.802Z · LW(p) · GW(p)
I'm not convinced that I should do said work for the C. elegans upload project. I'm not even particularly interested in formalizing my prediction for futurological purposes since it's probably planning fallacy anyway.
Well, that's fine. I've made do with worse predictions than that.
↑ comment by jefftk (jkaufman) · 2011-10-31T18:40:23.494Z · LW(p) · GW(p)
(Which paper are you referring to?)
↑ comment by Paul Crowley (ciphergoth) · 2012-04-13T07:49:13.455Z · LW(p) · GW(p)
99.8% confidence - can I bet with you at those odds?
↑ comment by Shmi (shminux) · 2011-10-31T17:39:45.406Z · LW(p) · GW(p)
I expect to be finished with C. elegans in 2-3 years.
How would you nail those two predictions down into something I could register on PredictionBook.com?
Given the wild unwarranted optimism an average PhD student has in the first year or two of their research, I would expect that David will have enough to graduate 5 or 6 years after he started, but the outcome will not be anywhere close to the original goal, thus
90% that "No whole brain emulation of C. elegans by 2015"
Then again, he is not your average PhD student (the youngest person to ever start a graduate program at MIT -- take that, Sheldon!), so I hope to be proven wrong.
↑ comment by RulerOfMeasurement (awesomeideas) · 2021-02-17T17:38:15.803Z · LW(p) · GW(p)
What're you folks up to now? Have you updated because you were "Extremely Surprised"? What do the major challenges appear to be these days, and what year would you again be "Extremely Surprised … if this is still an open problem"?
↑ comment by Sickle_eye · 2012-01-16T14:27:30.662Z · LW(p) · GW(p)
Ha, I'll keep an eye out for your publications. I'm particularly interested in how far you'll have to go in gathering data, and what you'll be able to make out of what is already known. I expect that scans aiming for connectome description contain some neuron type data already, due to morphological differences in neurons. I don't know what sets of sensors are used for those scans, but maybe getting a broader spectrum could provide clues as to which neuron types occupy which space inside the connectome. SEM can, after all, determine the chemical composition of materials, can't it? As-is, this seems a pretty breakneck undertaking, but I wish you the best of luck.
In other news, there is, luckily, more and more work in this field: http://www.theverge.com/2011/11/16/2565638/mit-neural-connectivity-silicon-synapse
Predictions for silicon-based processors are pretty optimistic as well - Intel aims to achieve 10nm by 2014, and a similar date is pushed by nVidia. Past that date we may see some major leaps in available technology (or not), and the development of multi-processor computation algorithms is finally gaining momentum after von Neumann's Big Mistake.
Maybe Kurzweil's 2025 date for brain emulation is a bit overoptimistic, but I don't expect it to take much longer. I do think that the first dozen successful neural structure emulations will be a significant breakthrough, and we'll see a rapid expansion similar to the one in genetic sciences not so long ago.
↑ comment by jefftk (jkaufman) · 2011-10-31T11:58:35.137Z · LW(p) · GW(p)
"Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data."
This suggests that even a full 5nm SEM imaging pass over the brain would not capture enough information about the individual to emulate them.
↑ comment by davidad · 2011-10-31T17:37:47.769Z · LW(p) · GW(p)
It's worth noting that a 5nm SEM imaging pass will give you loads more information than a connectome, especially in combination with fancy staining techniques. It just so happens that most people doing SEM imaging intend to extract a connectome from the results.
That said, given the current state of knowledge, I don't think there's good reason to expect any one particular imaging technology currently known to man to be capable of producing a human upload. It may turn out that as we learn more about stereotypical human neural circuits, we'll see that certain morphological features are very good predictors of important parameters. It may be that we can develop a stain whose distribution is a very good predictor of important parameters. Since we don't even know what the important parameters are, even in C. elegans, let alone mammalian cortex, it's hard to say with confidence that SEM will capture them.
However, none of this significantly impacts my confidence that human uploads will exist within my lifetime. It is an a priori expected feature of technologies that are a few breakthroughs away that it's hard to say what they'll look like yet.
↑ comment by Jose Miguel Cruz y Celis (jose-miguel-cruz-y-celis) · 2021-07-30T14:12:03.681Z · LW(p) · GW(p)
Here we are now: how would you comment on the progress of C. elegans emulation in general, and of your particular approach?
↑ comment by atucker · 2011-10-31T16:28:27.742Z · LW(p) · GW(p)
What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
Am I hearing hints of Tononi here?
↑ comment by davidad · 2011-10-31T17:27:27.075Z · LW(p) · GW(p)
It's fair to say that I am confident Tononi is on to something (although whether that thing deserves the label "consciousness" is a matter about which I am less confident). However, Tononi doesn't seem to have any particular interest in emulation, nor do the available tools for interfacing to live human brains have anything like the resolution that I'd expect to be necessary to get enough information for any sort of emulation.
comment by Risto_Saarelma · 2011-10-30T01:15:29.393Z · LW(p) · GW(p)
Maybe a more troubling situation for the feasibility of human brain emulation would be if we had had nematode emulation working for a decade or more but had made no apparent headway to emulating the next level of still not very impressive neural complexity, like a snail. At the moment there's still the possibility we're just missing some kind of methodological breakthrough, and once that's achieved there's going to be a massive push towards quickly developing emulations for more complex animals.
↑ comment by slarson · 2011-11-01T21:29:59.708Z · LW(p) · GW(p)
I think you are right on. I would extend your comment a bit: we are not just missing a methodological breakthrough, we are not even really attempting to develop the necessary methods. The problem is not just scientific but also a matter of what is considered to be science worth funding.
comment by turchin · 2011-10-29T16:37:55.963Z · LW(p) · GW(p)
A. Palianov now works in Russia on a nematode brain emulation project: http://www.computerra.ru/interactive/589824
↑ comment by Matvey_Ezhov · 2011-10-31T16:10:35.582Z · LW(p) · GW(p)
Don't forget the vid: http://www.youtube.com/watch?v=3uV3yTmUlgo
comment by multifoliaterose · 2011-10-30T16:21:36.008Z · LW(p) · GW(p)
While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.
Does this assessment take into account the possibility of intermediate acceleration of human cognition?
↑ comment by jefftk (jkaufman) · 2011-10-31T14:08:11.012Z · LW(p) · GW(p)
It doesn't.
comment by jefftk (jkaufman) · 2011-11-01T17:37:40.476Z · LW(p) · GW(p)
I wrote to Ken Hayworth, a neuroscience researcher working on scanning and interested in whole brain emulation, and he wrote back:
I have not read much on the simulation efforts on C. elegans but I have talked several times to one of the chief scientists who collected the original connectome data and has been continuing to collect more electron micrographs (David Hall, in charge of www.wormatlas.org). He has said that the physiological data on neuron and synapse function in C. elegans is really limited and suggests that no one spend time simulating the worm using the existing datasets because of this. I.e. we may know the connectivity but we don't know even the sign of many synapses.
If you look at a system like the retina I would argue that we already have quite good models of its functioning and thus it is a perfect ground for testing emulation from known connectivity.
So the short answer is that I think it may be far easier to emulate a well characterized and mapped part of the mammalian brain than it is to emulate the worm despite its smaller size.
↑ comment by jefftk (jkaufman) · 2011-11-01T18:22:36.014Z · LW(p) · GW(p)
Further exchange:
Me:
So even a nanoscale SEM pass over the whole brain wouldn't be enough unless we could find some way to visually read off the sign of a synapse, perhaps with a stain, perhaps by learning what different types of neurons look like, perhaps by something not yet discovered?
Hayworth:
That is right, but those tell-tale signs are well known for certain systems (like the retina) already, and will become more clear for others once large scale EM imaging combined with functional recording becomes routine.
↑ comment by slarson · 2011-11-01T21:14:34.689Z · LW(p) · GW(p)
I would respectfully disagree with Dr. Hayworth.
I would challenge him to show a "well characterized and mapped out part of the mammalian brain" that has a fraction of the detail that is known in C. elegans already. Moreover, the prospect of building a simulation requires that you can constrain the inputs and the outputs to the simulation. While this is a hard problem in C. elegans, it's orders of magnitude more difficult to do well in a mammalian system.
There is still no retina connectome to work with (C. elegans has it). There are debates about cell types in retina (C. elegans has unique names for all cells). The gene expression maps of retina are not registered into a common space (C. elegans has that). The ability to do calcium imaging in retina is expensive (orders of magnitude easier in C. elegans). Genetic manipulation in mouse retina is expensive and takes months to produce specific mutants (you can feed C. elegans RNAi and make a mutant immediately).
There are methods now, along the lines of GFP (http://en.wikipedia.org/wiki/Green_fluorescent_protein), to "read the signs of synapses". There is just very little funding interest from government funding agencies to apply them to C. elegans. David Hall is one of the few who is pushing this kind of mapping work in C. elegans forward.
What confuses this debate is that unless you study neuroscience deeply it is hard to tell the "known unknowns" apart from the "unknown unknowns". Biology isn't solved, so there are a lot of "unknown unknowns". Even with that, there are plenty of funded efforts in biology and neuroscience to do simulations. However, in C. elegans there are likely to be many fewer "unknown unknowns" because we have a lot more comprehensive data about its biology than we do for any other species.
Building simulations of biological systems helps to assemble what you know, but can also allow you to rationally work with the "known unknowns". The "signs of synapses" is an example of known unknowns -- we can fit those into a simulation engine without precise answers today and fill them in tomorrow. The statement that no one should start simulating the worm based on the current data has no merit when you consider that there is a lot to be done just to get to a framework that has the capacity to organize the "known unknowns" so that we can actually do something useful with them once we have them. More importantly, it makes the gaps a lot more clear. Right now, in the absence of any C. elegans simulations, data are being generated without a focused purpose of feeding into a global computational framework of understanding C. elegans behavior. I would argue that the field would be much better off collecting data in the context of filling in the gaps of a simulation, rather than everyone working at cross purposes.
That's why we are working on this challenge of building not just a C. elegans simulation, but a general framework for doing so, over at the Open Worm project (http://openworm.googlecode.com).
comment by Paul Crowley (ciphergoth) · 2011-10-31T08:22:52.889Z · LW(p) · GW(p)
While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.
Unbounded Scales, Huge Jury Awards, & Futurism:
I observe that many futuristic predictions are, likewise, best considered as attitude expressions. Take the question, "How long will it be until we have human-level AI?" The responses I've seen to this are all over the map. On one memorable occasion, a mainstream AI guy said to me, "Five hundred years." (!!)
Now the reason why time-to-AI is just not very predictable, is a long discussion in its own right. But it's not as if the guy who said "Five hundred years" was looking into the future to find out. And he can't have gotten the number using the standard bogus method with Moore's Law. So what did the number 500 mean?
As far as I can guess, it's as if I'd asked, "On a scale where zero is 'not difficult at all', how difficult does the AI problem feel to you?" If this were a bounded scale, every sane respondent would mark "extremely hard" at the right-hand end. Everything feels extremely hard when you don't know how to do it. But instead there's an unbounded scale with no standard modulus. So people just make up a number to represent "extremely difficult", which may come out as 50, 100, or even 500. Then they tack "years" on the end, and that's their futuristic prediction.
"How hard does the AI problem feel?" isn't the only substitutable question. Others respond as if I'd asked "How positive do you feel about AI?", only lower numbers mean more positive feelings, and then they also tack "years" on the end. But if these "time estimates" represent anything other than attitude expressions on an unbounded scale with no modulus, I have been unable to determine it.
↑ comment by jefftk (jkaufman) · 2011-10-31T11:51:57.962Z · LW(p) · GW(p)
My reasoning for saying hundreds of years was that this very simple subproblem has taken us over 25 years. Say we'll solve it in another ten. The amount of discovery and innovation needed to simulate a nematode seems maybe 1/100th as much as for a person. Naively this would say 100 × (25 + 10) years. More people would probably work on this if we had initial successes and it looked practical, though. Maybe this gives us a 10x boost? Which still is (100/10) × (25 + 10) or ~350 years.
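Spelled out as a calculation (all four inputs are just the rough guesses named above):

```python
years_so_far = 25           # time already spent on the nematode
years_remaining = 10        # guess at time still needed
relative_difficulty = 100   # human WBE vs. nematode: a guess
interest_speedup = 10       # more researchers after early successes: a guess

print((relative_difficulty / interest_speedup) * (years_so_far + years_remaining))  # 350.0
```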
Very wide error bars, though.
↑ comment by orthonormal · 2012-03-28T15:37:46.075Z · LW(p) · GW(p)
You must have been very surprised by the progress pattern of the Human Genome Project, then. It's as if 90% of the real work was about developing the right methods rather than simply plugging along at the initial slow pace.
↑ comment by jefftk (jkaufman) · 2012-03-28T15:46:52.723Z · LW(p) · GW(p)
I'm not sure what you're responding to. I wasn't trying to say that the human brain was only 100x the size or complexity of a nematode's brain-like-thing. It's far larger and more complex than that. I was saying that even once we have a nematode simulated, we still have done only ~1% of the "real work" of developing the right methods.
↑ comment by orthonormal · 2012-03-28T15:48:23.018Z · LW(p) · GW(p)
Even once we have a nematode simulated we still have done only ~1% of the "real work" of developing the right methods.
I understand that this is your intuition, but I haven't seen any good evidence for it.
↑ comment by jefftk (jkaufman) · 2012-03-28T16:03:57.252Z · LW(p) · GW(p)
The evidence I have that the methods developed for the nematode are dramatically insufficient to apply to people:
- nematodes are transparent
- they're thin and so easy to get chemicals to all of them at once
- their inputs and outputs are small enough to fully characterize
- their neural structure doesn't change at runtime
- while they do learn, they don't learn very much
It's not strong evidence, I agree. I'd like to get a better estimate here.
↑ comment by orthonormal · 2012-04-13T17:06:04.543Z · LW(p) · GW(p)
This lecture on uploading C. elegans is very relevant.
(In short, biophysicists have known where the neurons are located for a long time, but they've only just recently developed the ability to analyze the way they affect one another, and so there's fresh hope of "solving" the worm's brain. The new methods are also pretty awesome.)
↑ comment by orthonormal · 2012-03-28T16:10:08.371Z · LW(p) · GW(p)
My intuition is that most of the difficulty comes from the complexity of the individual cells: we don't understand nearly all of the relevant things they do that affect neural firing. This is basically independent of how many neurons there are or how they're wired, so I expect that correctly emulating a nematode brain would only happen when we're quite close to emulating larger brains.
If the "complicated wiring" problem were the biggest hurdle, then you'd expect a long gap between emulating a nematode and emulating a human.
comment by Douglas_Knight · 2011-10-29T23:34:57.958Z · LW(p) · GW(p)
Are these projects about emulation? The Oregon and Rome projects seem to treat the brain as a black box, rather than taking advantage of Brenner's connectome. I'm not sure about the others. That doesn't tell us much about the difficulty of emulation, except that they thought their projects were easier.
Brenner's connectome is not enough information. At the very least, you need to know whether synapses are excitatory or inhibitory. This pretty much needs to be measured, which is rather different than what Brenner did. It might not require a lot of measurement: once you've measured a few, maybe you can recognize the others. Or maybe not.
↑ comment by jefftk (jkaufman) · 2011-10-30T03:05:35.590Z · LW(p) · GW(p)
The Oregon one looks to me like it was about emulation: "each of the 302 neurons will be implemented according to available anatomical and physiological data."
About the Rome one, I think you may be right.
Is the nematode too small to measure whether synapses are excitatory or inhibitory?
↑ comment by Douglas_Knight · 2011-10-30T03:14:30.659Z · LW(p) · GW(p)
I was basing my judgement on the Oregon papers. I suppose that there may be emulation attempts lurking behind other non-emulation papers.
↑ comment by jefftk (jkaufman) · 2011-10-30T03:54:00.980Z · LW(p) · GW(p)
It's also possible they only proposed to do emulation, but never got funded.
comment by Lapsed_Lurker · 2011-10-29T22:25:31.785Z · LW(p) · GW(p)
How well can a single neuron or a few neurons be simulated? If we have good working models of those, which behave as we see in life, then that means WBE might be harder, if no such models yet exist, then the failures to model a 302-neuron system are not such good evidence for difficulty.
↑ comment by Douglas_Knight · 2011-10-29T23:57:45.741Z · LW(p) · GW(p)
There are many models of neurons, at many levels of detail. I think that the NEURON program uses the finest detail of any existing software.
I see the primary purpose of simulating a nematode as measuring how well such models actually work. If they do work, it also lets us estimate the amount of detail needed, but the first question is whether these models are biologically realistic. An easier task would be to test whether the models accurately describe a bunch of neurons in a petri dish. The drawback of such an approach is that it is not clear what it would mean for a model to be adequate for that purpose, whereas in an organism we know what constitutes biologically meaningless noise. Also, realistic networks probably suppress certain kinds of noise.
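For a flavor of the level of detail NEURON works at, here is a minimal single-compartment Hodgkin-Huxley cell in its Python interface (an illustrative sketch assuming NEURON is installed; not connected to any nematode data):

```python
from neuron import h
h.load_file("stdrun.hoc")          # load NEURON's standard run system

soma = h.Section(name="soma")
soma.L = soma.diam = 20            # 20 um soma
soma.insert("hh")                  # squid-axon Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))         # current clamp at the section midpoint
stim.delay, stim.dur, stim.amp = 5, 20, 0.3   # ms, ms, nA

v = h.Vector()
v.record(soma(0.5)._ref_v)         # record membrane potential over time

h.finitialize(-65)                 # initialize to -65 mV
h.continuerun(40)                  # run 40 ms
print(v.max())                     # peaks above 0 mV: the cell spikes
```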
↑ comment by Lapsed_Lurker · 2011-10-30T00:15:49.316Z · LW(p) · GW(p)
When I googled for information on neuron emulation, that site came up as the first hit. I've used the search box to look for 'elegans' and 'nematode' - both 0 hits, so I figure no-one is discussing that stuff on their forum.
comment by Pfft · 2011-10-30T21:55:07.797Z · LW(p) · GW(p)
While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.
What kind of reasoning leads you to this time estimate? Hundreds of years is an awfully long time -- consider that two hundred years ago nobody even knew that cells existed, and computers of any kind didn't exist.
From your description of the state of the field, I guess we won't see an uploaded nematode very soon, but getting there in a decade or two doesn't seem impossible. It seems a bit counter-intuitive to me that learning "no nematode now, but maybe in ten years" would move the point estimate for human uploads by several centuries. Because, what if we had happened to do this literature survey ten years later, and found out that indeed nematodes had been successfully uploaded? If the estimate is sensitive to very small changes like that, it must be very uncertain.
↑ comment by Logos01 · 2011-10-31T09:04:34.375Z · LW(p) · GW(p)
What kind of reasoning leads you to this time estimate? Hundreds of years is an awfully long time
Humans are notoriously poor at providing estimates of probability, and our ability to accurately predict timescales that are less than immediate is just as poor. It seems likely that this "hundreds of years" was shorthand for "there does not seem to be a direct roadmap to achieving this goal from where we currently are, and therefore I must assign an arbitrarily distant point in the future as its most-likely-to-be-achieved date."
This is purely guesswork / projection on my part, however.
comment by Jordan · 2011-10-30T18:54:16.186Z · LW(p) · GW(p)
I was disappointed when I first looked into the C. elegans emulation progress. Now I'm not so sure it's a bad sign. It seems to me that at only 302 neurons the nervous system is probably far from the dominant system of the organism. Even with a perfect emulation of the neurons, it's not clear to me if the resulting model would be meaningful in any way. You would need to model the whole organism, and that seems very hard.
Contrast that with a mammal, where the brain is sophisticated enough to do things independently of feedback from the body, and where we can see these large-scale neural patterns with scanners. If we uploaded a mouse brain, presumably we could get a rough idea that the emulation was working without ever hooking it up to a virtual body.
↑ comment by Douglas_Knight · 2011-10-31T04:50:42.673Z · LW(p) · GW(p)
The lobster stomach ganglion, 30 neurons but a ton of synapses, might be better for this, since its input and output are probably cleaner.
↑ comment by slarson · 2011-11-01T21:25:03.146Z · LW(p) · GW(p)
Modeling work on the lobster stomach ganglion is going on at Brandeis, and what they are finding is important: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2913134&tool=pmcentrez&rendertype=abstract
Given the results they are finding, and building on their methods, it is not inappropriate to start thinking one level up, to C. elegans.
↑ comment by khafra · 2011-10-31T12:06:43.050Z · LW(p) · GW(p)
Also because there's fictional prior art?
↑ comment by bogdanb · 2011-10-31T14:59:08.662Z · LW(p) · GW(p)
Maybe there’s fictional prior art because the lobster stomach might be better.
↑ comment by Paul Crowley (ciphergoth) · 2011-11-01T08:02:21.924Z · LW(p) · GW(p)
If you're talking about Charlie Stross's Lobsters, yes this was inspired by Henry Abarbanel's work. He ran around the office going "They're uploading lobsters in San Diego!"
↑ comment by jefftk (jkaufman) · 2011-10-31T11:46:01.222Z · LW(p) · GW(p)
"You would need to model the whole organism, and that seems very hard."
There are only ~100 muscle cells. People are trying to model the brain-body combination, but that doesn't sound unreasonably hard to me.
comment by jefftk (jkaufman) · 2011-11-03T11:53:52.388Z · LW(p) · GW(p)
I've reorganized this into a blog post incorporating what I've learned in the comments here.
↑ comment by Douglas_Knight · 2011-11-10T21:16:15.260Z · LW(p) · GW(p)
Could you be explicit about what you learned? I can't tell from comparing the two posts.
↑ comment by jefftk (jkaufman) · 2011-11-10T21:55:35.446Z · LW(p) · GW(p)
Most of the blog post version is just reorganization and context for a different audience, but there are some changes reflecting learning about who is working on this. Specifically, I didn't know before about the OpenWorm project, Stephen Larson, David Dalrymple, or the 2009 and 2010 body model papers. While I think in a few years I'll be able to update my predictions based on their experiences, this new information about people currently working on the project didn't affect my understanding of how difficult or far away nematode simulation or WBE is.
comment by Humbug · 2011-10-29T19:25:13.208Z · LW(p) · GW(p)
None of the simulation projects have gotten very far...this looks to me like it is a very long way out, probably hundreds of years.
Couldn't you say the same about AGI projects? It seems to me that one of the reasons that some people are relatively optimistic about computable approximations to AIXI, compared to brain emulations, is that progress on EMs is easier to quantify.
comment by Hyena · 2011-10-29T17:51:45.589Z · LW(p) · GW(p)
This depends on whether the problem is the basic complexity of modeling a neural network or learning how to do it. If the former, then we may be looking at a long time. But if it's the latter, then we really just need more attempts, successful or not, to learn from, and a framework which allows a leap in understanding could arrive.
↑ comment by Logos01 · 2011-10-31T09:08:28.743Z · LW(p) · GW(p)
But if it's the latter, then we really just need more attempts,
I don't know that repeatedly doing the wrong thing will help inform us how to do the right thing. This seems counterfactual to me. Certainly it informs us what the wrong thing is, but... without additional effort to more finely emulate the real-time biochemical actions of neurons, it seems that emulating what we already know won't lead us to deeper insights as to what we don't. The question becomes: how do we discern that missing information?
Emulations are certainly a vital part of that process, however: without them we cannot properly gauge how close we are to 'knowing enough for government work'.
↑ comment by Hyena · 2011-10-31T15:22:42.501Z · LW(p) · GW(p)
Everything that fails does so for a reason and in a way. In engineering, mere bugs aside, everything fails at the frontier of our knowledge, and our failures carry information about the shape of that frontier back to us. We learn what problems need to be overcome and can, with many failures, generalize what the overall frontier is like, connect its problems, and create concepts which solve many at once.
↑ comment by Logos01 · 2011-10-31T17:32:19.165Z · LW(p) · GW(p)
Everything that fails does so for a reason and in a way.
Oh, absolutely. But if they keep failing for the same reason and in the same way, re-running the simulations doesn't get you any unique or novel information. It only reinforces what you already know.
I acknowledged this as I said, "Emulations are certainly a vital part of that process, however: without them we cannot properly gauge how close we are to 'knowing enough for government work'."
↑ comment by Hyena · 2011-10-31T20:08:18.694Z · LW(p) · GW(p)
I think the problem here is that you think that each instance of a simulation is actually an "attempt". A simulation is a model of some behavior; unlike climbing Everest (which I did in 2003), taming Pegasus (in -642) or repelling the Golden Horde (1257 - 1324, when I was called away on urgent business in Stockholm), each run of a model is a trial, not an attempt. Each iteration of the model is an attempt, as is each new model.
We need more attempts. We learn something different from each one.
↑ comment by Logos01 · 2011-11-01T05:00:09.079Z · LW(p) · GW(p)
I think the problem here is that you think that each instance of a simulation is actually an "attempt".
No, the problem here is more that I don't believe that it is any longer feasible to run a simulation and attempt to extract new information without direct observation of the simulated subject-matter.
We need more attempts. We learn something different from each one.
Yes, absolutely. But I don't believe we can do anything other than repeat the past by building models based on modeled output without direct observation at this time.
↑ comment by Hyena · 2011-11-01T13:45:15.306Z · LW(p) · GW(p)
So why not just say "to clarify, I believe that we do not have enough knowledge of C. elegans' neuroanatomy to build new models at this time. We need to devote more work to studying that before we can build newer models"? That's a perfectly valid objection, but it contradicts your original post, which states that C. elegans is well understood neurologically.
If you believe that we cannot build effective models "without [additional] direct observation", then you have done two things: you've objected to the consensus that C. elegans is well understood and provided a criterion (an effective upload model of its neuroanatomy) for judging how well we understand it.
↑ comment by Logos01 · 2011-11-01T18:14:20.625Z · LW(p) · GW(p)
That's a perfectly valid objection, but it contradicts your original post, which states that C. elegans is well understood neurologically.
My original post stated, "without additional effort to more finely emulate the real-time biochemical actions of neurons, it seems that emulating what we already know won't lead us to deeper insights as to what we don't."
Your assertion (in-line quoted, this comment) is false. I said what I meant the first time 'round: we don't know enough about how neurons work yet and without that understanding any models we build now won't yield us any new insights into how they do.
This, furthermore, has nothing to do with C. elegans in specific.
you've objected to the consensus that C. elegans is well understood and provided a criterion (an effective upload model of its neuroanatomy) for judging how well we understand it.
Since the goal of these models is to emulate the behavior of C. elegans, and the models do not yet do this, it is clear that one of two things is true: either we do not understand C. elegans or we do not understand neurobiology sufficiently to achieve this goal.
I have made my assertion as to which this is, I have done so quite explicitly, and I have been consistent and clear in this from my first post in this thread.
So where's the confusion?
↑ comment by Hyena · 2011-11-01T21:35:57.214Z · LW(p) · GW(p)
"The first time around" for the OPer is the OP, from which it is absent and in which you identify the problem as incomplete attempts.
↑ comment by Logos01 · 2011-11-02T03:58:06.469Z · LW(p) · GW(p)
I am not jkaufman. So I don't know that I follow what you're trying to say here. This means that either you or I are confused. In either case, no successful communication is currently occurring.
Could you please clarify what it is you're trying to say?
↑ comment by Hyena · 2011-11-02T04:19:30.701Z · LW(p) · GW(p)
Nothing to clarify, actually. I apologize; I've been busy and the header switch occasioned by using the context link threw me. It changes the title to "XXXX comments on YYYY". Not being someone who comments consistently, this tends to make me mistake who originally posted because it plants an association between the person I'm replying to and the title of the post.
↑ comment by Logos01 · 2011-11-02T04:35:14.371Z · LW(p) · GW(p)
Ahh. Much is explained. :)
Well, hopefully this incident will serve to reinforce this particular tidbit and prevent you from having a repeat occurrence.
↑ comment by Hyena · 2011-11-02T16:11:10.116Z · LW(p) · GW(p)
Maybe. I read a massive quantity of material daily, on the order of 80-90,000 words some weeks. This is combined with comment across a variety of forums and fields. I rely heavily on cues from websites to keep straight who I'm talking to and that I'm even on the right submission forms when I say something.
comment by spuckblase · 2011-10-31T12:47:58.882Z · LW(p) · GW(p)
Typo in the title!
↑ comment by jefftk (jkaufman) · 2011-10-31T14:07:18.721Z · LW(p) · GW(p)
fixed
comment by DavidPlumpton · 2011-10-30T07:19:12.697Z · LW(p) · GW(p)
IBM claims to be doing a cat brain equivalent simulation at the moment, albeit 600 times slower and not covering all parts of the brain.
↑ comment by Paul Crowley (ciphergoth) · 2011-10-31T08:11:00.012Z · LW(p) · GW(p)
Henry Markram of the Blue Brain Project described this claim as a "hoax and a PR stunt", "shameful and unethical", and "mass deception of the public".
comment by [deleted] · 2015-02-27T19:31:33.964Z · LW(p) · GW(p)
Any new developments on the C. elegans simulation in the past 3+ years?
↑ comment by BrandonReinhart · 2015-10-22T03:08:01.327Z · LW(p) · GW(p)
I'm curious about the same thing as [deleted].