We Haven't Uploaded Worms
post by jefftk (jkaufman) · 2014-12-27T11:44:45.411Z
In theory you can upload someone's mind onto a computer, allowing them to live forever as a digital form of consciousness, just like in the Johnny Depp film Transcendence.
But it's not just science fiction. Sure, scientists aren't anywhere near close to achieving such a feat with humans (and even if they could, the ethics would be pretty fraught), but now an international team of researchers has managed to do just that with the roundworm Caenorhabditis elegans.
—Science Alert
Uploading an animal, even one as simple as C. elegans, would be very impressive. Unfortunately, we're not there yet. What the people working on OpenWorm have done instead is to build a working robot based on the C. elegans connectome and show that it can do some things the worm can do.
The C. elegans nematode has only 302 neurons, and every individual has the same fixed wiring pattern. We've known this pattern, or connectome, since 1986. [1] In a simple model, each neuron has a threshold and will fire if the weighted sum of its inputs is greater than that threshold. This means knowing the connections isn't enough: we also need to know the weights and thresholds. Unfortunately, we haven't figured out a way to read these values off of real worms. Suzuki et al. (2005) [2] ran a genetic algorithm to learn values for these parameters that would give a somewhat realistic worm, and demonstrated various wormlike behaviors in software. The recent stories about the OpenWorm project cover them doing something similar in hardware. [3]
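To make the simple model concrete, here is a minimal sketch in Python. The network, weights, and thresholds below are made up for illustration; they are not measured C. elegans values:

```python
import numpy as np

def step(weights, thresholds, activity):
    """One synchronous update of a simple threshold network:
    a neuron fires (1.0) when the weighted sum of its inputs
    exceeds its threshold."""
    return (weights @ activity > thresholds).astype(float)

# Toy 3-neuron network. The connectome tells us which entries of
# `weights` may be nonzero, but not their values, and nothing
# currently measures `thresholds` on a real worm at all.
weights = np.array([[0.0, 0.8, 0.0],
                    [0.5, 0.0, -0.7],
                    [0.0, 1.2, 0.0]])
thresholds = np.array([0.5, 0.3, 0.9])

activity = np.array([1.0, 0.0, 0.0])
for _ in range(5):
    activity = step(weights, thresholds, activity)
    print(activity)
```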
To see why this isn't enough, consider that nematodes are capable of learning. Sasakura and Mori (2013) [5] provide a reasonable overview. For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections; they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.
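We don't know the worm's actual update rule; the Hebbian sketch below is a standard toy rule, not C. elegans biology, and just illustrates the kind of mechanism the fixed-weight simulations leave out:

```python
import numpy as np

def hebbian_update(weights, activity, learning_rate=0.01):
    """Illustrative plasticity rule: strengthen connections between
    neurons that are active together. The worm's real plasticity
    mechanisms are not known at this level of detail."""
    return weights + learning_rate * np.outer(activity, activity)
```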
If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to a stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to the simulated stimulus the same way their physical versions had, that would be good progress. Additionally, you would want to demonstrate that similar learning was possible in the simulated environment.
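As a sketch of how that test might be structured (every function here is a hypothetical placeholder for a lab or simulation step; none of this exists today):

```python
# Hypothetical placeholders: `scan` would read weights/thresholds off
# a physical worm, and `simulate` would run the resulting model.
# Here they are stubbed out so the harness runs end to end.

def scan(worm):
    return worm  # stub: pretend the scan captures everything relevant

def simulate(model, stimulus):
    return model["trained_response"]  # stub: echo the trained behavior

def upload_preserves_training(group_a, group_b, stimulus):
    responses_a = [simulate(scan(w), stimulus) for w in group_a]
    responses_b = [simulate(scan(w), stimulus) for w in group_b]
    # A successful upload should reproduce the behavioral difference
    # that training created in the physical worms.
    return (all(r == "seek" for r in responses_a)
            and all(r == "avoid" for r in responses_b))

group_a = [{"trained_response": "seek"}] * 5   # trained toward the stimulus
group_b = [{"trained_response": "avoid"}] * 5  # trained away from it
print(upload_preserves_training(group_a, group_b, stimulus="25C"))
```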
(In a 2011 post on what progress with nematodes might tell us about uploading humans, I looked at some of this research before. Since then not much has changed in nematode simulation. Moore's law, however, looks to be doing much worse in 2014 than it did in 2011, which makes the prospects for whole brain emulation substantially worse.)
I also posted this on my blog.
[1] The Structure of the Nervous System of the Nematode Caenorhabditis elegans, White et al. (1986).
[2] A Model of Motor Control of the Nematode C. elegans with Neuronal Circuits, Suzuki et al. (2005).
[3] It looks like, instead of learning weights, Busbice just set them all to +1 (excitatory) or -1 (inhibitory). It's not clear to me how they knew which connections were which; my best guess is that they're using the "what happens to work" details from [2]. Their full writeup is [4].
[4] The Robotic Worm, Busbice (2014).
[5] Behavioral Plasticity, Learning, and Memory in C. elegans, Sasakura and Mori (2013).
19 comments
comment by Andy_McKenzie · 2014-12-25T16:54:59.487Z · LW(p) · GW(p)
Agreed, and this is very similar to what I described in my comment on the other post about this here.
Where I disagree is the sole focus on connection strengths or weights. They are certainly important, but synapses are unlikely to be adequately described by just one parameter. Further, local effects like neuropeptides likely play a role.
↑ comment by jefftk (jkaufman) · 2014-12-25T19:10:16.340Z · LW(p) · GW(p)
You're right: connection strengths are probably not enough on their own. On the other hand, they're almost certainly necessary and no one has figured out how to read them off of synapses.
comment by V_V · 2014-12-26T14:20:10.608Z · LW(p) · GW(p)
Apparently, the neurons of C. elegans don't even generate action potentials the way mammalian neurons do; their activity is more complicated and fundamentally analog (source).
The linear threshold spiking neuron model used by Busbice may roughly approximate the activity of mammalian neurons, but it is likely a bad model of C. elegans neurons.
He's lucky that he managed to make the robot perform these simple Braitenberg-like behaviors.
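(For context: a Braitenberg vehicle produces seemingly goal-directed behavior from direct, fixed sensor-to-motor wiring. A minimal sketch, not tied to Busbice's actual implementation:)

```python
def braitenberg_step(left_sensor, right_sensor, gain=1.0):
    """Crossed excitatory wiring: each motor is driven by the
    opposite side's sensor, so the vehicle turns toward the
    stronger stimulus. Goal-seeking behavior from two fixed
    connections, no learning involved."""
    return gain * right_sensor, gain * left_sensor  # (left_motor, right_motor)

print(braitenberg_step(0.2, 0.9))  # stronger stimulus on the right: left motor spins faster
```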
comment by Raemon · 2019-06-25T00:22:06.636Z · LW(p) · GW(p)
Anyone know if there have been updates to this in the past few years? I made a very brief (30-second) attempt to search for information on it but had trouble figuring out what question to ask Google.
↑ comment by jefftk (jkaufman) · 2019-07-15T17:50:26.959Z · LW(p) · GW(p)
I just tried https://scholar.google.com/scholar?as_ylo=2015&q=c+elegans+emulation and don't see anything relevant. I did find Why is There No Successful Whole Brain Simulation (Yet)? from 2019. I've only skimmed it and its reference list, but if there had been something new here I think they would have cited it.
I think we're still stuck on both (a) we can't read weights from real worms (and so can only model a generic worm) and (b) we don't understand how weights are changed in real worms (and so can't model learning).
comment by Paul Crowley (ciphergoth) · 2014-12-28T08:59:33.359Z · LW(p) · GW(p)
I discussed this with a professor of neuroscience on Facebook.
↑ comment by ShardPhoenix · 2014-12-30T08:51:35.738Z · LW(p) · GW(p)
Unfortunately it seems that the inferential gap was not crossed.
↑ comment by Paul Crowley (ciphergoth) · 2015-01-01T18:10:31.720Z · LW(p) · GW(p)
In which direction? :) and do you think you can say anything about what was said in a way that would help close the gap? Thanks!
↑ comment by ShardPhoenix · 2015-01-02T00:02:59.538Z · LW(p) · GW(p)
As arundelo said, it was frustrating how he wouldn't commit to specific predictions. I get the feeling he had some philosophical idea about "but is a copy of me really me?" that was influencing him even when you were trying to keep things on a more concrete level. (This isn't totally unreasonable on his part, because there are sometimes disputes over this issue even here.) Aside from the unlikely-to-be-productive tactic of telling him to read the sequences, perhaps you could have emphasized that you were interested in the objective behaviour and not the "identity" or subjective experience of the worm? I think you were trying to do that, but maybe the contrast could have been more explicit?
Basically, it seems he was jumping ahead from thinking about worm uploads to thinking about human mind uploads and getting tangled up in classical philosophical dilemmas as a result.
↑ comment by hairyfigment · 2015-01-02T01:49:15.453Z · LW(p) · GW(p)
Actually, most of that seems like a straightforward false dichotomy (between the "connectome" alone and a dynamic model with constant activity). Or I may misunderstand how he's using the phrase "information flow"; e.g., it may stand for some technical point that Paul and I don't understand at all.
↑ comment by arundelo · 2015-01-01T20:38:06.638Z · LW(p) · GW(p)
I was pretty frustrated by the neuroscience prof's reluctance to speak in terms of predictions -- of what he'd expect to see as the result of some particular experiment -- but you did great at politely pushing him in that direction, and I can't think how you could have done better.
comment by ike · 2014-12-25T22:58:11.750Z · LW(p) · GW(p)
So is the method the worm uses for learning known? If we knew the approximate current weights and the way those update, what else would be needed?
If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their pysical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.
That is a way to prove that the worm was uploaded. But how would you actually do that? What other info is needed to get to that, and how can we get that? Why can't we test when the neurons fire in order to get the weights out of that? (I get it's more complicated than that or it would have been done, but don't get why.)
Also, typo: pysical to physical. Edit: looks fixed, good.
↑ comment by Andy_McKenzie · 2014-12-26T18:59:10.154Z · LW(p) · GW(p)
Basically this is electrophysiology research on C. elegans. Most of the research being done, AFAIK, is hypothesis testing and doesn't systematically measure all of the connection strengths at once. Plus you would have the correlation vs. causation problem even if you did measure them all at once, which is why davidad wanted to do optogenetics, but again AFAIK that didn't actually get done.
Bottom line: this research is technically difficult and like most research topics is not well funded.
↑ comment by jefftk (jkaufman) · 2014-12-27T12:00:52.866Z · LW(p) · GW(p)
why davidad wanted to do optogenetics
More details: he was planning to engineer a nematode whose neurons give off light when they activate and are light-sensitive, so you can activate individual neurons with light. This lets you see which neurons fire in response to others. He wrote:
In short form, my justification for working on such a project where many have failed before me is:
- The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
- What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
- With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks. I'm a disciple of Kurzweil, and as such I'm prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.
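The loop described above is essentially active system identification. Here is a toy, purely illustrative version in Python, with random rather than adaptively chosen perturbations and a plain linear fit standing in for the statistical modeling framework he describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" network we want to characterize (stand-in for the worm).
n = 5
true_weights = rng.normal(size=(n, n))

def probe(stimulus):
    """Perturb the system and record the whole network's response
    (stand-in for an optogenetic write followed by a read)."""
    return true_weights @ stimulus + rng.normal(scale=0.01, size=n)

# Collect many perturbation/response pairs, then fit a linear model.
stimuli = rng.normal(size=(200, n))
responses = np.array([probe(s) for s in stimuli])
estimated_t, *_ = np.linalg.lstsq(stimuli, responses, rcond=None)

# With enough perturbations the fit recovers the hidden weights.
print(np.allclose(estimated_t.T, true_weights, atol=0.05))
```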
I believe he's no longer working on this, however, and the NemaLoad project is stalled. The last update was a year ago, and there haven't been any updates to the project's github page since April 2014. It does look like davidad contributed to a 2013 paper surveying methods of neural recording, but that seems to be mostly a discussion of theoretical capability based on others' work rather than anything learned from NemaLoad experiments.
↑ comment by [deleted] · 2014-12-30T08:32:00.920Z · LW(p) · GW(p)
He wrote, "If I'd had $1 million seed, I wouldn't have had to cancel the project when I did..." in this Quora answer.
comment by Vika · 2014-12-26T21:50:03.119Z · LW(p) · GW(p)
Great post - I suggest moving it to Main.
↑ comment by jefftk (jkaufman) · 2014-12-27T11:45:15.900Z · LW(p) · GW(p)
Thanks! Done.
comment by Big Tony · 2022-07-28T20:49:31.861Z · LW(p) · GW(p)
3 years on from https://www.lesswrong.com/posts/B5auLtDfQrvwEkw4Q/we-haven-t-uploaded-worms?commentId=Qx5DadETdK8NrtA9S.
Has any progress been made since?
These sorts of things seem to happen slowly, then suddenly — very little progress for a long time, then a breakthrough unlocks big jumps in progress.