Whole Brain Emulation: No Progress on C. elegans After 10 Years

post by niconiconi · 2021-10-01T21:44:37.397Z · LW · GW · 87 comments

Contents

  Update
  Update 2
  Update 3
87 comments

Since the early 21st century, some transhumanist proponents and futurist researchers have claimed that Whole Brain Emulation (WBE) is not merely science fiction: although still hypothetical, it is said to be a potentially viable technology in the near future. Such beliefs have attracted significant attention in tech communities such as LessWrong.

In 2011 on LessWrong, jefftk did a literature review on the emulation of a worm, C. elegans [LW · GW], as an indicator of WBE research progress.

Because the human brain is so large, and we are so far from having the technical capacity to scan or emulate it, it's difficult to evaluate progress.  Some other organisms, however, have much smaller brains: the nematode C. elegans has only 302 cells in its entire nervous system.  It is extremely well studied and well understood, having gone through heavy use as a research animal for decades. Since at least 1986 we've known the full neural connectivity of C. elegans, something that would take decades and a huge amount of work to get for humans.  At 302 neurons, simulation has been within our computational capacity for at least that long.  With 25 years to work on it, shouldn't we be able to 'upload' a nematode by now?

Three research projects ran from the 1990s to the 2000s, but all were dead ends that never reached their full research goals, leaving a rather pessimistic vision of WBE. However, immediately after the initial publication of that post, LW readers Stephen Larson (slarson) [LW(p) · GW(p)] and David Dalrymple (davidad) [LW(p) · GW(p)] pointed out in the comments that they were working on the problem themselves; their two ongoing projects made the future look promising again.

The first was the OpenWorm project, coordinated by slarson. Its goal is to create a complete model and simulation of C. elegans, and to release all tools and data as free and open source software. An early task completed by the project was implementing a structural model of all 302 C. elegans neurons in the NeuroML description language.

The next was another research effort at MIT, led by davidad. David explained that the OpenWorm project focused on anatomical data from dead worms, but very little data exists from the cells of living animals. Anatomical data can't tell scientists about the relative importance of connections between neurons within the worm's nervous system, only that a connection exists.

  1. The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
  2. What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
  3. With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
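To make the workflow in point 2 concrete, here is a minimal sketch (my own illustration, not davidad's code or his project's actual design) of a closed perturb-record-refit loop. The "worm" is a made-up linear network, and ensemble disagreement stands in for the "very good statistical modeling framework" that chooses the next perturbation:

```python
# A toy closed-loop experiment: stimulate one neuron, record the network response,
# refit an ensemble of models, and pick the next stimulus where the ensemble
# disagrees most. Everything here (linear dynamics, noise levels, trial counts)
# is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n = 302                                    # C. elegans neuron count
W_true = rng.normal(0, 0.1, (n, n))        # hidden ground-truth coupling (unknown to the experimenter)

def record_response(stim):
    """Apply a stimulation vector and record the (noisy) network response."""
    return W_true @ stim + rng.normal(0, 0.01, n)

X, Y = [], []
ensemble = [rng.normal(0, 0.1, (n, n)) for _ in range(5)]   # crude model ensemble

for trial in range(50):
    # Propose candidate single-neuron perturbations; pick the one the ensemble
    # disagrees about most (a stand-in for a proper experimental-design criterion).
    candidates = [np.eye(n)[rng.integers(n)] for _ in range(20)]
    disagreement = [np.var([W @ c for W in ensemble], axis=0).sum() for c in candidates]
    stim = candidates[int(np.argmax(disagreement))]

    X.append(stim)
    Y.append(record_response(stim))

    # Refit each ensemble member on a bootstrap resample of the data gathered so far.
    A, B = np.array(X), np.array(Y)
    for i in range(len(ensemble)):
        idx = rng.integers(0, len(A), len(A))
        ensemble[i] = np.linalg.lstsq(A[idx], B[idx], rcond=None)[0].T

print("coupling-matrix error:", np.linalg.norm(np.mean(ensemble, axis=0) - W_true))
```

In the real proposal the readout would come from functional imaging of a living worm rather than a toy matrix, but the loop structure (stimulate, record, update the model, choose the next stimulus) is the point.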

He believed an automated device to gather such data could be built in a year or two. And he was confident [LW(p) · GW(p)]:

I'm a disciple of Kurzweil, and as such I'm prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

When asked by gwern for a statement for PredictionBook.com, davidad said [LW(p) · GW(p)]:

  • "A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08." 76% confidence
  • "A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01." 99.8% confidence

(Disappointingly, these statements were never actually recorded on PredictionBook.)

Unfortunately, 10 years later, both projects appear to have made no significant progress; neither has produced a working simulation that reproduces biological behavior. In a 2015 CNN interview, slarson said the OpenWorm project was "only 20 to 30 percent of the way towards where we need to get", and it seems to have been stuck in development hell ever since. Meanwhile, I was unable to find any breakthrough from davidad before his project ended; David personally left the project in 2012. [LW(p) · GW(p)]

When the initial review was published, there had already been 25 years of work on C. elegans; now yet another decade has passed, and we are still unable to "upload" a nematode. Therefore, I have to end my post with the same pessimistic vision of WBE by quoting the original post.

This seems like a research area where you have multiple groups working at different universities, trying for a while, and then moving on.  None of the simulation projects have gotten very far: their emulations are not complete and have some pieces filled in by guesswork, genetic algorithms, or other artificial sources.  I was optimistic about finding successful simulation projects before I started trying to find one, but now that I haven't, my estimate of how hard whole brain emulation would be has gone up significantly.  While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

This is discouraging.

Closing thoughts: What went wrong? What are the unsolvable difficulties here?

Update

Some technical insight into the failure was given in a 2014 update ("We Haven't Uploaded Worms") [LW · GW], where jefftk showed that the major problems are:

  1. Knowing the connections isn't enough; we also need to know the weights and thresholds, and we don't know how to read them from a living worm.
  2. C. elegans is able to learn by changing the weights. We don't know how weights and thresholds are changed in a living worm.

The best we can do is model a generic worm: pretrain the neural network, then run it with fixed weights. Thus, no worm is "uploaded", because we can't read the weights, and these simulations are far from realistic, because they are not capable of learning. Hence, it's merely a boring artificial neural network, not a brain emulation.

To see why this isn't enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.

If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.
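To make the distinction concrete, here is a toy sketch (my own illustration, not from jefftk's post) contrasting a fixed-weight connection with one updated by a made-up Hebbian rule; only the plastic version can acquire the temperature-food association described above:

```python
# A toy contrast between a fixed-weight "generic worm" connection and one with a
# simple (assumed, illustrative) Hebbian update rule.

def drive_to_seek_temperature(weight, temperature_cue):
    """Activation of a hypothetical 'seek this temperature' neuron."""
    return weight * temperature_cue

# Generic worm: the temperature->food weight stays at whatever the model shipped with.
w_fixed = 0.0

# Plastic worm: pair the temperature cue with food a few times, applying a Hebbian update.
w_plastic, learning_rate = 0.0, 0.1
for _ in range(20):
    temperature_cue, food_signal = 1.0, 1.0       # cue and reward co-occur during training
    w_plastic += learning_rate * temperature_cue * food_signal

print("fixed-weight drive after training  :", drive_to_seek_temperature(w_fixed, 1.0))    # 0.0
print("plastic-weight drive after training:", drive_to_seek_temperature(w_plastic, 1.0))  # 2.0
```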

Furthermore, in a Quora answer, davidad hinted that his project was discontinued partly due to a lack of funding.

If I'd had $1 million seed, I wouldn't have had to cancel the project when I did...

Conclusion: Relevant neural recording technologies are needed to collect data from living worms, but they remain undeveloped, and the funding simply isn't there. 

Update 2

I just realized that David actually gave an in-depth talk about his work and the difficulties he encountered at MIRI's AI Risk for Computer Scientists workshop in 2020, according to this LW post ("AIRCS Workshop: How I failed to be recruited at MIRI" [LW(p) · GW(p)]).

Most discussions were pretty high level. For example, someone presented a talk where they explained how they tried and failed to model and simulate the brain of C. elegans, a worm with an extremely simple and well understood brain. They explained to us a lot of things about biology, and how they had been trying to scan a brain precisely. If I understood correctly, they told us they failed due to technical constraints and what those were. They believe that, nowadays, we can theoretically create the technology to solve this problem. However, there is no one interested in said technology, so it won't be developed and become available to the market.

Does anyone know any additional information? Is the content of that talk available in paper form?

Update 3

Note to future readers: within a week of the initial publication of this post, I received some helpful insider comments on the status of this field, including from David himself. The following are especially worth reading.

87 comments

Comments sorted by top scores.

comment by Gerald Monroe (gerald-monroe) · 2021-10-01T22:05:46.976Z · LW(p) · GW(p)

Let's look at a proxy task: "rockets landing on their tail". The first automated landing of an airliner was in 1964. Using a similar system of guidance signals from antennas on the ground, a rocket surely could have landed after boosting a payload around the same time period. Yet SpaceX first pulled it off in 2015.

In 1970 if a poorly funded research lab said they would get a rocket to land on its tail by 1980 and in 1980 they had not succeeded, would you update your estimated date of success to "centuries"? C. elegans has 302 neurons, and it takes, I think I read, about 10 ANN nodes to mimic the behavior of one biological neuron. With a switching frequency of 1 kHz and full connectivity, you would need roughly 302 * 100^2 * 1000 operations per second. This is about 0.003 TOPS, and embedded cards that do 200-300 TOPS are readily available.
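As a rough sanity check on that arithmetic (using the commenter's stated assumptions of ~10 ANN nodes per biological neuron, full connectivity, and a 1 kHz update rate; the exact bookkeeping below is my reconstruction, not the commenter's):

```python
# Back-of-the-envelope compute estimate for a fully connected ANN stand-in for
# the C. elegans nervous system. All figures are illustrative assumptions.
neurons = 302
nodes_per_neuron = 10        # assumed figure quoted in the comment
update_rate_hz = 1_000       # assumed ~1 kHz switching frequency

nodes = neurons * nodes_per_neuron           # 3,020 ANN nodes
ops_per_step = nodes ** 2                    # fully connected: ~9.1e6 multiply-adds per step
ops_per_second = ops_per_step * update_rate_hz

print(f"{ops_per_second / 1e12:.4f} TOPS")   # ~0.009 TOPS, the same ballpark as the comment's
                                             # 0.003 TOPS; either way it is four to five orders
                                             # of magnitude below a 200-300 TOPS embedded card
```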

So the computational problem is easy. Did David build an automated device to collect data from living cells? If not, was the reason it wasn't done because of some sudden unexpected huge difficulty that 100+ people and multi-million dollar budget couldn't solve, or was it because...those people weren't there and neither was the funding?

My larger point is that if the 'math checks out' on the basic feasibility of an idea, either there is something about the problem that makes it enormously harder than it appears, or simply not enough resources were invested to make progress. SpaceX, for example, had spent approximately 2-5 billion dollars by the time of its first rocket landing in 2015. A scrappy startup with, say, 1 million dollars might not get anywhere. How much funding did these research labs working on C. elegans have?

Replies from: niconiconi, CraigMichael, jkaufman, slugbait93, FCCC
comment by niconiconi · 2021-10-02T03:30:42.901Z · LW(p) · GW(p)

Did David build an automated device to collect data from living cells? If not, was the reason it wasn't done because of some sudden unexpected huge difficulty that 100+ people and multi-million dollar budget couldn't solve, or was it because...those people weren't there and neither was the funding?

Good points, I did more digging and found some relevant information I initially missed, see "Update". He didn't, and funding was indeed a major factor. 

comment by CraigMichael · 2021-10-06T01:36:56.544Z · LW(p) · GW(p)

Let's look at a proxy task: "rockets landing on their tail"… Yet SpaceX first pulled it off in 2015.

The DC-X did this first in 1993, although this video is from 1995.

https://youtube.com/watch?v=wv9n9Casp1o

(And their budget was 60 million 1991 dollars, Wolfram Alpha says that’s 117 million in 2021 dollars) https://en.m.wikipedia.org/wiki/McDonnell_Douglas_DC-X

comment by jefftk (jkaufman) · 2021-10-19T13:22:09.869Z · LW(p) · GW(p)

In 1970 if a poorly funded research lab said they would get a rocket to land on its tail by 1980 and in 1980 they had not succeeded, would you update your estimated date of success to "centuries"

That isn't the right interpretation of the proxy task. In 2011, I was using progress on nematodes to estimate the timing of whole brain emulation for humans. That's more similar to using progress in rockets landing on their tail to estimate the timing of a self-sustaining Mars colony.

(I also walked back from "probably hundreds of years" to "I don't think we'll be uploading anyone in this century" after the comments on my 2011 post, writing the more detailed: https://www.jefftk.com/p/whole-brain-emulation-and-nematodes)

Replies from: gerald-monroe
comment by Gerald Monroe (gerald-monroe) · 2021-10-20T00:45:52.037Z · LW(p) · GW(p)

Ok. Hypothetical experiment. In 2042 someone does demonstrate a convincing dirt dynamics simulation and a flock of emulated nematodes. The emulated firing patterns correspond well with experimentally observed nematodes.

With that information, would you still feel safe concluding the solution is 58 years away for human scale?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-20T02:46:26.754Z · LW(p) · GW(p)

I'm not sure what you mean by "convincing dirt dynamics simulation and a flock of emulated nematodes"? I'm going to assume you mean the task I described in my post: teach one something, upload it, verify it still has the learned behavior.

Yes, I would still expect it to be at least 58 years away for human scale. The challenges are far larger for humans, and it taking over 40 years from people starting on simulating nematodes to full uploads would be a negative timeline update to me. Note that in 2011 I expected this for around 2021, which is nowhere near on track to do: https://www.jefftk.com/p/whole-brain-emulation-and-nematodes

Replies from: gerald-monroe
comment by Gerald Monroe (gerald-monroe) · 2021-10-21T00:20:46.806Z · LW(p) · GW(p)

Ok. I have thought about it further and here is the reason I think you're wrong. You have implicitly made the assumption that the tools available to neuroscientists today are good, and that we have a civilization with the excess resources to support such an endeavor.

This is false. Today the available resources for such endeavors are only enough to fund small teams. Research that is profitable, like silicon chip improvement, gets hundreds of billions invested into it.

So any extrapolation is kinda meaningless. It would be like asking in 1860 how many subway tunnels would be in NYC in 1940. The heavy industry to build it simply didn't exist so you would have to conclude it would be slow going.

Similarly, the robotic equipment to do bioscience is currently limited and specialized. It's why a genome can be sequenced for a few thousand dollars but graduate students still use pipettes.

Oh, and if in 1920 you wanted to know when the human genome would be sequenced, and in 1930 you learned that zero genes had been sequenced, you might reach a similar conclusion.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-21T17:21:04.843Z · LW(p) · GW(p)

Do you have a better way of estimating the timing of new technologies that require many breakthroughs to reach?

Replies from: gerald-monroe, ben-lang
comment by Gerald Monroe (gerald-monroe) · 2021-10-25T02:51:42.571Z · LW(p) · GW(p)

I'll try to propose one.

  • Is the technology feasible with demonstrated techniques at a laboratory level?
  • Will there likely be gain to the organization that sells or deploys this technology in excess of its estimated cost?
  • Does the technology run afoul of existing government regulation that will slow research into it?
  • Does the technology have a global market that will result in a sigmoidal adoption curve?

Electric cars should have been predictable this way:

  • They were feasible since 1996, or 1990 (the LFP battery is the first lithium chemistry with the lifespan to be a net gain for an EV; 1990 is when the first 'modern' lithium battery was assembled in a lab).
  • The gain is reduced fuel cost, maintenance cost, and supercar acceleration and vehicle performance with much cheaper drivetrains.
  • Governments perceive a benefit in EVs, so they have subsidized the research.
  • Yes, and the adoption curve is sigmoidal.

Smartphones follow a similar set of arguments, and the chips that made them possible were only low-power enough around the point that the sigmoidal adoption started. They were not really possible much prior. Also, Apple made a large upfront investment to deliver an acceptable user experience all at once, rather than incrementally adding features like other early smartphone manufacturers did.

I will put my stake in the sand and say that autonomous cars fit all these rules:

  • Feasible, as the fundamental problem of assessing collision risk for a candidate path, the only part the car has to have perfect, is a simple and existing algorithm.
  • Enormous gain: easily north of a trillion dollars in annual revenue, or hundreds of billions in annual profit, will be available to the industry.
  • Governments are reluctant but are obviously allowing the research and testing.
  • The adoption curve will be sigmoidal, because it has obvious self-gain. The first shipping autonomous EVs will likely produce a cost advantage for a taxi firm or be rented directly, and will be immediately adopted, and the revenue reinvested makes their cost advantage grow until, on the upward climb of the adoption curve, the limit is simply how fast the hardware can be manufactured.

I will take it a step further and say that general robots that solve problems of the same form as autonomous cars also fit all the same rules: they will be adopted, adoption will be sigmoidal, and other reports have estimated that about half of all jobs will be replaced.

 

Anyways, for uploading a nematode, the optical I/O techniques to debug a living neural system are, I think, still at the laboratory prototype stage. Does anyone have this working in any lab animal anywhere? So it doesn't even meet condition 1. And what's the gain if you upload a nematode? Not much. Certainly not in excess of the tens of millions of dollars it will likely cost. Governments are uninterested, as nematodes are not vertebrates. And there's no "self-gain": upload a nematode, and no one is going to be uploading nematodes all over the planet.

 

There still will be progress, and with advances in other areas - the general robotics mentioned above - this would free up resources and make possible something like a human upload project.

And that, if you had demonstrations of feasibility, does meet all 4 conditions.

  • Assume you have demonstrated feasibility with neuroscience experiments that will be performed and can "upload" a tiny section of human brain tissue.
  • The gain is you get to charge each uploaded human all their assets accumulated in life, and/or they will likely have superhuman capabilities once uploaded. This gain is more like "divide by zero" gain: uploaded humans would be capable of organizing projects to tear down the solar system for raw materials, or essentially "near infinite money".
  • Governments will have to push it with all-out efforts near the end-game, because to not have uploaded humans or AI is to lose all sovereignty.
  • Adoption curve is trivially sigmoidal.

 

I don't know when all the conditions will be met, but uploading humans is a project similar to nuclear weapons, in terms of gain, and in how, up until just 29 months(!) before detonation, the amount of fission done on Earth by humans was zero. In 1900 you might have felt safe predicting no fission before the end of the century, like you do now.

 

Also, you can use this method to disprove personal jetpacks or fusion power.

  • Feasible - personal jetpacks: no, rocket jetpacks of the 1960s had 22-second flight times. Fusion power: no, no experiments were developing significant fusion gain without a fission device to provide the containment pressure.
  • Gain - no. Jetpacks would guzzle jet fuel even with more practical forms that worked more like a VTOL aircraft, and the value of this fuel is going to exceed the value of the time saved to almost all users. Fusion power is a method to boil water using high-energy laboratory equipment and is unlikely to be cheaper than the competition over any feasible timescale.
  • Government - no. Jetpacks cause extreme noise pollution and extra fires and deaths from when they fall out of the sky. Fusion is a nuclear proliferation risk, as a fusion reactor provides a source of neutrons that could be used to transmute to plutonium.
  • Sigmoidal - no, you can't have this without large gain. Maybe this criterion is redundant.

 

If you got this far in reading, one notable fault of this proposed algorithm is that it does not predict technologies requiring a true breakthrough. You could not predict lasers, for instance, as these were not known to be feasible until the 1960s, when the first working models existed. That's a real breakthrough. The distinction I am making is that if you do not know whether physics will allow something to work, or to work well, then you need a breakthrough to get it working.

The same argument applies to math algorithms; neural networks, I would say, are another real breakthrough, as they work much "better" than they should, given how little we know about what we are doing.

We do know that physics will allow us to build a computer big enough to emulate a brain, to scan at least the synaptome of a once-living brain, and to get some detail on the weights. We also know that learning means we do not really have to be all that exact.

comment by Ben (ben-lang) · 2022-11-01T10:03:45.915Z · LW(p) · GW(p)

This doesn't give any real help in guessing the timing. But I think the curve to imagine is much closer to a step function than it is to a linear slope. So not seeing an increase just means we haven't reached the step, not that there is a linear slope too small to see.

comment by slugbait93 · 2022-02-25T20:15:04.348Z · LW(p) · GW(p)

"it takes, I think I read 10 ANN nodes to mimic the behavior of one biological neuron"

More like a deep network of 5-8 layers of 256 ANN nodes, according to recent work (it turns out lots of computation is happening in the dendrites):

https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
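Re-running the earlier back-of-the-envelope estimate with a deep per-neuron surrogate still leaves the compute requirement small; the layer and width numbers below are illustrative assumptions, not figures taken from the linked article:

```python
# Revised ballpark: give each of the 302 neurons its own small deep surrogate network.
# Layer count and width are assumptions chosen only to show the order of magnitude.
neurons = 302
layers, width = 7, 256
update_rate_hz = 1_000

macs_per_neuron_step = layers * width * width     # ~4.6e5 multiply-adds per neuron per step
ops_per_second = neurons * macs_per_neuron_step * update_rate_hz

print(f"{ops_per_second / 1e12:.3f} TOPS")        # ~0.14 TOPS: much larger than the 10-node
                                                  # estimate, but still well below the 200-300
                                                  # TOPS quoted for embedded accelerators
```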

Replies from: gerald-monroe
comment by Gerald Monroe (gerald-monroe) · 2022-03-03T02:07:00.140Z · LW(p) · GW(p)

Thank you for the link.  Given some neurons have 1000+ inputs and complex timing behavior, this is perfectly reasonable.  

comment by FCCC · 2021-10-11T02:44:59.913Z · LW(p) · GW(p)

Were they trying to simulate a body and an environment? Seems to me that would make the problem much harder, as you'd be trying to simulate reality. (E.g., how does an organic body move through physical space based on neural activity? How do the environment's effects on the body stimulate neural changes?)

Replies from: gerald-monroe
comment by Gerald Monroe (gerald-monroe) · 2021-10-11T07:23:29.632Z · LW(p) · GW(p)

You have to, or you haven't really solved the problem. It's very much bounded: you do not need to simulate "reality", you need to approximate the things that the nematode can experience to slightly higher resolution than it can perceive.

So basically you need some kind of dirt dynamics model with sufficient fidelity to the nematode's very crude senses to be equivalent. It might be easier with an even smaller organism.

Replies from: FCCC
comment by FCCC · 2021-10-11T11:14:14.599Z · LW(p) · GW(p)

Maybe someone should ask the people who were working on it what their main issues were.

comment by davidad · 2021-10-05T10:38:02.608Z · LW(p) · GW(p)

I might have time for some more comments later, but here's a few quick points (as replies):

Replies from: davidad, davidad, davidad
comment by davidad · 2021-10-05T10:53:23.005Z · LW(p) · GW(p)
  1. There has been some serious progress in the last few years on full functional imaging of the C. elegans nervous system (at the necessary spatial and temporal resolutions and ranges).

However, despite this I haven't been able to find any publications yet where full functional imaging is combined with controlled cellular-scale stimulation (e.g. as I proposed via two-photon excitation of optogenetic channels), which I believe is necessary for inference of a functional model.

comment by davidad · 2021-10-05T10:38:54.126Z · LW(p) · GW(p)
  1. I was certainly overconfident about how easy Nemaload would be, especially given the microscopy and ML capabilities of 2012, but moreso I was overconfident that people would continue to work on it. I think there was very little work toward the goal of a nematode-upload machine for the four years from 2014 through 2017. Once or twice an undergrad doing a summer internship at the Boyden lab would look into it for a month, and my sense is that accounted for something like 3-8% of the total global effort.
Replies from: adrian-arellano-davin, lael-cellier
comment by mukashi (adrian-arellano-davin) · 2021-10-06T00:48:34.824Z · LW(p) · GW(p)

Why wasn't a postdoc or a PhD student hired to do this work? Was it due to the lack of funding?

Replies from: davidad
comment by davidad · 2021-10-06T10:53:47.326Z · LW(p) · GW(p)

I can't say for sure why Boyden or others didn't assign grad students or postdocs to a Nemaload-like direction; I wasn't involved at that time, there are many potential explanations, and it's hard to distinguish limiting/bottleneck or causal factors from ancillary or dependent factors.

That said, here's my best explanation. There are a few factors for a life-science project that make it a good candidate for a career academic to invest full-time effort in:

  1. The project only requires advancing the state of the art in one sub-sub-field (specifically the one in which the academic specializes).
  2. If the state of the art is advanced in this one particular way, the chances are very high of a "scientifically meaningful" result, i.e. it would immediately yield a new interesting explanation (or strong evidence for an existing controversial explanation) about some particular natural phenomenon, rather than just technological capabilities. Or, failing that, at least it would make a good "methods paper", i.e. establishing a new, well-characterized, reproducible tool which many other scientists can immediately see is directly helpful for the kind of "scientifically meaningful" experiments they already do or know they want to do.
  3. It is easy to convince people that your project is plausibly on a critical path in the roadmap towards one of the massive medical challenges that ultimately motivate most life-science funding, such as finding more effective treatments for Alzheimer's, accelerating the vaccine pipeline, preventing heart disease, etc.

The more of these factors are present, the more likely your effort as an academic will lead to career advancement and recognition. Nemaload unfortunately scored quite poorly on all three counts, at least until recently:

(1) It required advancing the state-of-the-art in, at least: C. elegans genetic engineering, electro-optical system integration, computer vision, quantitative structural neuroanatomy of C. elegans, mathematical modeling, and automated experimental design.

(2) Even the final goal of Nemaload (uploading worms who've learned different behaviors and showing that the behaviors are reproduced in simulations) is barely "scientifically meaningful". All it would demonstrate scientifically (as opposed to technically) is that learned behaviors are encoded in some way in neural dynamics. This hypothesis is at the same time widely accepted and extremely difficult to convince skeptics of. Of course, studying the uploaded dynamics might yield fascinating insights into how nature designs minds, but it also might be pretty black-boxy and inexplicable without advancing the state of the art in yet further ways.

(2b) Worse, partial progress is even less scientifically meaningful, e.g. "here's a time-series of half the neurons, I guess we can do unsupervised clustering on it, oh look at that, the neural activity pattern can predict whether the worm is searching for food or not, as can, you know, looking at it." To get an upload, you need all the components of the uploading machine, and you need them all to work at full spec. And partial progress doesn't make a great methods paper either, for the following reason. Any particular experiment that worm neuroscientists want to do, they can do more cheaply and effectively in other ways, like genetically engineering only the specific neurons they care about for that experiment to fluoresce when they're active. Even if they're interested in a lot of neurons, they're going to average over a population anyway, so they can just look at a handful of neurons at a time. And they also don't mind doing all kinds of unnatural things to the worms like microfluidic immobilization to make the experiment easier, even though that makes the worms' overall mental-state very, shall we say, perturbed, because they're just trying to probe one neural circuit at a time, not to get a holistic view of all behaviors across the whole mental-state-space.

(3) The worm nervous system is in most ways about as far as you can get from a human nervous system while still being made of neural cells. C. elegans is not the model organism of choice for any human neurological disorder. Further, the specific technical problems and solutions are obviously not going to generalize to any creature with a bony skull, or with billions of neurons. So what's the point? It's a bit like sending a lander to the Moon when you're trying to get to Alpha Centauri. There are some basic principles of celestial mechanics and competencies of system design and integration that will probably mostly generalize, and you have to start acquiring those with feedback from attempting easier missions. Others may argue that Nemaload on a roadmap to any science on mammals (let alone interventions on humans) is more like climbing a tree when you're trying to get to the Moon. It's hard to defend against this line of attack.

If a project has one or two of these factors but not all three, then if you're an ambitious postdoc with a good CV already in a famous lab, you might go for it. But if it has none, it's not good for your academic career, and if you don't realize that, your advisor has a duty of care to guide you towards something more likely to keep your trajectory on track. Advisors don't owe the same duty of care to summer undergrads.

Adam Marblestone might have more insight on this question; he was at the Boyden lab in that time. It also seems like the kind of phenomenon that Alexey Guzey likes to try to explain.

Replies from: davidad, lael-cellier
comment by davidad · 2021-10-06T17:09:59.907Z · LW(p) · GW(p)

Note, (1) is less bad now, post-2018-ish. And there are ways around (2b) if you're determined enough. Michael Skuhersky is a PhD student in the Boyden lab who is explicitly working in this direction as of 2020. You can find some of his partial progress here https://www.biorxiv.org/content/biorxiv/early/2021/06/10/2021.06.09.447813.full.pdf and comments from him and Adam Marblestone over on Twitter, here: https://twitter.com/AdamMarblestone/status/1445749314614005760

comment by Laël Cellier (lael-cellier) · 2023-04-01T13:32:13.704Z · LW(p) · GW(p)

For point 2, would it be possible to use the system to advance computer AI by studying the impact of large modifications of the connectome or the synapses in silico instead of in vivo, to get an EEG equivalent? Of course, I understand the system might have to virtually sleep from time to time, unlike the current purely mathematical, matrix-based probabilistic systems.

This would be a matter of making the simulation more debuggable, instead of only being able to study muscles according to input (senses).

comment by Laël Cellier (lael-cellier) · 2023-04-01T13:26:37.877Z · LW(p) · GW(p)

Also, isn't the whole project making some completely wrong assumptions? I have heard the idea that neurons don't make synapses on their own, and that astrocytes, instead of just being support cells, act like sculptors on their sculptures, with research having focused on neurons mainly because of EEG detectability. In support of this, it is said to be the underlying reason why different species with similar numbers of neurons show smaller or larger connectomes, and there is research claiming to have improved the number of synapses per neuron and the memorization capabilities of rodents (compared to those without) by introducing genes controlling the astrocytes of primates (though I recognize this theory is left uninvestigated for protostomes, which have a simpler neuroglia instead of the full-fledged astroglia of vertebrates).

Of course, this would add even more difficulty to the project: https://www.frontiersin.org/articles/10.3389/fcell.2022.931311/full.

Having results built on completely wrong assumptions doesn't mean it doesn't work. For example, geocentric models were good enough to predict the position of planets like Jupiter in medieval times, but were later inadequate, hence the need to shift to the simpler heliocentric models. The fact that all clinical trials on Alzheimer's over the past 25 years have failed or performed poorly in humans might suggest we are completely wrong about the inner workings of brains somewhere.

As an undergraduate student, please correct me if I said garbage.

comment by davidad · 2021-10-05T10:38:30.585Z · LW(p) · GW(p)
  1. I got initial funding from Larry Page on the order of 10^4 USD and then funding from Peter Thiel on the order of 10^5 USD. The full budget for completing the Nemaload project was 10^6 USD, and Thiel lost interest in seeing it through.
Replies from: wassname
comment by wassname · 2023-01-04T12:12:42.027Z · LW(p) · GW(p)

Do you know why they lost interest? Assuming their funding decisions were well thought out, it might be interesting.

comment by sludgepuddle · 2021-10-02T06:03:59.234Z · LW(p) · GW(p)

While we're sitting around waiting for revolutionary imaging technology or whatever, why not try to make progress on the question of how much, and what type of, information we can obscure about a neural network and still approximately infer meaningful details of that network from behavior. For practice, start with ANNs and keep it simple. Take a smallish network which does something useful, record the outputs as it's doing its thing, then add just enough random noise to the parameters that the output deviates noticeably from the original. Now train the perturbed version to match the recorded data. What do we get here: did we recover the weights and biases almost exactly? Assuming yes, how far can this go before we might as well have trained the thing from scratch? Assuming success, does it work equally on different types and sizes of networks, and if not, what kind of scaling laws does this process obey? Assuming some level of success, move on to a harder problem, a sparse network: this time we throw away everything but connectivity information and try to repeat the above. How about something biologically realistic, where we try to simulate the spiking neurons with groups of standard artificial ones... you get the drift.
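A minimal sketch of the first experiment proposed above (my own illustration; the framework, network sizes, noise level, and training budget are all arbitrary assumptions), written with PyTorch:

```python
# Perturb a small network's weights, then try to recover them from recorded behavior alone.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net():
    return nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 4))

# 1. A small network that "does something useful" (here: fixed random weights stand in for it).
truth = make_net()
for p in truth.parameters():
    p.requires_grad_(False)

# 2. Record its input/output behavior.
inputs = torch.randn(2048, 8)
with torch.no_grad():
    recorded = truth(inputs)

# 3. Perturb a copy of the parameters with just enough noise to change the outputs.
model = make_net()
model.load_state_dict(truth.state_dict())
with torch.no_grad():
    for p in model.parameters():
        p.add_(0.3 * torch.randn_like(p))

def weight_error(a, b):
    return sum((pa - pb).pow(2).sum() for pa, pb in zip(a.parameters(), b.parameters())).item()

print("weight error after perturbation:", weight_error(model, truth))

# 4. Train the perturbed copy to match the recorded behavior.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), recorded)
    loss.backward()
    opt.step()

# 5. Did we recover the original weights, or only the behavior?
print("output error after fitting:", nn.functional.mse_loss(model(inputs), recorded).item())
print("weight error after fitting:", weight_error(model, truth))
```

Comparing the final weight error against the output error is the interesting part: matching recorded behavior does not by itself guarantee that the original parameters were recovered.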

comment by Josh Jacobson (joshjacobson) · 2021-10-04T00:58:03.033Z · LW(p) · GW(p)

The tone of strong desirability for progress on WBE in this post was surprising to me. The author seems to treat progress in WBE as a highly desirable thing, a perspective I expect most on LW do not endorse.

The lack of progress here may be a quite good thing.

Replies from: RomanS, niconiconi, J_Thomas_Moros
comment by RomanS · 2021-10-04T08:27:03.458Z · LW(p) · GW(p)

Like many other people here, I strongly desire to avoid death. Mind uploading is an efficient way to prevent many causes of death, as it could make a mind practically indestructible (thanks to backups, distributed computing, etc). WBE is a path towards mind uploading, and thus is desirable too.

Mind uploading could help mitigate OR increase the AI X-risk, depending on circumstances and implementation details. And the benefits of uploading as a mitigation tool seem to greatly outweigh the risks. 

The most preferable future for me is the future there mind uploading is ubiquitous, while X-risk is avoided. 

Although unlikely, it is still possible that mind uploading will emerge sooner than AGI. Such a future is much more desirable than the future without mind uploading (some possible scenarios).

Replies from: benjamin-spiegel
comment by Benjamin Spiegel (benjamin-spiegel) · 2021-10-05T15:44:25.665Z · LW(p) · GW(p)

This really depends on whether you believe a mind-upload retains the same conscious agent from the original brain. If it did, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE. The delay between solving WBE and the hard problem of consciousness is so vast in my opinion that being excited for mind-uploading when WBE progress is made is like being excited for self-propelled cars after making progress in developing horse-drawn wagons. In both cases, little progress has been made on the most significant component of the desired thing.

Replies from: Kaj_Sotala, RomanS
comment by Kaj_Sotala · 2021-10-06T17:32:51.296Z · LW(p) · GW(p)

If it did, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE.

Doesn't WBE involve the easy rather than hard problem of consciousness? You don't need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.

Replies from: benjamin-spiegel
comment by Benjamin Spiegel (benjamin-spiegel) · 2021-10-07T02:56:10.385Z · LW(p) · GW(p)

You don't need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.

I'm pretty sure the problem with this is that we don't know what it is about the human brain that gives rise to consciousness, and therefore we don't know whether we are actually emulating the consciousness-generating thing when we do WBE. Human conscious experience could be the biological computation of neurons + X. We might be able to emulate biological computation perfectly, but if X is necessary for conscious experience then we've just created a philosophical zombie. To find out whether our emulation is sufficient to produce consciousness, we would need to find out what X is and how to emulate it. I'm pretty sure this is exactly the hard problem of consciousness.

Even if biological computation is sufficient for generating consciousness, we will have no way of knowing until we solve the hard problem of consciousness.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2021-10-07T13:27:26.198Z · LW(p) · GW(p)

Human conscious experience could be the biological computation of neurons + X. We might be able to emulate biological computation perfectly, but if X is necessary for conscious experience then we've just created a philosophical zombie.

David Chalmers had a pretty convincing (to me) argument for why it feels very implausible that an upload with identical behavior and functional organization to the biological brain wouldn't be conscious (the central argument starts from the subheading "3 Fading Qualia"): http://consc.net/papers/qualia.html

Replies from: benjamin-spiegel
comment by Benjamin Spiegel (benjamin-spiegel) · 2021-10-07T14:03:08.613Z · LW(p) · GW(p)

What a great read! I suppose I'm not convinced that Fading Qualia is an empirical impossibility, and therefore that there exists a moment of Suddenly Disappearing Qualia when the last neuron is replaced with a silicon chip. If consciousness is quantized (just like other things in the universe), then there is nothing wrong in principle with Suddenly Disappearing Qualia when a single quantum of qualia is removed from a system with no other qualia, just like removing the last photon from a vacuum.

Joe is an interesting character which Chalmers thinks is implausible, but aside from it rubbing up against a faint intuition, I have no reason to believe that Joe is experiencing Fading Qualia. There is no indication for any reason that the workings of consciousness should obey any intuitions we may have about it.

Replies from: rsaarelm
comment by rsaarelm · 2021-10-07T18:13:00.091Z · LW(p) · GW(p)

There is no indication for any reason that the workings of consciousness should obey any intuitions we may have about it.

The mind is an evolved system out to do stuff efficiently, not just a completely inscrutable object of philosophical analysis. It's likelier that parts like sensible cognition and qualia and the subjective feeling of consciousness are coupled and need each other to work than that they are somehow intrinsically disconnected and cognition could go on as usual without subjective consciousness using anything close to the same architecture. If that were the case, we'd have the additional questions of how consciousness evolved to be a part of the system to begin with and why it hasn't evolved out of living biological humans.

Replies from: benjamin-spiegel
comment by Benjamin Spiegel (benjamin-spiegel) · 2021-10-08T00:25:09.601Z · LW(p) · GW(p)

I agree with you, though I personally wouldn't classify this as purely an intuition since it is informed by reasoning which itself was gathered from scientific knowledge about the world. Chalmers doesn't think that Joe could exist because it doesn't seem right to him. You believe your statement because you know some scientific truths about how things in our world come to be (i.e. natural selection) and use this knowledge to reason about other things that exist in the world (consciousness), not merely because the assertion seems right to you.

comment by RomanS · 2021-10-05T18:08:51.294Z · LW(p) · GW(p)

The brain is changing over time. It is likely that there is not a single atom in your 2021-brain that was present in your 2011-brain. 

If you agree that the natural replacements haven't killed you (2011-you and 2021-you are the same conscious agent), then it's possible to transfer your mind to a machine in a similar manner. Because you've already survived a mind uploading into a new brain. 

Gradual mind uploading (e.g. by gradually replacing neurons with emulated replicas) circumvents the philosophical problems attributed to non-gradual methods.

Personally, although I prefer gradual uploading, I would agree to a non-gradual method too, as I don't see the philosophical problems as important. As per Newton's Flaming Laser Sword:

if a question, even in principle, can't be resolved by an experiment, then it is not worth considering. 

If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me. 

The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another. 

As Dennett put it, everyone is a philosophical zombie.

Replies from: benjamin-spiegel
comment by Benjamin Spiegel (benjamin-spiegel) · 2021-10-07T03:20:39.148Z · LW(p) · GW(p)

There are a lot of interesting points here, but I disagree (or am hesitant to agree) with most of them.

If you agree that the natural replacements haven't killed you (2011-you and 2021-you are the same conscious agent), then it's possible to transfer your mind to a machine in a similar manner. Because you've already survived a mind uploading into a new brain.

Of course, I'm not disputing whether mind-uploading is theoretically possible. It seems likely that it is, although it will probably be extremely complex. There's something to be said about the substrate independence of computation and, separately, consciousness. No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved such as spin, charge, entanglement, or some yet undiscovered state of matter. So long as we are unaware of the constraints on these properties that are necessary for consciousness (or even whether these properties are relevant to consciousness), we cannot know with certainty that we have uploaded a conscious mind.

If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me. 

The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another.

These statements are ringing some loud alarm bells for me. It seems that you are rejecting consciousness itself. I suppose you could do that, but I don't think any reasonable person would agree with you. To truly gauge whether you believe you are conscious or not, ask yourself, "have I ever experienced pain?" If you believe the answer to that is "yes," then at least you should be convinced that you are conscious.

What you are suggesting at the end there is that WBE = mind uploading. I'm not sure many people would agree with that assertion.

Replies from: RomanS
comment by RomanS · 2021-10-07T05:00:32.891Z · LW(p) · GW(p)

No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved such as spin, charge, entanglement, or some yet undiscovered state of matter. So long as we are unaware of the constraints on these properties that are necessary for consciousness (or even whether these properties are relevant to consciousness), we cannot know with certainty that we have uploaded a conscious mind.

Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?

It seems to me that this can't be verified by any experiment, and thus must be cut off by Newton's Flaming Laser Sword. 

It seems that you are rejecting consciousness itself.

As far as I know, it is impossible to experimentally verify whether some entity possesses consciousness (partly because of how fuzzy its definitions are). This is a strong indicator that consciousness is one of those abstractions that don't correspond to any real phenomenon. 

"have I ever experienced pain?"

If certain kinds of damage are inflicted upon my body, my brain generates an output typical for a human in pain. The reaction can be experimentally verified. It also has a reasonable biological explanation, and a clear mechanism of functioning. Thus, I have no doubt that pain does exist, and I've experienced it. 

I can't say the same about any introspection-based observations that can't be experimentally verified. The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.

Replies from: benjamin-spiegel
comment by Benjamin Spiegel (benjamin-spiegel) · 2021-10-07T15:18:57.029Z · LW(p) · GW(p)

Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?

No, we cannot. Just as we cannot know with certainty whether a mind-upload is conscious. The fact that we presume our 2021 brain is the same conscious agent as our 2011 brain, while we cannot verify the properties that enabled the conscious connection between the two brains, does not mean that those properties do not exist.

It seems to me that this can't be verified by any experiment, and thus must be cut off by Newton's Flaming Laser Sword.

Perhaps we presently have no way of testing whether some matter is conscious or not. This is not equivalent to saying that, in principle, the conscious state of some matter cannot be tested. We may one day make progress toward the hard problem of consciousness and be able to perform these experiments. Imagine making this argument throughout history before microscopes, telescopes, and hadron colliders. We can now sheathe Newton's Flaming Laser Sword.

I can't say the same about any introspection-based observations that can't be experimentally verified.

I believe this hinges on an epistemic question about whether we can have knowledge of anything using our observations alone. I think even a skeptic would say that she has consciousness, as the fact that one is conscious may be the only thing that one can know with certainty about oneself. You don't need to verify any specific introspective observation. The act of introspection itself should be enough for someone to verify that they are conscious.

The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.

This claim refers to the reliability of the human brain in verifying the truth value of certain propositions or identifying specific and individuable experiences. Knowing whether oneself is conscious is not strictly a matter of verifying a proposition, nor of identifying an individuable experience. It's only about verifying whether one has any experience whatsoever, which should be possible. Whether I believe your claim to consciousness or not is a different problem.

comment by niconiconi · 2021-10-07T12:36:56.065Z · LW(p) · GW(p)

The lack of progress here may be a quite good thing.

Did I miss some subtle cultural changes at LW?

I know the founding principles of LW have been rationalism and AI safety from the start. But in my mind, LW has always had all types of adjacent topics and conversations, with many different perspectives. Or at least that was my impression of the 2010s LW threads on the Singularity and transhumanism. Did these discussions become more and more focused on AI safety and de-risking over time?

I'm not a regular reader of LW, so any explanation would be greatly appreciated. 

comment by J Thomas Moros (J_Thomas_Moros) · 2021-10-04T05:54:25.071Z · LW(p) · GW(p)

While I can understand why many would view advances toward WBE as an AI-safety risk, many in the community are also concerned with cryonics. WBE is an important option for the revival of cryonics patients. So I think the desirability of WBE should be clear. It just may be the case that we need to develop safe AI first.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2021-10-06T14:56:41.793Z · LW(p) · GW(p)

It's not just an AI safety risk, it's also an S-risk in its own right.

Replies from: RomanS
comment by RomanS · 2021-10-07T05:17:05.340Z · LW(p) · GW(p)

While discussing a new powerful tech, people often focus on what could go horribly wrong, forgetting to consider what could go gloriously right. 

What could go gloriously right with mind uploading? It could eliminate involuntary death, saving trillions of future lives. This consequence alone massively outweighs the corresponding X- and S-risks.

Replies from: davidad
comment by davidad · 2021-10-07T06:26:26.482Z · LW(p) · GW(p)

At least from the orthodox QALY perspective on "weighing lives", the benefits of WBE don't outweigh the S-risks, because for any given number of lives, the resources required to make them all suffer are lesser than the resources required for the glorious version.

The benefits of eventually developing WBE do outweigh the X-risks, if we assume that

  • human lives are the only ones that count,
  • WBE'd humans still count as humans, and
  • WBE is much more resource-efficient than anything else that future society could do to support human life.

However, orthodox QALY reasoning of this kind can't justify developing WBE soon (rather than, say, after a Long Reflection), unless there are really good stories about how to avoid both the X-risks and the S-risks.

Replies from: RomanS
comment by RomanS · 2021-10-07T08:02:09.676Z · LW(p) · GW(p)

As far as I know, mind uploading is the only tech that can reduce the risk of death (from all causes) to almost zero. It is almost impossible to destroy a mind that is running on resilient distributed hardware with tons of backups hidden in several star systems. 

There is a popular idea that some very large amount of suffering is worse than death. I don't subscribe to it. If I'm tortured for X billions of years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death - because (by definition) it cannot be repaired. And everything else can be repaired, including the damage from any amount of suffering. 

In such calculations, I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe. 

I also don't see how eliminating any arbitrary large amount of suffering could be preferable to saving 1 life. Unless the suffering leads to permadeath, the sufferers can get over it. The dead - cannot. Bad feelings are vastly less important than saved lives.

It's a good idea to reduce suffering. But the S-risk is trivially eliminated from the equation if the tech in question is life-saving.

orthodox QALY reasoning of this kind can't justify developing WBE soon (rather than, say, after a Long Reflection)

There are currently ~45 million permadeaths per year. Thus, any additional year without widely accessible mind uploading means there is an equivalent of 45+ million more humans experiencing the worst possible suffering until the end of the universe. In 10 years, it's nearly half a billion. In 1,000 years, it's 45 billion. This high cost of the Long Reflection is one more reason why it should never be forced upon humanity. 

Replies from: jkaufman, mr-hire
comment by jefftk (jkaufman) · 2021-10-19T11:59:15.316Z · LW(p) · GW(p)

I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe.

This is so incredibly far from where I would place the equivalence, and I think where almost anyone would place it, that I'm baffled. You really mean this?

Replies from: RomanS
comment by RomanS · 2021-10-21T20:00:44.476Z · LW(p) · GW(p)

There is an ancient and (unfortunately) still very popular association between death and sleep / rest / peace / tranquility.

The association is so deeply engraved, it is routinely used by most people who have to speak about death. E.g. "rest in peace", "put to sleep", "he is in a better place now" etc. 

The association is harmful. 

The association suggests that death could be a valid solution to pain, which is deeply wrong. 

It's the same wrongness as suggesting to kill a child to make the child less sad. 

Technically, the child will not experience sadness anymore. But infanticide is not a sane person's solution to sadness. 

The sane solution is to find a way to make the child less sad (without killing them!). 

The sane solution to suffering is to reduce suffering. Without killing the sufferer.

For example, if a cancer patient is in great pain, the most ethical solution is to cure them from cancer, and use efficient painkillers during the process. If there is no cure, then utilize cryonics to transport them into the future where such a cure becomes available. Killing the patient because they're in pain is a sub-optimal solution (to put it mildly).

I can't imagine any situation where permadeath is preferable to suffering. With enough tech and time, all kinds of suffering can be eliminated, and their effects can be reversed. But permadeath is, by definition, non-reversible and non-repairable. 

If one must choose between a permanent loss of human life and some temporary discomfort, it doesn't make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort. 

Replies from: Duncan_Sabien, lc
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-10-21T20:15:17.944Z · LW(p) · GW(p)

(I agree wholeheartedly with almost everything you've said here, and have strong upvoted, but I want to make space for the fact that some people don't make sense, and some people reflectively endorse not making sense, and so while I will argue against their preference for death over discomfort, I will also fight for their right to make the wrong choice for themselves, just as I fight for your and my right to make the correct choice for ourselves.  Unless there is freedom for people to make wrong choices, we can never move beyond a socially-endorsed "right" choice to something Actually Better.)

comment by lc · 2022-05-31T03:43:37.907Z · LW(p) · GW(p)

Something is handicapping your ability to imagine what the "worst possible discomfort" would be.

Replies from: RomanS
comment by RomanS · 2022-05-31T11:55:17.374Z · LW(p) · GW(p)

The thing is: regardless of how bad the worst possible discomfort is, dying is still a rather stupid idea, even if you have to endure the discomfort for millions of years. Because if you live long enough, you can find a way to fix any discomfort. 

I wrote in more detail about it here [LW · GW].

comment by Matt Goldenberg (mr-hire) · 2021-10-07T13:56:02.830Z · LW(p) · GW(p)

There is a popular idea that some very large amount of suffering is worse than death. I don't subscribe to it. If I'm tortured for X billion years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death - because (by definition) it cannot be repaired.

 

This sweeps a large amount of philosophical issues under the rug by begging the conclusion (that death is the worst thing), and then using it to justify itself (death is the worst thing, and if you die, you're stuck dead, so that's the worst thing).

Replies from: RomanS
comment by RomanS · 2021-10-07T16:45:57.184Z · LW(p) · GW(p)

I predict that most (all?) ethical theories that assume that some amount of suffering is worse than death have internal inconsistencies. 

My prediction is based on the following assumption:

  • permanent death is the only brain state that can't be reversed, given sufficient tech and time

The non-reversibility is the key. 

For example, if your goal is to maximize happiness of every human, you can achieve more happiness if none of the humans ever die, even if some humans will have periods of intense and prolonged suffering. Because you can increase happiness of the humans who suffered, but you can't increase happiness of the humans who are non-reversibly dead.

If your goal is to minimize suffering (without killing people), then you should avoid killing people. Killing people includes withholding life extension technologies (like mind uploading), even if radical life extension will cause some people to suffer for millions of years. You can decrease suffering of the humans who are suffering, but you can't do that for the humans who are non-reversibly dead.

The mere existence of the option of voluntary immortality necessitates some quite interesting changes in ethical theories. 

Personally, I simply don't want to die, regardless of the circumstances. The circumstances might include any arbitrarily large amount of suffering. If a future-me ever begs for death, consider him in need of some brain repair, not in need of death. 

Replies from: Erhannis
comment by Erhannis · 2022-09-24T21:36:43.965Z · LW(p) · GW(p)

While I (a year late) tentatively agree with you (though a million years of suffering is a hard thing to swallow compared to the instinctually almost mundane matter of death), I think there's an assumption in your argument that bears inspection.  Namely, I believe you are maximizing happiness at a given instant in time - the present, or the limit as time approaches infinity, etc.  (Or, perhaps, you are predicating the calculations on the possibility of escaping the heat death of the universe, and being truly immortal for eternity.)

A (possibly) alternate optimization goal: maximize human happiness, summed over time.  See, I was thinking the other day, and it seems possible we may never evade the heat death of the universe.  In such a case, if you only value the final state, nothing we do matters, whether we suffer or go extinct tomorrow.  At the very least, this metric is not helpful, because it cannot distinguish between any two states.  So a different metric must be chosen.  A reasonable substitute seems to me to be to effectively take the integral of human happiness over time and sum it up.  The happy week you had last week is not canceled out by a mildly depressing day today, for instance - it still counts.  Conversely, suffering for a long time may not be automatically balanced out the moment you stop suffering (though I'll grant this goes a little against my instincts).

If you DO assume infinite time, though, your argument may return to being automatically true.  I'm not sure that's an assumption that should be confidently made, though.  If you don't assume infinite time, I think it matters again what precise value you put on death vs. incredible suffering, and that may simply be a matter of opinion, of precise differences in two people's terminal goals.

(Side note: I've idly speculated about expanding the above optimization criteria for the case of all-possible-universes - I forget the exact train of thought, but it ended up more or less behaving in a manner such that you optimize the probability-weighted ratio of good outcomes to bad outcomes (summed across time, I guess).  Needs more thought to become more rigorous etc.)

Replies from: RomanS
comment by RomanS · 2022-09-25T09:51:28.036Z · LW(p) · GW(p)

Our current understanding of physics (and of our future capabilities) is so limited, I assume that our predictions on how the universe will behave trillions of years from now are worthless.

I think we can safely postpone the entire question to the times after we achieve a decent understanding of physics, after we become much smarter, and after we can allow ourselves to invest thousands of years of deep thought on the topic.

comment by gwern · 2023-07-03T22:47:42.650Z · LW(p) · GW(p)

Connectome scanning continues to scale up drastically, particularly on fruit flies. davidad highlights some very recent work:

I could be wrong, but we're still currently unable to get that original C. elegans neural map to do anything (like run a simulated worm body), right?

I think @AndrewLeifer is almost there but, yes, still hasn’t gone all the way to a demonstration of behavior in a virtual environment: "A11.00004 : A functional connectivity atlas of C. elegans measured by neural activation".

Neural processing and dynamics are governed by the details of how neural signals propagate from one neuron to the next through the brain. We systematically measured functional properties of neural connections in the head of the nematode Caenorhabditis elegans by direct optogenetic activation and simultaneous calcium imaging of 10,438 neuron pairs. By measuring responses to neural activation, we extracted the strength, sign, temporal properties, and causal direction of the connections and created an atlas of causal functional connectivity.

We find that functional connectivity differs from predictions based on anatomy, in part, because of extrasynaptic signaling. The measured properties of the connections are encoded in kernels which describe signal propagation in the network and which we fit from the data. Using such kernels, we can run numerical simulations of neural activity in the worm's brain using exclusively information that comes from the data, as opposed to simulations based on the anatomical connectome which require assumptions on many parameters.

We show that functional connectivity better predicts spontaneous activity than anatomy, suggesting that functional connectivity captures properties of the network that are critical for interpreting neural function.

An important feature of his work is that he’s shown functional connectivity is not only strictly more informative than anatomical connectomics, but that simulations actually get worse when incorporating connectomic constraints (because e.g. hormones work external to synapses).
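A minimal sketch of what simulating from fitted signal-propagation kernels could look like, assuming a purely linear model with invented kernel shapes, connection strengths, and network size (an illustration of the idea in the abstract above, not Leifer's code):

```python
# Toy kernel-based simulation: each directed neuron pair carries a signed,
# temporally extended response; network activity is the sum of kernel-filtered
# histories plus external perturbations. All parameters here are made up.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps, dt = 5, 2000, 0.01            # tiny network, 20 s at 10 ms steps

amp = rng.normal(0.0, 0.2, (n_neurons, n_neurons))  # hypothetical signed connection strengths
tau_rise, tau_decay = 0.1, 1.0                       # hypothetical kernel time constants (s)
kernel_t = np.arange(0, 5, dt)
base_kernel = np.exp(-kernel_t / tau_decay) - np.exp(-kernel_t / tau_rise)
base_kernel /= base_kernel.max()

def simulate(stimulus):
    """Activity = stimulus plus kernel-filtered input from every other neuron's past."""
    activity = np.zeros((n_neurons, n_steps))
    for step in range(1, n_steps):
        hist = activity[:, max(0, step - len(base_kernel)):step]
        k = base_kernel[:hist.shape[1]][::-1]        # align kernel lags with history
        drive = amp @ (hist * k).sum(axis=1) * dt
        activity[:, step] = stimulus[:, step] + drive
    return activity

# "Perturb" neuron 0 with a brief optogenetic-style pulse and read out the rest.
stim = np.zeros((n_neurons, n_steps))
stim[0, 100:150] = 1.0
responses = simulate(stim)
print(responses[:, 500])   # downstream responses implied by the fitted kernels
```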

Also relevant: "A leaky integrate-and-fire computational model based on the connectome of the entire adult Drosophila brain reveals insights into sensorimotor processing", Shiu et al 2023:

The forthcoming assembly of the adult Drosophila melanogaster central brain connectome, containing over 125,000 neurons and 50 million synaptic connections, provides a template for examining sensory processing throughout the brain.

Here, we create a leaky integrate-and-fire computational model of the entire Drosophila brain, based on neural connectivity and neurotransmitter identity, to study circuit properties of feeding and grooming behaviors.

We show that activation of sugar-sensing or water-sensing gustatory neurons in the computational model accurately predicts neurons that respond to tastes and are required for feeding initiation. Computational activation of neurons in the feeding region of the Drosophila brain predicts those that elicit motor neuron firing, a testable hypothesis that we validate by optogenetic activation and behavioral studies. Moreover, computational activation of different classes of gustatory neurons makes accurate predictions of how multiple taste modalities interact, providing circuit-level insight into aversive and appetitive taste processing. Our computational model predicts that the sugar and water pathways form a partially shared appetitive feeding initiation pathway, which our calcium imaging and behavioral experiments confirm. Additionally, we applied this model to mechanosensory circuits and found that computational activation of mechanosensory neurons predicts activation of a small set of neurons comprising the antennal grooming circuit that do not overlap with gustatory circuits, and accurately describes the circuit response upon activation of different mechanosensory subtypes.

Our results demonstrate that modeling brain circuits purely from connectivity and predicted neurotransmitter identity generates experimentally testable hypotheses and can accurately describe complete sensorimotor transformations.
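For intuition, here is a toy version of that general recipe (weights derived only from a synapse-count matrix and a predicted excitatory/inhibitory sign, then leaky integrate-and-fire dynamics, then "computational activation" of a chosen sensory population). Every number below is invented, and the network is tiny compared to the >125,000-neuron fly model:

```python
# Minimal sketch, not Shiu et al.'s code: connectome-count-times-sign weights,
# LIF dynamics, and constant drive to a stand-in "sugar-sensing" population.
import numpy as np

rng = np.random.default_rng(1)
n = 50                                         # toy network size
syn_counts = rng.poisson(0.2, (n, n))          # hypothetical connectome: synapses from j onto i
sign = np.where(rng.random(n) < 0.7, 1, -1)    # predicted neurotransmitter sign per presynaptic cell
W = syn_counts * sign[np.newaxis, :] * 0.5     # weight = count x sign x one free scale factor

tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 1.0   # ms units, arbitrary threshold
v = np.zeros(n)
spike_counts = np.zeros(n, dtype=int)
driven = np.arange(5)                          # stand-in for the activated gustatory neurons

for step in range(500):
    ext = np.zeros(n)
    ext[driven] = 0.15                         # constant drive to the activated population
    spikes = v >= v_thresh
    spike_counts += spikes
    v[spikes] = v_reset
    v += dt / tau * (-v) + W @ spikes.astype(float) + ext   # leaky integration

print("most strongly recruited neurons:", np.argsort(spike_counts)[-5:])
```

The interesting empirical claim in the paper is that this kind of bare-bones model, with essentially no fitted physiology, already predicts which downstream neurons respond.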

"Connectome-constrained deep mechanistic networks predict neural responses across the fly visual system at single-neuron resolution", Lappalainen et al 2023:

We can now measure the connectivity of every neuron in a neural circuit, but we are still blind to other biological details, including the dynamical characteristics of each neuron. The degree to which connectivity measurements alone can inform understanding of neural computation is an open question.

Here we show that with only measurements of the connectivity of a biological neural network, we can predict the neural activity underlying neural computation. We constructed a model neural network with the experimentally determined connectivity for 64 cell types in the motion pathways of the fruit fly optic lobe but with unknown parameters for the single neuron and single synapse properties. We then optimized the values of these unknown parameters using techniques from deep learning, to allow the model network to detect visual motion.

Our mechanistic model makes detailed experimentally testable predictions for each neuron in the connectome. We found that model predictions agreed with experimental measurements of neural activity across 24 studies.

Our work demonstrates a strategy for generating detailed hypotheses about the mechanisms of neural circuit function from connectivity measurements. We show that this strategy is more likely to be successful when neurons are sparsely connected—a universally observed feature of biological neural networks across species and brain regions.
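A rough sketch of that strategy under my own simplifying assumptions (a tiny rate network, an invented delayed-copy task, and only two families of free parameters), rather than the authors' actual model: connectivity is frozen to the "measured" wiring, and the unknown single-neuron and single-synapse parameters are fitted by gradient descent on the task.

```python
# Connectome-constrained fitting sketch: fixed connectivity mask, trainable
# per-neuron time constants and per-synapse gains, optimized on an invented task.
import torch

torch.manual_seed(0)
n, T = 30, 40
mask = (torch.rand(n, n) < 0.1).float()             # "measured" connectome: which synapses exist
log_tau = torch.nn.Parameter(torch.zeros(n))        # unknown per-neuron time constants
gain = torch.nn.Parameter(torch.randn(n, n) * 0.1)  # unknown per-synapse strengths

def run(inputs):                                    # inputs: (T, n) external drive
    r = torch.zeros(n)
    outputs = []
    for t in range(T):
        tau = torch.exp(log_tau) + 1.0
        w = gain * mask                             # only connectome-allowed synapses carry signal
        r = r + (1.0 / tau) * (-r + torch.tanh(w @ r + inputs[t]))
        outputs.append(r)
    return torch.stack(outputs)

# Invented training signal: neuron 0 should report a delayed copy of neuron 1's input.
inputs = torch.zeros(T, n)
inputs[5:15, 1] = 1.0
target = torch.zeros(T)
target[10:20] = 1.0

opt = torch.optim.Adam([log_tau, gain], lr=0.05)
for step in range(300):
    opt.zero_grad()
    out = run(inputs)
    loss = ((out[:, 0] - target) ** 2).mean()
    loss.backward()
    opt.step()
print("final loss:", float(loss))   # the fitted parameters are the testable predictions
```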

Replies from: PeterMcCluskey
comment by delton137 · 2021-10-08T14:04:04.577Z · LW(p) · GW(p)

I want to point out that there has been a small amount of progress in the last 10 years on the problem of moving from connectome to simulation, rather than no progress. 

First, there has been interesting work at the JHU Applied Physics Lab which extends what Busbice was trying to do when he tried to run a simulation of C. elegans in a Lego Mindstorms robot (by the way, that work was very much overhyped by Busbice and in the media, so it's fitting that you didn't mention it). They use a basic integrate-and-fire model to simulate the neurons (which is probably not very accurate here because C. elegans neurons don't actually seem to spike much and seem to rely on subthreshold activity more so than in other organisms). To assign weights to the different synapses they used what appears to be a very crude metric - the weight was determined in proportion to the total number of synapses the two neurons on either side of the synapse share. Despite the crudeness of their approach, the simulated worm did manage to reverse its direction when bumping into walls.  I believe this work was a project that summer interns did without a lot of funding, which makes it more impressive in my mind than it might seem at first glance. 
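A guess at what that weighting heuristic amounts to in code (my reconstruction with a made-up count matrix and a per-neuron normalization I have assumed, not the APL team's implementation):

```python
# Crude connectome-to-weight heuristic: weight proportional to the number of
# anatomical synapses a neuron pair shares, normalized per postsynaptic neuron.
import numpy as np

# synapse_counts[i, j] = number of synapses from neuron j onto neuron i (toy stand-in)
synapse_counts = np.array([[0, 3, 1],
                           [2, 0, 0],
                           [4, 1, 0]], dtype=float)

row_totals = synapse_counts.sum(axis=1, keepdims=True)
weights = synapse_counts / np.maximum(row_totals, 1)   # each weight is that pair's share of input
print(weights)
```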

Another line of work that seems worth pointing out is this 2018 work from Janelia simulating "hexagonal cells" in the Drosophila visual system: "A Connectome Based Hexagonal Lattice Convolutional Network Model of the Drosophila Visual System". They claim "Our work is the first demonstration, that knowledge of the connectome can enable in silico predictions of the functional properties of individual neurons in a circuit". I skimmed this paper and found it a bit underwhelming since it appears the validation of the model was mostly in terms of summary statistics. 

Finally, for anyone who wants to learn what happened with the OpenWorm project, the CarbonCopies Foundation did a workshop in June 2021 with Stephen Larson. A recording of the 4-hour event is online. I was present for a bit of it at the time it aired, but my recollection is dim. I believe part of the issue they ran into was figuring out how to simulate the physiology of the worm body (i.e. all the non-neuronal cells). Some people in the OpenWorm open source community managed to build a 3D model (you can view it here). If I recall correctly, he mentioned there was some work to embed that model in a fluid dynamics simulation and "wire it" with a crude simulation of the nervous system, and they got it to wiggle in some way that looked plausible. 

Replies from: niconiconi
comment by niconiconi · 2021-10-16T06:00:50.397Z · LW(p) · GW(p)

Thanks for the info. Your comment is the reason why I'm on LessWrong.

comment by Charlie Sanders (charlie-sanders-1) · 2021-10-04T15:05:37.834Z · LW(p) · GW(p)

Imagine you have two points, A and B. You're at A, and you can see B in the distance. How long will it take you to get to B?

Well, you're a pretty smart fellow. You measure the distance, you calculate your rate of progress, maybe you're extra clever and throw in a factor of safety to account for any irregularities during the trip. And you figure that you'll get to point B in a year or so.

Then you start walking.

And you run into a wall. 

Turns out, there's a maze in between you and point B. Huh, you think. Well that's ok, I put a factor of safety into my calculations, so I should be fine. You pick a direction, and you keep walking. 

You run into more walls.

You start to panic. You figured this would only take you a year, but you keep running into new walls! At one point, you even realize that the path you've been on is a dead end — it physically can't take you from point A to point B, and all of the time you've spent on your current path has been wasted, forcing you to backtrack to the start.

Fundamentally, this is what I see happening in various industries: brain scanning, self-driving cars, clean energy, interstellar travel, AI development. The list goes on.

Laymen see a point B in the distance, where we have self-driving cars running on green energy powered by AGIs. They see where we are now. They figure they can estimate how long it'll take to get to that point B, slap on a factor of safety, and make a prediction. 

But the real world of problem solving is akin to a maze. And there's no way to know the shape or complexity of that maze until you actually start along the path. You can think you know the theoretical complexity of the maze you'll encounter, but you can't. 

Replies from: orthonormal
comment by orthonormal · 2021-10-06T07:03:18.156Z · LW(p) · GW(p)

On the other hand, sometimes people end up walking right through what the established experts thought to be a wall. The rise of deep learning from a stagnant backwater in 2010 to a dominant paradigm today (crushing the old benchmarks in basically all of the most well-studied fields) is one such case.

In any particular case, it's best to expect progress to take much, much longer than the Inside View indicates. But at the same time, there's some part of the research world where a major rapid shift is about to happen.

comment by Lone Pine (conor-sullivan) · 2021-10-02T05:59:45.100Z · LW(p) · GW(p)

It seems to me that this task has an unclear goal. Imagine I linked you a github repo and said "this is a 100% accurate and working simulation of the worm." How would you verify that? If we had a WBE of Ray Kurzweil, we could at least say "this emulated brain does/doesn't produce speech that resembles Kurzweil's speech." What can you say about the emulated worm? Does it wiggle in some recognizable way? Does it move towards the scent of food?

Replies from: niconiconi, J_Thomas_Moros, jacobjacob
comment by niconiconi · 2021-10-02T06:23:41.496Z · LW(p) · GW(p)

Quoting jefftk:

To see why this isn't enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.

If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.

(just included the quotation in my post)

comment by J Thomas Moros (J_Thomas_Moros) · 2021-10-04T05:12:10.548Z · LW(p) · GW(p)

A study by Alcor trained C. elegans worms to react to the smell of a chemical. They then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the same exact test would show that you had successfully uploaded a worm with that memory vs. one without that memory.

Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans

comment by jacobjacob · 2021-10-04T00:09:01.727Z · LW(p) · GW(p)

You could imagine an experimental design where you train worms to accomplish a particular task (e.g. learn which scent means food - if indeed they use scent; I don't know). You then upload both trained and untrained worms. If trained uploads perform better from the get-go than untrained ones, it's some evidence it's the same worm. To make it more granular, there are a lot of learning tasks from behavioural neuroscience you could adapt. 

You could also do single neuron studies: train the worm on some task, find a neuron that seems to correspond to a particular abstraction. Upload worm; check that the neuron still corresponds to the abstraction. 

Or ablation studies: you selectively impair certain neurons in the live trained worm, uploaded worm, and control worms, in a way that causes the same behaviour change only in the target individual.   

Or you can get causal evidence via optogenetics.
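As a toy illustration of the trained-vs-untrained comparison, with entirely made-up task scores and a plain two-sample t-test standing in for a real analysis:

```python
# Hypothetical evaluation: do emulations of trained worms outperform emulations
# of untrained worms on the task the real worms were trained on?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Imagined task scores (e.g. fraction of time spent near the food-associated scent)
trained_upload_scores   = rng.normal(0.70, 0.10, size=20)
untrained_upload_scores = rng.normal(0.50, 0.10, size=20)

t_stat, p_value = stats.ttest_ind(trained_upload_scores, untrained_upload_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A reliable gap would be evidence that the upload preserved the learned association.
```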

Replies from: niconiconi
comment by niconiconi · 2021-10-04T21:17:32.282Z · LW(p) · GW(p)

Optogenetics was exactly the method proposed by David, I just updated the article and included a full quote.

I originally thought my post was already a mere summary of the previous LW posts by jefftk; excessive quotation could make it too unoriginal, and interested readers could simply read more by following the links. But I just realized that giving sufficient context is important when you're restarting a forgotten discussion.

comment by J Thomas Moros (J_Thomas_Moros) · 2021-10-04T05:47:33.371Z · LW(p) · GW(p)

As someone interested in seeing WBE become a reality, I have also been disappointed by the lack of progress. I would like to understand the reasons for this better. So I was interested to read this post, but you seem to be conflating two different things: the difficulty of simulating a worm and the difficulty of uploading a worm. There are a few sentences that hint that both are unsolved, but they should be clearly separated.

Uploading a worm requires being able to read the synaptic weights, thresholds, and possibly other details from an individual worm. Note that it isn't accurate to say it must be alive. It would be sufficient to freeze an individual worm and then spend extensive time and effort reading that information. Nevertheless, I can imagine that might be very difficult to do. According to wormbook.org, C. elegans has on the order of 7,000 synapses. I am not sure we know how to read the weight and threshold of a synapse. This strikes me as a task requiring significant technological development that isn't in line with existing research programs. That is, most research is not attempting to develop the technology to read specific weights and thresholds. So it would require a significant well-funded effort focused specifically on it. I am not surprised this has not been achieved given reports of lack of funding. Furthermore, I am not surprised there is a lack of funding for this.

Simulating a worm should only require an accurate model of the behavior of the worm nervous system and a simulation environment. Given that all C. elegans have the same 302 neurons, this seems like it should be feasible. Furthermore, the learning mechanism of individual neurons, operation of synapses, etc. should all be things researchers outside of the worm emulation efforts should be interested in studying. Were I wanting to advance the state of the art, I would focus on making an accurate simulation of a generic worm that was capable of learning. Then simulate it in an environment similar to its native environment and try to demonstrate that it eventually learned behavior matching real C. elegans, including under conditions under which C. elegans would learn. That is why I was very disappointed to learn that "simulations are far from realistic because they are not capable of learning." It seems to me this is where the research effort should focus and I would like to hear more about why this is challenging and hasn't already been done.
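To make the missing ingredient concrete, here is a minimal sketch of a fixed-architecture rate network with a simple Hebbian weight update. The dynamics, learning rule, and cell assignments are invented for illustration and are not a proposal for how real C. elegans neurons learn:

```python
# Toy "learnable" network: pair a temperature-like input with a food-like input,
# then show that the temperature input alone recruits the food cells afterwards.
import numpy as np

rng = np.random.default_rng(3)
n = 302                                   # same neuron count as C. elegans, dynamics invented
W = rng.normal(0.0, 0.05, (n, n))
eta, decay = 0.01, 0.001                  # learning rate and weight decay

def step(rate, external, W):
    new_rate = np.tanh(W @ rate + external)
    W = W + eta * np.outer(new_rate, rate) - decay * W   # Hebbian: co-active cells strengthen
    return new_rate, W

pairing = np.zeros(n)
pairing[:10] = 1.0      # hypothetical temperature-sensing cells
pairing[10:20] = 1.0    # hypothetical food-sensing cells
rate = np.zeros(n)
for _ in range(200):
    rate, W = step(rate, pairing, W)

probe = np.zeros(n)
probe[:10] = 1.0        # temperature input only, learning switched off
rate = np.zeros(n)
for _ in range(20):
    rate = np.tanh(W @ rate + probe)
print("food-cell activity after temperature-only probe:", rate[10:20].mean())
```

The point is only that "capable of learning" requires some rule that updates the weights online; the fixed-weight simulations discussed in the post have no such rule.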

I believe that worm uploading is not needed to make significant steps toward showing the feasibility of WBE. The kind of worm simulation I describe would be more than sufficient. At that point, reading the weights and thresholds of an individual worm becomes only an engineering problem that should be solvable given a sufficient investment or level of technological advancement.

comment by jp · 2022-12-23T12:51:50.716Z · LW(p) · GW(p)

This post was a great dive into two topics:

  • How an object-level research field has gone, and what challenges it faces.
  • Forming a model about how technologically optimistic projects go.

I think this post was good in its first edition, but became great after the author displayed an admirable ability to update their mind and willingness to update their post in light of new information.

Overall I must reluctantly only give this post a +1 vote for inclusion, as I think the books are better served by more general rationality content, but in terms of what I would like to see more of on this site, +9. Maybe I'll compromise and give +4.

comment by barak · 2021-10-03T16:38:51.608Z · LW(p) · GW(p)

One complication glossed over in the discussion (both above and below) is that a single synapse, even at a single point in time, may not be well characterized as a simple "weight". Even without what we might call learning per se, the synaptic efficacy seems, upon closer examination, to be a complicated function of the recent history, as well as the modulatory chemical environment. Characterizing and measuring this is very difficult. It may be more complicated in C. elegans than in a mammal, since it's such a small but highly optimized hunk of circuitry.
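For readers unfamiliar with the point, the Tsodyks-Markram short-term plasticity model is one standard textbook way of expressing "efficacy depends on recent spike history"; the parameters below are generic illustration values, not measured C. elegans numbers:

```python
# Tsodyks-Markram-style short-term plasticity: the "weight" delivered by each
# spike depends on facilitation (u) and depleted resources (x) from prior spikes.
import numpy as np

def synaptic_efficacies(spike_times, U=0.4, tau_facil=0.5, tau_rec=0.8):
    """Return the relative efficacy of each spike in a train (times in seconds)."""
    u, x = 0.0, 1.0              # u: release probability, x: available resources
    last_t, out = None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * np.exp(-dt / tau_facil)    # facilitation decays toward baseline
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_rec)  # resources recover toward 1
        u = u + U * (1.0 - u)    # facilitation jump at the spike
        out.append(u * x)        # efficacy: relative amount of transmitter released
        x = x * (1.0 - u)        # depression: resources consumed by the release
        last_t = t
    return np.array(out)

print(synaptic_efficacies([0.0, 0.05, 0.10, 0.15, 1.5]))
# The same synapse transmits noticeably different "weights" depending on its history.
```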

comment by Mikhail Samin (mikhail-samin) · 2021-10-02T21:37:56.051Z · LW(p) · GW(p)

There's a scan of 1 mm^3 of a human brain: 1.4 petabytes, with hundred(s?) of millions of synapses.

https://ai.googleblog.com/2021/06/a-browsable-petascale-reconstruction-of.html

comment by Dustin · 2021-10-02T16:54:26.853Z · LW(p) · GW(p)

Do we at least have some idea of what kind of technology would be needed for reading out connection weights?

Replies from: RomanS, niconiconi
comment by RomanS · 2021-10-04T08:01:50.917Z · LW(p) · GW(p)

A possible indirect way of doing that is by recording the worm's behavior:

  1. record the behavior under many conditions
  2. design an ANN that has the same architecture as the real worm
  3. train many instances of the ANN on a half of the recordings
  4. select an instance that shows the right behavior on the withheld half of the recordings
  5. if the records are long and diverse enough, and if the ANN is realistic enough, the selected instance will have weights that are functionally equivalent to those of the real worm

The same approach could be used to emulate the brain of a specific human. Although the required compute in this case might be too large to become practical in the next decades.
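A very rough sketch of that selection loop, with a toy "worm" generating the recordings and random-restart fitting standing in for a serious statistical modeling framework (every size, architecture, and training detail below is invented):

```python
# Behavior-fitting sketch: train many instances of a fixed-architecture network
# on half of the stimulus/behavior recordings, select by error on the withheld half.
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_neurons, n_motors, T = 4, 30, 2, 400

# Pretend these are recordings of a real worm: sensory input and motor output over time.
stimuli = rng.normal(size=(T, n_sensors))
behavior = np.tanh(stimuli @ rng.normal(size=(n_sensors, n_motors)))  # hidden "true worm"

train_s, test_s = stimuli[:T // 2], stimuli[T // 2:]
train_b, test_b = behavior[:T // 2], behavior[T // 2:]

def make_instance():
    """One candidate ANN with the fixed 'worm-shaped' architecture but random weights."""
    return (rng.normal(0.2, 0.2, (n_sensors, n_neurons)),
            rng.normal(0.0, 0.2, (n_neurons, n_motors)))

def predict(params, s):
    w_in, w_out = params
    return np.tanh(s @ w_in) @ w_out

def fit(params, s, b, lr=0.05, steps=200):
    """Crude stand-in for training: least-squares descent on the readout weights."""
    w_in, w_out = params
    hidden = np.tanh(s @ w_in)
    for _ in range(steps):
        grad = hidden.T @ (hidden @ w_out - b) / len(s)
        w_out = w_out - lr * grad
    return (w_in, w_out)

candidates = [fit(make_instance(), train_s, train_b) for _ in range(20)]
errors = [np.mean((predict(p, test_s) - test_b) ** 2) for p in candidates]
best_idx = int(np.argmin(errors))
print("held-out error of the selected instance:", errors[best_idx])
```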

comment by niconiconi · 2021-10-04T21:10:30.172Z · LW(p) · GW(p)

David believed one can develop optogenetic techniques to do this. Just added David's full quote to the post. 

With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.

comment by niconiconi · 2023-03-02T08:31:47.655Z · LW(p) · GW(p)

One review criticized my post for being inadequate at world modeling - readers who wish to learn more about predictions are better served by other books and posts (but it also praised me for being willing to update the post's content after new information arrived). I don't disagree, but I felt it was necessary to clarify the background against which I wrote it.

First and foremost, this post was meant specifically as (1) a review of the research progress on Whole Brain Emulation of C. elegans, and (2) a request for more information from the community. I first became aware of this research project 10 years ago on Wikipedia, and like everyone else, I thought it would be a milestone of transhumanism. At the beginning of 2020 I remembered it again - it seemed to be stuck in development hell forever, without clear reasons. Frustrated, I decided to find out why.

I knew transhumanism has always been an active topic on LessWrong, so naturally I came here and searched the posts. It was both disappointing and encouraging. There was no up-to-date post beyond the original one from 2011, but I found that some researchers are LW members, and it was likely that I would be able to learn more by asking here, perhaps with an "insider" answer.

Thus, I made the original post. It was not, at all, intended as an exercise in forecasting or world modeling. Then, I was completely surprised by the post's reception. Despite being the first post I've ever made, it received a hundred upvotes within days, and later, it was selected by the editors as a "Curated" homepage post. It even became the reading material at offline meetups! It was completely unexpected. As a result, jefftk - the lead researcher of one project - personally answered my questions. I updated my post to incorporate new information as it arrived.

In conclusion, this post completely served its purpose of gathering new information about the research progress in this field. However, the research I did in the original post (before the update) was incomplete. I totally missed the 2014 review and a 2020 mention, both of which were literally on LW. If I had known the post would be selected as Curated, widely read by a large audience as an exercise in world modeling, and would win a prize, I would have been more patient during my initial research before posting the original version.

comment by Regex · 2023-08-25T09:58:51.938Z · LW(p) · GW(p)

Two years later, there are now brain-wide recordings of C. elegans via calcium imaging. This includes models apparently at least partially predictive of behavior and analyses of individual neuron contributions to behavior. 

If you want the "brain-wide recordings and accompanying behavioral data" you can apparently download them here!

It is very exciting to finally have measurements for this. I still need to do more than skim the paper though. While reading it, here are the questions on my mind:
* What are the simplest individual neuron models that properly replicate each measured neuron activation? (There are different cell types, so take that into account too.)
* If you run those individually measurement-validated neuron models forward in time, do they collectively produce the large scale behavior seen? 
    * If not, why not? What's necessary?
* Are these calcium imaging measurements sufficient to construct the above? (Assume individualized connectomes per-worm are gathered prior instead of using averages across population)
    * If not, what else is necessary? 
    * And if it is sufficient, how do you construct the model parameters from the measurements?         
* Can we now measure and falsify our models of individual neuron learning? 
    * If we need something else, what is that something?

Edit: apparently Gwern is slightly ahead [LW(p) · GW(p)] of me and pointed at Andrew Leifer, whose group produced a functional atlas of C. elegans an entire year ago that also included calcium imaging. Which I'd just totally missed. One missing element is extrasynaptic signaling, which apparently has a large impact on C. elegans behavior. So in order to predict neuron behavior you need to attend to those signals as well.
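As a toy version of the first question in the list above (all data simulated here, not taken from the linked dataset): fit the simplest neuron model one can think of to a calcium-imaging-style trace and check whether it reproduces the measurement.

```python
# Fit a leaky-integrator neuron model to a noisy calcium-style trace; the
# stimulus, ground-truth parameters, and noise level are all invented.
import numpy as np
from scipy.optimize import curve_fit

dt, T = 0.1, 60.0
t = np.arange(0, T, dt)
stimulus = (((t > 10) & (t < 20)) | ((t > 35) & (t < 40))).astype(float)  # known input

def leaky_integrator_trace(t, gain, tau):
    """Fluorescence predicted by a two-parameter neuron model."""
    f = np.zeros_like(t)
    for i in range(1, len(t)):
        f[i] = f[i - 1] + dt * (-f[i - 1] / tau + gain * stimulus[i])
    return f

# Simulated "measurement": a hidden ground-truth neuron plus imaging noise.
rng = np.random.default_rng(5)
measured = leaky_integrator_trace(t, gain=2.0, tau=3.0) + rng.normal(0, 0.05, len(t))

popt, pcov = curve_fit(leaky_integrator_trace, t, measured,
                       p0=[1.0, 1.0], bounds=([0.0, 0.1], [10.0, 10.0]))
print("recovered gain and time constant:", popt)
# The residuals between model and measurement are what would falsify this neuron model.
```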
 

comment by Mateusz Bagiński (mateusz-baginski) · 2022-12-30T15:07:37.219Z · LW(p) · GW(p)

This is a total nitpick but there is a typo in the title of both this post and the one from jefftk referenced by it. It's "C. elegans", not "C. elgans".

Replies from: niconiconi
comment by niconiconi · 2023-02-28T14:05:04.096Z · LW(p) · GW(p)

Typo has been fixed.

comment by slugbait93 · 2022-02-25T20:10:57.769Z · LW(p) · GW(p)

There's a fundamental difficulty with these sorts of attempts to emulate entire nervous systems (which gets exponentially worse as you scale up) that I don't think gets enough attention:  failure of averaging. See this paper on simulating single neurons: https://pubmed.ncbi.nlm.nih.gov/11826077/#:~:text=Averaging%20fails%20because%20the%20maximal,does%20not%20contain%20its%20mean.

The abstract:

"Parameters for models of biological systems are often obtained by averaging over experimental results from a number of different preparations. To explore the validity of this procedure, we studied the behavior of a conductance-based model neuron with five voltage-dependent conductances. We randomly varied the maximal conductance of each of the active currents in the model and identified sets of maximal conductances that generate bursting neurons that fire a single action potential at the peak of a slow membrane potential depolarization. A model constructed using the means of the maximal conductances of this population is not itself a one-spike burster, but rather fires three action potentials per burst. Averaging fails because the maximal conductances of the population of one-spike bursters lie in a highly concave region of parameter space that does not contain its mean. This demonstrates that averages over multiple samples can fail to characterize a system whose behavior depends on interactions involving a number of highly variable components."

Historically, a similar problem was discovered by the US air force when trying to design the cockpits of fighter jets. They took anatomical measurements from hundreds of pilots and designed a cockpit based on the average values, under the assumption that it would fit most pilots reasonably well. In actuality, it didn't fit anyone (they eventually solved the problem by making everything adjustable): https://www.thestar.com/news/insight/2016/01/16/when-us-air-force-discovered-the-flaw-of-averages.html
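The failure of averaging is easy to reproduce numerically with an invented two-parameter toy (an annulus standing in for the paper's concave region of conductance space, not its actual five-conductance model):

```python
# Toy "failure of averaging": parameter sets producing the target behavior form
# a ring, so the mean of those parameter sets lands in the hole of the ring.
import numpy as np

rng = np.random.default_rng(6)

def produces_target_behavior(g1, g2):
    """Stand-in for 'is a one-spike burster': true only on an annulus in parameter space."""
    r = np.sqrt(g1 ** 2 + g2 ** 2)
    return (0.8 < r) & (r < 1.2)

# Sample many "experimental preparations" and keep the ones with the right behavior.
g1 = rng.uniform(-2, 2, 100_000)
g2 = rng.uniform(-2, 2, 100_000)
ok = produces_target_behavior(g1, g2)
mean_g1, mean_g2 = g1[ok].mean(), g2[ok].mean()

print("fraction of sampled models with the behavior:", ok.mean())
print("mean parameters:", mean_g1, mean_g2)
print("does the averaged model show the behavior?",
      bool(produces_target_behavior(mean_g1, mean_g2)))   # typically False
```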

comment by Ruby · 2021-10-04T00:21:26.574Z · LW(p) · GW(p)

Curated. The topic of uploads and whole-brain emulation is a frequent one, and one whose feasibility is always assumed to be true. While this post doesn't argue otherwise, it's fascinating to hear where we are with the technology for this.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-04T01:05:50.902Z · LW(p) · GW(p)

Post is about tractability / difficulty, not feasibility

Replies from: ESRogs, Ruby
comment by ESRogs · 2021-10-04T22:15:35.825Z · LW(p) · GW(p)

What's the distinction you're making? A quick google suggests this as the definition for "feasibility":

the state or degree of being easily or conveniently done

This matches my understanding of the term. It also sounds a lot like tractability / difficulty.

Are you thinking of it as meaning something more like "theoretical possibility"?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-05T08:43:41.310Z · LW(p) · GW(p)

That isn't the definition I'm familiar with -- the one I was using is in Webster's:

1: capable of being done or carried out

comment by Ruby · 2021-10-04T04:09:55.649Z · LW(p) · GW(p)

Indeed! By "feasibility is assumed to be true", I meant in other places and posts.

comment by Yonatan Cale (yonatan-cale-1) · 2022-06-24T06:41:55.575Z · LW(p) · GW(p)

Hey, 

TL;DR I know a researcher who's going to start studying C. elegans worms in a way that seems interesting as far as I can tell. Should I do something about that?

 

I'm trying to understand whether this is interesting for our community, specifically as a path to brain emulation, which I wonder could be used to (A) prevent people from dying, and/or (B) create a relatively aligned AGI.

This is the most relevant post I found on LW/EA (so far).

I'm hoping someone with more domain expertise can say something like:

  • "OMG we should totally extra fund this researcher and send developers to help with the software and data science and everything!"
  • "This sounds pretty close to something useful but there are changes I'd really like to see in that research"
  • "Whole brain emulation is science fiction, we'll obviously destroy the world or something before we can implement it"
  • "There is a debate on whether this is useful, the main positions are [link] and [link], also totally talk to [person]"

Any chance someone can give me direction?

Thx!

 

(My background is in software, not biology or neurology)

comment by Neil Howard (neil-howard) · 2022-05-01T17:45:54.187Z · LW(p) · GW(p)

OpenWorm seems to be a project with realistic goals but unrealistic funding, in contrast to the EU's Human Brain Project (HBP): a project with an absurd amount of funding and absurdly unrealistic goals. Even ignoring the absurd endpoint, any 1-billion-euro project should be split up into multiple smaller ones with time to take stock of things in between.

What could the EU have achieved by giving $50 million to OpenWorm to spend in 3 years (before getting more ambitious)? 

Would it not have done so in the first place because of hubris? Is the worm somehow too simple to be worthy of investigation? The complexity of 300 C. elegans neurons is way, way more than the superficial perception of 300 times the complexity of 1 artificial neuron. 300 real neurons provide way more degrees of freedom than any scientist would like to be dealing with. 

$50 million on OpenWorm surely would have made significant progress on the methodologies required, without being distracted by the supercomputing sideshow.

What would you have done with the $50 million?

comment by PeterMcCluskey · 2021-10-08T21:13:36.541Z · LW(p) · GW(p)

My impression of OpenWorm was that it was not focused on WBE. It tried to be a more general-purpose platform for biological studies. It attracted more attention than a pure WBE project would, by raising vague hopes of also being relevant to goals such as studying Alzheimer's.

My guess is that the breadth of their goals led them to become overwhelmed by complexity.

comment by markusrobam · 2021-10-04T03:44:04.676Z · LW(p) · GW(p)

It's possible to donate to OpenWorm via their website to accelerate its development.

comment by Flaglandbase · 2021-10-02T02:56:53.070Z · LW(p) · GW(p)

Maybe the problem is figuring out how to realistically simulate a SINGLE neuron, which could then be extended 302 or 100,000,000,000 times. Also, due to shorter generation times, any random C. elegans has 50 times more ancestors than any human, so evolution may have had time to make their neurons more complex.
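For scale, "realistically simulating a single neuron" at the textbook level already looks something like a Hodgkin-Huxley model. The sketch below uses the classic squid-axon parameters, which are not C. elegans values (C. elegans neurons are mostly non-spiking), so it illustrates the kind of modeling involved rather than the worm itself:

```python
# Classic Hodgkin-Huxley point neuron, integrated with forward Euler.
import numpy as np

def alpha_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * np.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * np.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + np.exp(-(v + 35) / 10))

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_na, g_k, g_l = 120.0, 36.0, 0.3
e_na, e_k, e_l = 50.0, -77.0, -54.4
c_m, dt = 1.0, 0.01

v, n, m, h = -65.0, 0.32, 0.05, 0.6
spikes = 0
for step in range(int(50 / dt)):                 # 50 ms of simulated time
    i_ext = 10.0 if 5 < step * dt < 45 else 0.0  # injected current (uA/cm^2)
    i_na = g_na * m**3 * h * (v - e_na)
    i_k = g_k * n**4 * (v - e_k)
    i_l = g_l * (v - e_l)
    dv = (i_ext - i_na - i_k - i_l) / c_m
    n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
    m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
    v_new = v + dt * dv
    if v < 0 <= v_new:                           # crude spike detection on upward zero crossing
        spikes += 1
    v = v_new

print("spikes in 50 ms at 10 uA/cm^2:", spikes)
```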