OpenWorm and differential technological development
post by John_Maxwell (John_Maxwell_IV) · 2014-05-19T04:47:00.042Z · Legacy · 30 comments
According to Robin Hanson's arguments in this blog post, we want to promote research into cell modeling technology (ideally at the expense of research into faster computer hardware). That would mean funding this Kickstarter, which is ending in 11 hours (it may still succeed; there are a few tricks for pushing borderline Kickstarters through). I already pledged $250; I'm not sure if I should pledge significantly more on the strength of one Hanson blog post. Thoughts from anyone? (I also encourage other folks to pledge! Maybe we can name neurons after characters in HPMOR or something. EDIT: Or maybe funding OpenWorm is a bad idea; see this link.)
People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later. A serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons. There was also a serious effort by the people who set up hotlines between leaders to be used to quickly communicate about nuclear attacks (e.g., to help quickly convince a leader in country A that a fishy object on their radar isn’t an incoming nuclear attack).
30 comments
Comments sorted by top scores.
comment by slarson · 2014-05-19T06:57:07.746Z · LW(p) · GW(p)
Thanks for pledging and encouraging others to pledge! Full disclosure: I'm the coordinator for the project. I've been having a look through the discussions on your references and I'd offer the following thoughts:
I think Hanson's three-part breakdown (computing power, brain scanning, cell modeling) is useful, and I agree that cell modeling is an important research investment that has not had enough focus, either academically or industrially. Better cell models are one of the technological advances that OpenWorm helps to address, thanks to its approach of modeling a complete organism with so few cells.
I would add that none of these discussions seem to pick up on the additional benefits of cell modeling outside of the context of brain emulation, which include advances in complexity science in general, increased potential for tissue regeneration and repair, and better diagnostics and therapies for diseases. Remember, all living things have cells, so advanced cell modeling could give us a debugger and an editor for biology unlike anything we've ever seen.
In terms of funding open science via crowdfunding as a differential technological development strategy, I would also point out that the results are held in a public commons (GitHub in our case), and this transparency and open access may be an important factor. Work like this is likely going to be done at some point, but if it isn't publicly funded then it is likely to be privately funded and privately held, which may add to asymmetrical control over these technologies. Personally, I prefer power to be distributed, as a bulwark against tyranny. The more of these technological advances are out in the open, I think, the less likely their power will be concentrated in the hands of a few and used improperly.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-19T07:05:37.577Z · LW(p) · GW(p)
Hi Stephen, thanks for chiming in! Did you take a look at the post I linked to at the bottom?
Replies from: slarson
↑ comment by slarson · 2014-05-19T07:23:59.760Z · LW(p) · GW(p)
I did -- could you summarize which parts of it you specifically find discouraging? There are quite a few things you could be resonating with there.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-19T07:53:42.510Z · LW(p) · GW(p)
Sure. So to give you some context, WBE is an acronym for Whole Brain Emulation. You can read this report by the Future of Humanity Institute at Oxford where they detail the technological development that'd be required to achieve this. If it did end up getting achieved, the consequences for society could be enormous. An emulated brain could do the same work as a software developer, researcher, etc., but the emulated brain could be run at a much faster subjective rate (say, doing a week's worth of subjective thinking in the space of 5 minutes) and for less money (given continued decreases in the cost of computer hardware). Robin Hanson, the GMU economics professor who wrote the "Bad Emulation Advance" blog post, is very interested in ems... here is a presentation where he fleshes out the possibilities of an emulation-filled future in some detail. It's not a terrible future, but it's not a terrifically bright one either.
One future that probably would be pretty terrible is if we had an extremely intelligent artificial intelligence that was built on some technologies inspired by the human brain but was not a high-resolution exact copy of any living human ("neuromorphic AI"), and was not carefully constructed to work towards achieving human values. You can read this for a short summary of why this would likely be terrible, or this for a longer, more fleshed-out argument (with entertaining background info).
So this is why we want to tread carefully. It's suspected that neuromorphic AI is harder to construct in a robust, provably safe way, and given the dangers of haphazardly constructed superintelligences, it seems like we'd rather see more mathematically pure AI methods advance, if any at all. That's what this link is in reference to.
I know that was a pretty high density of crazy ideas in a pretty short period of time, and there are some leaps of reasoning that I left out... let me know if you've got any thoughts or questions. To a futurist like me, things like better diagnostics and therapies for diseases, while interesting and exciting, are small potatoes in the grand scheme of things. When there's the possibility of society being completely transformed by a technology, that's when I start to pay attention. (Hey, it's happened plenty of times before. Imagine explaining the modern world to a Cro-Magnon.) So the main way I'm seeing your project is in terms of its incremental advances along various technological dimensions that could lead to societal transformation. Hopefully that makes some sense.
Replies from: private_messaging
↑ comment by private_messaging · 2014-05-19T17:49:24.104Z · LW(p) · GW(p)
A human is very massively sub-mankind-level intelligent, and some rough approximation running at sub-realtime speeds, with a daily sustenance cost several orders of magnitude higher, is even more so.
Granted, it's a great plot device once you give it superpowers, and so there have been many high-profile movies concerning such scenarios, and you see worlds destroyed by AI on the big screen. And your internal probability evaluator - evolved before CGI - uses the frequency of scenarios you've seen with your own eyes.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-20T04:04:18.922Z · LW(p) · GW(p)
A human is very massively sub-mankind-level intelligent, and some rough approximation running at sub-realtime speeds, with a daily sustenance cost several orders of magnitude higher, is even more so.
No disagreement there.
Granted, it's a great plot device once you give it superpowers, and so there have been many high-profile movies concerning such scenarios, and you see worlds destroyed by AI on the big screen. And your internal probability evaluator - evolved before CGI - uses the frequency of scenarios you've seen with your own eyes.
Reversed stupidity isn't intelligence.
Replies from: private_messaging, David_Gerard
↑ comment by private_messaging · 2014-05-20T11:01:20.233Z · LW(p) · GW(p)
No disagreement there.
Then why are you discussing sudden superintelligence? The faster cell simulation technologies advance, the weaker the hardware they'll run on.
Reversed stupidity isn't intelligence.
Direct stupidity is not intelligence either. The a priori likelihood of an arbitrary made-up prediction being anywhere near correct is pretty damn low. If I tell you that 7 932 384 626 is some lottery number from yesterday, you may believe it, but once you notice that it matches the digits of pi fairly close to the start, your credence should drop. A lot.
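To make that update explicit, here's a quick Bayes sketch in Python; every number in it is a hypothetical stand-in, chosen only to show the direction and rough size of the shift:

```python
# A toy Bayesian update for the lottery-number example; every number here is a
# hypothetical stand-in, chosen only to illustrate the direction of the shift.
prior_genuine = 0.9                 # initial credence that the report is a real draw
prior_made_up = 1 - prior_genuine   # credence that the number was invented

# Likelihood of the reported number matching a famous digit string (pi) under each hypothesis.
p_match_if_genuine = 1e-10   # a random 10-digit draw almost never lands on pi's digits
p_match_if_made_up = 0.05    # someone inventing a number plausibly reaches for familiar digits

posterior_genuine = (p_match_if_genuine * prior_genuine) / (
    p_match_if_genuine * prior_genuine + p_match_if_made_up * prior_made_up
)
print(f"credence that it was a genuine draw: {posterior_genuine:.1e}")  # ~1.8e-08
```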
Replies from: othercriteria, John_Maxwell_IV
↑ comment by othercriteria · 2014-05-20T12:46:34.176Z · LW(p) · GW(p)
The faster cell simulation technologies advance, the weaker the hardware they'll run on.
If hardware growth strictly followed Moore's Law and CPUs (or GPUs, etc.) were completely general-purpose, this would be true. But, if cell simulation became a dominant application for computing hardware, one could imagine instruction set extensions or even entire architecture changes designed around it. Obviously, it would also take some time for software to take advantage of hardware change.
Replies from: private_messaging
↑ comment by private_messaging · 2014-05-20T13:34:07.465Z · LW(p) · GW(p)
Well, first it has to become dominant enough (for which it'd need to be common enough, for which it needs to be useful enough - used for what?), then the hardware specialization is not easy either, and on top of that specialized hardware locks the designs in (prevents easy modification and optimization). Especially if we're speaking of specializing beyond how GPUs are specialized for parallel floating point computations.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-20T17:51:16.567Z · LW(p) · GW(p)
I'm afraid you're going to have to explain yourself better if you want me to respond... I confess I don't see clearly how what you're saying pertains to our argument.
Replies from: private_messaging
↑ comment by private_messaging · 2014-05-21T07:17:20.938Z · LW(p) · GW(p)
The point is, cell simulation won't yield this stupid AI movie plot threat that you guys are concerned about. This is because it doesn't result in sudden superintelligence, but a very gradual transition.
And insofar as there's some residual possibility that it could, this possibility is lessened by working on it earlier.
I'm puzzled why you focus on the AI movie plot threat when discussing any AI related technology, but my suspicion is that it's because it is a movie plot threat.
edit: as for the "robust provably safe AI", as a subset of being "safe" an AI must be able to look at - say - an electronic circuit diagram (or an even lower-level representation) and tell if said circuit diagram implements a tortured sentient being. You'd need neurobiology to merely define what's bad. The problem is that "robust provably safe" is nebulous enough that you can't link it to anything concrete.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-21T23:36:29.378Z · LW(p) · GW(p)
The point is, cell simulation won't yield this stupid AI movie plot threat that you guys are concerned about. This is because it doesn't result in sudden superintelligence, but a very gradual transition.
You seem awfully confident. I agree that you're likely right but I think it's hard to know for sure and most people who speak on this issue are too confident (including you and both EY/RH in their AI foom debate).
And insofar as there's some residual possibility that it could, this possibility is lessened by working on it earlier.
Just to clarify: so you mostly agree with the "Bad Emulation Advance" blog post?
It's not clear to me that a gradual transition completely defeats the argument against neuromorphic AI. If neuromorphic AI is less predictable (to put things poetically, "harder to wield") than AI constructed so that it provably satisfies certain properties, then you can imagine humanity wielding a bigger and bigger weapon that's hard to control. How long do you think the world would last if everyone had a pistol that fired tactical nuclear weapons? What if the pistol has a one in six chance of firing in a random direction?
I'm puzzled why you focus on the AI movie plot threat when discussing any AI related technology...
Want to point to a specific case where I did that?
edit: as for the "robust provably safe AI", as a subset of being "safe" an AI must be able to look at - say - an electronic circuit diagram (or an even lower-level representation) and tell if said circuit diagram implements a tortured sentient being. You'd need neurobiology to merely define what's bad. The problem is that "robust provably safe" is nebulous enough that you can't link it to anything concrete.
That's an interesting point. I think it probably makes sense to think of "robust provably safe" as a continuous parameter. You've got your module that determines what's ethical and what isn't, your module that makes predictions, and your module that generates plans. The probability of your AI being "safe" is the product of the probabilities of all your modules being "safe". If a neuromorphic AI self-modifies in a less predictable way, that seems like a loss, keeping the ethics module constant.
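To make the multiplicative point concrete, here's a minimal sketch; the module names and probabilities are hypothetical, and it assumes the modules fail independently:

```python
# Hypothetical per-module probabilities of behaving safely; assuming independence,
# the overall probability of the AI being "safe" is just their product.
modules = {"ethics": 0.99, "prediction": 0.95, "planning": 0.95}

p_safe = 1.0
for p in modules.values():
    p_safe *= p
print(f"overall P(safe): {p_safe:.3f}")  # ~0.893

# If self-modification makes the planning module much less predictable, the product
# drops sharply even though the ethics module is untouched.
modules["planning"] = 0.6
p_safe_neuromorphic = 1.0
for p in modules.values():
    p_safe_neuromorphic *= p
print(f"with a less predictable planner: {p_safe_neuromorphic:.3f}")  # ~0.564
```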
Replies from: private_messaging
↑ comment by private_messaging · 2014-05-22T05:38:08.061Z · LW(p) · GW(p)
You seem awfully confident. I agree that you're likely right but I think it's hard to know for sure and most people who speak on this issue are too confident (including you and both EY/RH in their AI foom debate).
There's a false equivalence, similar to what'd happen if I were predicting "the lottery will not roll 12345134" and someone else were predicting "the lottery will roll 12345134". Predicting some sudden change in a growth curve along with the cause of such change is a guess into a large space of possibilities; if such a guess is no better supported than its negation, it's extremely unlikely and its negation is much more likely.
If neuromorphic AI is less predictable
That strikes me as a rather silly way to look at it. The future generations of biological humans are not predictable or controllable either.
If a neuromorphic AI self-modifies in a less predictable way, that seems like a loss, keeping the ethics module constant.
The point is that you need a bottom-up understanding of, for example, suffering, to be able to even begin working on an "ethics module" which recognizes suffering as bad. (We get away without a conscious understanding of such only because we can feel it ourselves and thus implicitly embody a definition of it.) On the road to that, you obviously have cell simulation and other neurobiology.
The broader picture is that with zero clue as to the technical process of actually building the "ethics module", when you look at, say, OpenWorm, and it doesn't seem like it helps build an ethics module, that's not representative in any way of whether it would or would not help, but only of it being a concrete and specific advance while the "ethics module" is too far off and nebulous.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-22T17:49:12.502Z · LW(p) · GW(p)
There's a false equivalence, similar to what'd happen if I were predicting "the lottery will not roll 12345134" and someone else were predicting "the lottery will roll 12345134". Predicting some sudden change in a growth curve along with the cause of such change is a guess into a large space of possibilities; if such a guess is no better supported than its negation, it's extremely unlikely and its negation is much more likely.
This sounds to me like an argument over priors; I'll tap out at this point.
That strikes me as a rather silly way to look at it. The future generations of biological humans are not predictable or controllable either.
Well, do you trust humans with humanity's future? I'm not sure I do.
The point is that you need a bottom-up understanding of, for example, suffering, to be able to even begin working on an "ethics module" which recognizes suffering as bad. (We get away without a conscious understanding of such only because we can feel it ourselves and thus implicitly embody a definition of it.) On the road to that, you obviously have cell simulation and other neurobiology.
Maybe.
Replies from: private_messaging, Lumifer
↑ comment by private_messaging · 2014-05-23T05:49:00.832Z · LW(p) · GW(p)
This sounds to me like an argument over priors; I'll tap out at this point.
If you just make stuff up, the argument will be about priors. Observe: there's a teapot in the asteroid belt.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-23T06:08:40.845Z · LW(p) · GW(p)
Well yeah and I could trivially "defeat" any argument of yours by declaring my prior for it to be very low. My priors for the future are broadly distributed because the world we are in would seem very weird to a hunter-gatherer, so I think it's likely that the world of 6,000 years from now will seem very weird to us. Heck, World War II would probably sound pretty fantastic if you described it to Columbus.
I'll let you have the last word :)
Replies from: private_messaging
↑ comment by private_messaging · 2014-05-23T08:55:42.717Z · LW(p) · GW(p)
Priors can't go arbitrarily high before the sum over incompatible propositions becomes greater than 1.
If we were to copy your brain a trillion times over and ask it to give your "broadly distributed" priors for various mutually incompatible and very specific propositions, the results should sum to 1 (or less than 1 if the set is non-exhaustive), which means that most propositions should receive very, very low priors. I strongly suspect that it wouldn't be even remotely the case - you'd be given a proposition, you couldn't be sure it's wrong "because the world of the future would look strange", and so you'd give it some prior heavily biased towards 0.5, and then over all the propositions the sum would be huge.
When you're making very specific stuff up about what the world of 6000 years from now will look like, it's necessarily quite unlikely that you're right and quite likely that you're wrong, precisely because that future would seem strange. That the future is unpredictable works against specific visions of the future.
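To put numbers on the normalization point (the counts and credences below are hypothetical):

```python
# Toy illustration of why priors over many mutually exclusive, very specific
# propositions must mostly be tiny; all numbers here are hypothetical.
n_propositions = 1_000        # mutually exclusive, detailed visions of the far future
per_proposition_prior = 0.4   # a "can't rule it out" credence assigned to each one

implied_total = n_propositions * per_proposition_prior
print(f"implied total probability: {implied_total}")   # 400.0 -- incoherent; must be <= 1

# Coherence caps the average prior at 1/n, so most specific visions must get
# very, very low credence.
print(f"maximum average prior: {1 / n_propositions}")  # 0.001
```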
↑ comment by David_Gerard · 2014-05-21T07:49:23.566Z · LW(p) · GW(p)
Granted, it's a great plot device once you give it superpowers, and so there have been many high-profile movies concerning such scenarios, and you see worlds destroyed by AI on the big screen. And your internal probability evaluator - evolved before CGI - uses the frequency of scenarios you've seen with your own eyes.
Reversed stupidity isn't intelligence.
PM's precise point is also raised in the Sequences.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2014-05-21T22:18:09.613Z · LW(p) · GW(p)
Yes, and John's point is that just because it's possible to arrive at that conclusion by a wrong method doesn't actually mean it's wrong.
comment by lukeprog · 2014-05-21T17:07:50.758Z · LW(p) · GW(p)
Re: Tyler's comment on "philosophical work" on x-risk reduction being largely a waste of time.
I'm not sure what he means by "philosophical work" here, but if he means "broad strategic work of the type sometimes done at FHI and MIRI," well, the whole point of that work is to help answer questions exactly like the one you're conflicted about here: whether OpenWorm is good or bad differential technological development. It's precisely because of such work at FHI and MIRI that we have the concept of "differential technological development" in the first place, and that we have a collection of arguments for and against different kinds of differential technological development, even if the answers aren't yet clear.
Before one country invades another, or cuts or supplies $1B in funding for some project, it would be nice to know whether doing so would be good or bad. That's why Teller et al. studied the question of whether an atom bomb could ignite the atmosphere, and it's why FHI and MIRI are doing much of the research we do.
Replies from: Brian_Tomasik, Lumifer
↑ comment by Brian_Tomasik · 2014-05-22T05:24:44.745Z · LW(p) · GW(p)
I agree with Luke. It's funny that Tyler, a pundit, says that pundits are useless for reducing existential risk.
Funding concrete projects is often relatively easy. People can see it and get excited about it. Asking to fund higher-level research is harder.
Concrete work is what governments do. OpenWorm is not going to compete with Obama's $3 billion BRAIN Initiative. Pundits make these issues political, raise awareness, and thereby lead to huge amounts of funding down the road.
As for the question of whether to favor WBE: I'd be nervous about it. It could also accelerate non-WBE AI through spillover technology, enhancing interest in general in these topics, etc. I don't have a clear opinion, but the fact that the question is so hard suggests to me that this isn't the most cost-effective place to push. There are many other donation targets where the benefits clearly outweigh the risks.
↑ comment by Lumifer · 2014-05-21T17:27:43.151Z · LW(p) · GW(p)
Before one country invades another or cuts or supplies $1B in funding for some project, it would be nice to know whether doing so would be good or bad.
Even leaving the definitions of "good" and "bad" aside, can you know for a sufficiently long time horizon?
Five-year consequences we can get a reasonable forecast for. Hundred-year consequences, I submit no one has the slightest clue about.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-21T21:18:26.456Z · LW(p) · GW(p)
I like Brian Tomasik's case for trying to increase reflectiveness in the general population. I would expect that increasing reflectiveness in the general population would, if anything, be self-reinforcing; I'd be a bit surprised to see a "reflectiveness backlash" where people would decide they wanted to start being very unreflective as a result of reflectiveness being promoted too strongly. So increasing reflectiveness would seem to me to be an intervention that is pretty likely to steer humanity in a good direction, or at the very least, make it so that wherever we are in 10 years, we'll have a better distribution of outcomes in front of us and factors to lean on.
My guess is there are other interventions that also fall into this category, e.g. improving the quality of political discourse and generally increasing people's rationality. Basically things that would prepare society to better deal with a broad range of tough situations that we might face 100 years out.
Replies from: Lumifer
↑ comment by Lumifer · 2014-05-22T15:17:23.140Z · LW(p) · GW(p)
trying to increase reflectiveness in the general population
I don't know about that. First, I'm automatically suspicious of arguments which go "General population should be more like me!" and, truth be told, intellectuals tend to be rather fond of such arguments.
Second, reflectiveness is like narcissism, except that instead of focusing on your body you focus on your mind. I am not convinced it falls into the "the more the better" category.
Third, the suggested ways of going about it are all very handwavy and wishy-washy.
improving the quality of political discourse and generally increasing people's rationality.
This is a tautology -- "we will improve things by improving things". Try tabooing words like "better" or "improve". What specific, concrete, practical changes would you make to the political discourse? Why do you think these changes will turn out to have positive consequences in a hundred years?
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-22T17:39:21.342Z · LW(p) · GW(p)
First, I'm automatically suspicious of arguments which go "General population should be more like me!" and, truth be told, intellectuals tend to be rather fond of such arguments.
Second, reflectiveness is like narcissism, except that instead of focusing on your body you focus on your mind. I am not convinced it falls into the "the more the better" category.
That sounds like an argument from analogy to me. You're not describing any causal pathway by which reflectiveness makes the world a worse place. You're saying "reflectiveness looks vaguely like this other thing [which is actually totally different], and people seem to think that thing is bad, therefore reflectiveness is bad".
What specific, concrete, practical changes would you make to the political discourse?
I'd like to hear your answers to those questions before I answer them, if you don't mind.
Replies from: Lumifer
↑ comment by Lumifer · 2014-05-22T18:55:51.032Z · LW(p) · GW(p)
Well that would be my prior
Your link leads to a (rather obvious) observation that smarter-on-the-average populations do better, economically, than dumber-on-the-average populations. Though, as the PRC example shows, sociopolitical structures do matter.
However being reflective is not at all the same thing as being smart. I'm viewing statements "People should be more reflective" by someone who's clearly more reflective than the average as epistemically suspect. I don't see how your link addresses this issue.
You're not describing any causal pathway by which reflectiveness makes the world a worse place.
That's not difficult. For example, an increase in reflectiveness is generally accompanied by a decrease in decisiveness. Analysis paralysis is a common problem for highly reflective people. For another example, reflectiveness leads your focus inside, to yourself, and it's not hard to come up with situations where you should be thinking about the outside world more and about yourself less. Navel gazing is a highly reflective but rarely productive activity.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-05-23T06:18:40.606Z · LW(p) · GW(p)
I'm viewing statements "People should be more reflective" by someone who's clearly more reflective than the average as epistemically suspect.
This looks like an ad hominem argument to me.
That's not difficult. For example, an increase in reflectiveness is generally accompanied by a decrease in decisiveness. Analysis paralysis is a common problem for highly reflective people. For another example, reflectiveness leads your focus inside, to yourself, and it's not hard to come up with situations where you should be thinking about the outside world more and about yourself less. Navel gazing is a highly reflective but rarely productive activity.
Thanks for making concrete arguments. Analysis paralysis is a problem I would like to see more of on a societal level :) But maybe things will get tricky if the least reflective people end up moving first and making society's decisions. So maybe what we want to aim for is increasing the reflectiveness of the least reflective people who hold power.
I think it makes sense to disentangle the self-focus that you mention and treat it as an orthogonal vector. I'm in favor of people reflecting about important stuff but not unimportant stuff... I hope that clarifies my goals. Insofar as there's a tradeoff where getting people to reflect about important stuff also means they will waste time reflecting about unimportant stuff, I'm not sure how best to manage that tradeoff.
I'm not finding this conversation especially productive, so I'll let you have the last word if you want it.
Replies from: Lumifer
comment by Pablo (Pablo_Stafforini) · 2014-05-23T08:55:53.311Z · LW(p) · GW(p)
The latest edition of The Economist (May 24th, 2014) includes a story on OpenWorm. Some excerpts:
OpenWorm [is] an informal collaboration of biologists and computer scientists from America, Britain, Russia and elsewhere. On May 19th this group managed to raise $121,076 on Kickstarter, a crowd-funding website. The money will be put towards the creation of the world’s most detailed virtual life form—an accurate, open-source, digital clone of a critter called Caenorhabditis elegans, a 1mm-long nematode that lives in the soils of the world’s temperate regions. C. elegans is a scientific stalwart. It is simple, transparent, easy to feed and easy to breed. As a result it is one of the best-understood organisms in biology. Hermaphrodite individuals (which is most of them) have exactly 959 cells, of which 302 are neurons. The location and the function of every one of these cells is known. Thanks to work begun in the 1970s, scientists even have a complete map—a “connectome”—of how the neurons link up with each other to form the worm’s nervous system. Despite 40 years of technological progress, C. elegans remains the only animal for which such a diagram is available.
Building a complete electronic organism in this way, one that aims to be functionally indistinguishable from its fleshy counterpart, would be quite an achievement. It would also be useful. The human brain, for instance, differs from the worm’s tiny nervous system only in the number and interconnectedness of its neurons. But although plenty of cash and brow-sweat have been thrown at the problem over the years, nobody really knows how the brain works. Having a detailed, proddable model of a far simpler nervous system would be a good first step. And C. elegans is already used to probe everything from basic biochemistry to the actions of drugs in laboratories. The ability to run those tests electronically, with no need for actual worms, and to be reasonably sure that the results will nonetheless be the same as in the real world, would be a boon to biological and medical research.
For now, no one is quite clear what a faithful simulation would look like. The point of a model is to remove unnecessary, cluttering details, while preserving the essence of whatever it is the model-maker wants to study. But even for an organism as well-researched as C. elegans, no one is sure which details are crucial and which extraneous. A living cell is a complicated mess of enzymes, ion channels, messenger molecules and voltages. Attempting to simulate everything faithfully would bring even a supercomputer to its knees.
For the moment, the team is planning systems that will simulate how the worm’s muscle cells work, how its neurons behave and how electrical impulses move from one to the other. There will be physics algorithms that give the worm a realistic simulation of a Petri dish to move through. They will also make sure its virtual muscles can deform its virtual body by the correct amount when they receive a virtual jolt from a virtual neuron.
OpenWorm is available to anyone to play with. And its success on Kickstarter may help raise interest—and cash—from elsewhere. “We’ve thought about applying for traditional grants,” says Dr Larson. “And the success of this crowd-funding proves that there’s public interest in this, which ought to help our case.” If he and his collaborators succeed in their ambitions, then doing biological research may one day become a simple matter of downloading some animals onto your computer, and getting started.