PZ Myers on the Infeasibility of Whole Brain Emulation

post by Peter Wildeford (peter_hurford) · 2012-07-14T18:13:51.915Z · LW · GW · Legacy · 59 comments


From: http://freethoughtblogs.com/pharyngula/2012/07/14/and-everyone-gets-a-robot-pony/

I’ve worked with tiny little zebrafish brains, things a few hundred microns long on one axis, and I’ve done lots of EM work on them. You can’t fix them into a state resembling life very accurately: even with chemical perfusion with strong aldehydes of small tissue specimens that takes hundreds of milliseconds, you get degenerative changes. There’s a technique where you slam the specimen into a block cooled to liquid helium temperatures — even there you get variation in preservation, it still takes 0.1ms to cryofix the tissue, and what they’re interested in preserving is cell states in a single cell layer, not whole multi-layered tissues. With the most elaborate and careful procedures, they report excellent fixation within 5 microns of the surface, and disruption of the tissue by ice crystal formation within 20 microns. So even with the best techniques available now, we could possibly preserve the thinnest, outermost, single cell layer of your brain…but all the fine axons and dendrites that penetrate deeper? Forget those.

[...]

And that’s another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue? All the ones I know of involve chemically modifying the cells and proteins and fluid environment. Does anyone have a scanning technique that records a complete chemical breakdown of every complex component present?

I think they’re grossly underestimating the magnitude of the problem. We can’t even record the complete state of a single cell; we can’t model a nematode with a grand total of 959 cells. We can’t even start on this problem, and here are philosophers and computer scientists blithely turning an immense and physically intractable problem into an assumption.

[...]

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” 

[...]

I’m not anti-AI; I think we are going to make great advances in the future, and we’re going to learn all kinds of interesting things. But reverse-engineering something that is the product of almost 4 billion years of evolution, that has been tweaked and finessed in complex and incomprehensible ways, and that is dependent on activity at a sub-cellular level, by hacking it apart and taking pictures of it? Total bollocks.

59 comments

Comments sorted by top scores.

comment by fubarobfusco · 2012-07-14T19:45:31.040Z · LW(p) · GW(p)

Computer folk often use the terms emulation and simulation to mean two different things, which Myers appears to be conflating. In the sense I'm thinking of, simulation means modeling the components of a system at a relatively low level — such as all the transistors and connections in a CPU — whereas emulation means replicating the functional behavior of a system.

(Of course, these terms are used in a lot of other ways, too. SimCity is neither a simulation nor an emulation in the sense I'm using.)

For instance, a circuit simulator modeling a piece of RAM might keep track of the amount of charge in a particular capacitor that represents a particular bit in memory; but an emulator would just keep track of what numerical value was stored in which addressable location. An emulator doesn't attempt to replicate how the original system works, but rather what it does.
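To make the distinction concrete, here's a toy sketch; all the class names and numbers are invented for the example, and real DRAM is of course far more complicated:

```python
# The "simulator" models the physical substrate; the "emulator" only models the function.

class SimulatedDRAMCell:
    """Low-level simulation: track the charge on the capacitor that stores one bit."""
    def __init__(self):
        self.charge = 0.0                 # stored charge, toy units

    def write(self, bit):
        self.charge = 1.0e-15 if bit else 0.0

    def read(self):
        bit = self.charge > 0.5e-15       # sense-amplifier threshold
        self.charge *= 0.99               # charge leaks over time; real DRAM needs refresh
        return int(bit)

class EmulatedRAM:
    """Emulation: reproduce the observable behavior -- values at addresses -- and nothing else."""
    def __init__(self):
        self.cells = {}

    def write(self, address, value):
        self.cells[address] = value

    def read(self, address):
        return self.cells.get(address, 0)

ram = EmulatedRAM()
ram.write(0x10, 42)
assert ram.read(0x10) == 42               # same observable behavior, no capacitors anywhere
```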

(A non-computational analogy: An artificial heart doesn't duplicate the muscle cells of a natural heart; it duplicates the function of a heart, namely moving blood around. It's not necessary to copy the behavior of each individual muscle cell — to say nothing of each molecule in each muscle cell! — in order to duplicate the function of a heart well enough to keep a person alive for years.)

From what I've read, folks who expect WBE don't expect modeling at the molecular level (a simulation of a brain), but rather at some higher functional level (an emulation, hence the term): the expectation is that some set of functional components — maybe individual neurons; maybe specific brain regions — can be emulated without simulating them.

Replies from: Kaj_Sotala, None
comment by Kaj_Sotala · 2012-07-15T07:42:37.456Z · LW(p) · GW(p)

In the sense I'm thinking of, simulation means modeling the components of a system at a relatively low level — such as all the transistors and connections in a CPU — whereas emulation means replicating the functional behavior of a system.

There seems to be conflicting usage about this.

http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

The term emulation originates in computer science, where it denotes mimicking the function of a program or computer hardware by having its low-level functions simulated by another program. While a simulation mimics the outward results, an emulation mimics the internal causal dynamics (at some suitable level of description). The emulation is regarded as successful if the emulated system produces the same outward behaviour and results as the original (possibly with a speed difference). This is somewhat softer than a strict mathematical definition. [...]

By analogy with a software emulator, we can say that a brain emulator is software (and possibly dedicated non-brain hardware) that models the states and functional dynamics of a brain at a relatively fine-grained level of detail.

In particular, a mind emulation is a brain emulator that is detailed and correct enough to produce the phenomenological effects of a mind.

https://secure.wikimedia.org/wikipedia/en/wiki/Emulation

The word emulation refers to: [...]

The low-level simulation of equipment or phenomena by artificial means, such as by software modeling. Note that simulation may also allow an abstract high-level model.

On the other hand, the top-voted answer at http://stackoverflow.com/questions/1584617/simulator-or-emulator-what-is-the-difference says that

Emulation is the process of mimicking the outwardly observable behavior to match an existing target. The internal state of the emulation mechanism does not have to accurately reflect the internal state of the target which it is emulating.

Simulation, on the other hand, involves modeling the underlying state of the target. The end result of a good simulation is that the simulation model will emulate the target which it is simulating.

comment by [deleted] · 2012-07-14T23:06:59.296Z · LW(p) · GW(p)

Well, when I argued on here last week ( http://lesswrong.com/lw/d80/malthusian_copying_mass_death_of_unhappy/6y2r?context=1#6y2r ) that emulation would be more difficult than people imagine, based on my experience of working on software that does that, people downvoted it and argued "no, people aren't talking about emulation, but about modelling at the molecular level"

Replies from: fubarobfusco
comment by fubarobfusco · 2012-07-14T23:35:35.470Z · LW(p) · GW(p)

Hmm ... from my reading of that conversation, one person said that.

Replies from: None
comment by [deleted] · 2012-07-15T17:38:15.146Z · LW(p) · GW(p)

Fair enough, although multiple people downvoted that comment (it seems to have had some upvotes since to compensate). Even if they downvoted for different reasons though, that's still at least one counterexample of someone who fits into the category "folks who expect WBE".

Emulation without simulation would require not only vastly more understanding of the brain and of cell biology than we have now (most of the problems Myers points out would still be there, though not all) but on top of that all the problems you hit when trying to emulate one system on another, plus a whole lot of problems no-one's ever even conceived because no-one's ever ported an algorithm (for which we have neither source code nor documentation) from a piece of meat to silicon.

comment by David_Gerard · 2012-07-14T21:51:36.494Z · LW(p) · GW(p)

I did like the test problem in the comments:

Take a preserved cell phone, slice it into very thin slices, scan the slices, and build a computer simulation of the entire phone.

Question: what is the name, number, and avatar of the third entry in the address book?

Now, how would you approach that one? Assume a known model of phone.

Replies from: ciphergoth, timtyler, siodine, None, jsalvatier
comment by Paul Crowley (ciphergoth) · 2012-07-16T08:17:30.769Z · LW(p) · GW(p)

Looks like flash memory stores information using varying levels of charge; that would be quite painful to read out with a destructive scan. Happily that's unlikely to be the case with the brain's long-term storage, since AIUI it doesn't contain any sufficiently good insulators.

comment by timtyler · 2012-07-14T22:03:42.624Z · LW(p) · GW(p)

Now, how would you approach that one?

Step 1 is to construct a superintelligent machine...

comment by siodine · 2012-07-16T17:57:46.089Z · LW(p) · GW(p)

Freeze the volatile memory -- this preserves its state (you can retrieve passwords from shut-down laptops this way; an upside-down can of computer cleaner will work). Slice it up and scan it (this assumes it wasn't significantly damaged while slicing; some damage is acceptable because what was there can be inferred -- this is a method in data recovery. Also, you wouldn't really slice it up; probably the same with a brain). With the scan you should be able to build a 3D representation of the memory in pixels (carrying more information than just RGBA). Now you use some kind of pattern recognition to map patterns of pixels to physical representations (e.g., take a Quake map and look for pixel patterns that match a jump pad).

Now, if you understood how the memory and cellphone software works, you could just get the state into a binary form acceptable for a cell phone emulator. But, because we don't understand how it works, we'll need to simulate reality to a sufficient level. I.e., we need an empty emulated universe with physical laws that correspond to our own, so that we can interpret pixels into their physical correspondents. So, when we pattern match a bunch of pixels into a memory cell with a certain state, we can then drop that interpretation into the emulated world.

For the emulated world to be sufficient for emulating the cell phone, I don't think you would need atoms or electrons (or anything below that level). You could probably emulate the components at the level of electricity, silicon, wire, gold, etc., because we can explain and predict the phenomena a phone produces at that level without going further. E.g., we just need to know what an electric current does, not what its electrons are doing, to turn on an emulated light bulb.
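A rough sketch of the readout step I have in mind, under the big assumption that the scan yields a per-cell charge estimate; the numbers and thresholds here are made up, real flash cells don't look like this:

```python
# Turn hypothetical per-cell charge measurements into a memory image for an emulator.

measured_charges = [0.02, 0.91, 0.87, 0.05, 0.93, 0.11, 0.08, 0.89]  # toy scan output

def charges_to_bits(charges, threshold=0.5):
    """Map each measured charge level to a logical bit."""
    return [1 if c > threshold else 0 for c in charges]

def bits_to_bytes(bits):
    """Pack recovered bits into bytes for the emulator's memory image."""
    out = []
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        out.append(sum(b << (7 - j) for j, b in enumerate(chunk)))
    return bytes(out)

memory_image = bits_to_bytes(charges_to_bits(measured_charges))
print(memory_image)   # this image would then be handed to the phone emulator
```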

(This was my internal monologue as I went through this problem. It's not researched, and is intended to be taken more as bar talk than anything very serious.)

comment by [deleted] · 2012-07-14T22:50:44.896Z · LW(p) · GW(p)

That seems feasible if you knew both the model and the operating system, and had a scan showing very precise relative temperatures. You could then match the state of the simulated phone to a long but finite list of the possible states of the phone given the operating system. But I'm not a doctor.

Replies from: Lachann, jsteinhardt
comment by Lachann · 2012-07-15T00:49:55.476Z · LW(p) · GW(p)

It's possible to directly read the state of transistors in the phone's memory via scanning capacitance microscopy (http://www.multiprobe.com/technology/technologyassets/S05_1_direct_measurements_of_charge_in_floating_gate.pdf), so you can reconstruct the actual contents of the memory. Probably the greater challenge would be figuring out how to cut the phone into slices without damaging the memory.

comment by jsteinhardt · 2012-07-16T06:21:00.776Z · LW(p) · GW(p)

Assume there are 20 apps on the phone, and each app can be in 5 states. Then this list is already 5^20 (or about 10^14) entries long. This doesn't include stored data such as the address book entries (the number of possible names for the first entry of the address book alone is already something like 26^20, as a conservative estimate).
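A quick check of those numbers:

```python
# 20 apps, 5 states each
apps_states = 5 ** 20
print(apps_states)     # 95367431640625, i.e. ~10^14

# 20-letter names drawn from a 26-letter alphabet
name_space = 26 ** 20
print(name_space)      # ~2 * 10^28 possibilities for the first entry alone
```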

comment by lsparrish · 2012-07-14T23:47:50.153Z · LW(p) · GW(p)

PZ's comment regarding the implausibility of speeding up an emulated brain was a real head-scratcher to me, and Andrew G calls him on it in the comments. Apparently (judging from his further comments) what he really meant was that you have to simulate or emulate a good environment, physiology, and endocrine system as well, or otherwise the brain would go insane.

Of course, we already knew that...

Replies from: ChrisHallquist, C9AEA3E1
comment by ChrisHallquist · 2012-07-16T03:10:48.211Z · LW(p) · GW(p)

Right on.

I'm the blogger PZ was responding to in his post, and I specifically recommended PZ read Sandberg and Bostrom's Whole Brain Emulation: A Roadmap.

That's what PZ is claiming to have read when he writes "I read the paper he recommended," but PZ doesn't seem to have read it very carefully, in particular missing out on the sections "simulation scales" (pp. 13-14), "Body Simulation," and "Environment Simulation" (pp. 74-78). I've written a post explaining PZ's apparent confusions in greater detail at my blog.

Replies from: arundelo
comment by arundelo · 2012-07-16T05:08:26.523Z · LW(p) · GW(p)

a post [...] at my blog.

Copy-and-paste error with the link; I think you meant to give this one.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2012-07-16T06:50:38.149Z · LW(p) · GW(p)

Thanks, fix'd.

comment by C9AEA3E1 · 2012-07-15T03:06:10.751Z · LW(p) · GW(p)

Seems similar enough to "Every part of your brain assumes that all the other surrounding parts work a certain way. The present brain is the Environment of Evolutionary Adaptedness for every individual piece of the present brain.

Start modifying the pieces in ways that seem like "good ideas"—making the frontal cortex larger, for example—and you start operating outside the ancestral box of parameter ranges. And then everything goes to hell.

So you'll forgive me if I am somewhat annoyed with people who run around saying, "I'd like to be a hundred times as smart!" as if it were as simple as scaling up a hundred times instead of requiring a whole new cognitive architecture."

Eliezer Yudkowsky, Growing Up is Hard

Replies from: CarlShulman
comment by CarlShulman · 2012-07-15T15:35:42.202Z · LW(p) · GW(p)

Well, OTOH, he also complains that messing around by trial and error is likely to cause unpredictable side effects, like nasty insanity, some of which may be too subtle to notice at first, or just tolerated.

comment by Risto_Saarelma · 2012-07-15T02:25:12.420Z · LW(p) · GW(p)

Can Myers engage with stuff he might be wrong about on the Pharyngula blog? He seems to mostly focus on spotting creationists and similar obviously wrong crackpots, hitting them with the biggest hammer in easy reach and never backing down. Taking the same approach to stuff nobody understands very well yet might not be productive.

Replies from: David_Gerard, billswift
comment by David_Gerard · 2012-07-19T22:45:07.563Z · LW(p) · GW(p)

stuff nobody understands very well yet

He works with brain preservation every day. When he says "this is impossible", he's not being uber-sceptic - he's speaking with annoyance at something he'd love to be able to do and that would make his work a lot easier, but that he has excellent reason to consider practically impossible.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-07-20T05:51:59.202Z · LW(p) · GW(p)

No complaints about that part, but then he went off on the weird argument about how increasing the emulation speed is an incoherent idea, and seems to be sticking to his guns in the comments despite several people pointing out that you don't need to do a quantum-level simulation of an entire universe to provide a sped-up virtual sensory reality for the sped-up emulated brain in a box.

That's the stuff some people do understand but PZ either doesn't or can't back down on since he's writing a blog where he must not lose face by admitting mistakes or the creationists win.

The stuff nobody understands is why we can't even build a robot flatworm by emulating the 100-odd neuron flatworm brain, which would be nice to know before we start getting into detailed arguments about the practical requirements of human uploads. Proper understanding of this part might also reveal shortcuts which we can use to loosen the scanning and emulation requirements and still end up with functional uploads.

comment by billswift · 2012-07-15T11:22:20.094Z · LW(p) · GW(p)

I used to read Panda's Thumb regularly, many years ago, and have read occasional pieces by him more recently. PZ Myers might be competent at whatever field he specializes in, but as a general thinker he is best ignored.

comment by Dr_Manhattan · 2012-07-16T12:42:33.901Z · LW(p) · GW(p)

It also seems like a pretty serious argument against cryonics, no?

comment by HBDfan · 2012-07-16T11:13:03.852Z · LW(p) · GW(p)

What can we do about reactions like this?

Replies from: Richard_Kennaway, Bugmaster
comment by Richard_Kennaway · 2012-07-17T12:07:02.949Z · LW(p) · GW(p)

The dogs bark. The caravan moves on.

comment by Bugmaster · 2012-07-17T01:12:38.749Z · LW(p) · GW(p)

Putting smileys after jokes such as "Step 1 is to construct a superintelligent machine..." would be a good start. Seems like people are taking such statements seriously -- not surprising, really.

comment by brilee · 2012-07-14T18:48:20.785Z · LW(p) · GW(p)

From the comments, PZ elaborates: "Andrew G: No, you don’t understand. Part of this magical “scan” has to include vast amounts of data on the physics of the entity…pieces which will interact in complex ways with each other and the environment. Unless you’re also planning to build a vastly sped up model of the whole universe, you’re going to have a simulation of brain running very fast in a sensory deprivation tank.

Or do you really think you can understand how the brain works in complete isolation from physiology, endocrinology, and sensation?"

Seems like PZ is dismissing the feasibility of computation by assuming that computation has to be perfectly literal. To make a chemistry analogy here, one does not have to model the quantum mechanics and the dynamics of every single molecule in a beaker of water in order to simulate the kinetics of a reaction in water. One does not need to replicate the chemical entirety of the neuron in silico; one merely needs to replicate the neuron's stimulus-response patterns.
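To make the chemistry analogy concrete, here's a minimal sketch of modeling first-order kinetics with a single rate constant instead of tracking molecules; all the values are made up for the example:

```python
# Model the reaction A -> B with the rate equation d[A]/dt = -k[A],
# i.e. one fitted parameter instead of per-molecule dynamics.

k = 0.3        # rate constant, 1/s (invented)
A = 1.0        # initial concentration of A, mol/L
dt = 0.01      # integration time step, s

for step in range(1000):   # integrate for 10 seconds with simple Euler steps
    A += -k * A * dt

print(round(A, 4))          # ~0.0496; the exact solution exp(-k*t) = exp(-3) is ~0.0498
```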

Replies from: brilee
comment by brilee · 2012-07-14T18:54:24.844Z · LW(p) · GW(p)

Oops, didn't see a further comment below: In response to a comment, " I still don’t understand why biologists insist that you have to do a perfect simulation, down to the smallest molecule, and then state the obvious fact that it’s not going to happen.", PZ says this:

"Errm, because that’s what the singularitarians we’re critiquing are proposing? This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.

An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.

But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule."

Still seems like a straw man.

Replies from: David_Gerard
comment by David_Gerard · 2012-07-14T19:30:32.008Z · LW(p) · GW(p)

Still seems like a straw man.

Erm, please clarify how.

Replies from: gwern, brilee, CarlShulman
comment by gwern · 2012-07-14T19:41:50.773Z · LW(p) · GW(p)

Well, there are many different possible levels of brain emulation (just like in emulating video game consoles), all of which have different demands and feasibilities. The Whole Brain Emulation roadmap discusses several.

No one denies that details of every molecule would be a very brute force and difficult emulation and as far as that goes, he's not strawmanning; but to think that this is the only kind of emulation and dismiss emulation in general on the basis of the specific, that is a straw man.

comment by brilee · 2012-07-14T21:28:29.695Z · LW(p) · GW(p)

In the first quote, he sets up the straw man as gwern describes it. In the second quote, he defends his first straw man by saying "but that's what singularitarians believe", essentially putting up a second straw man to defend the first.

comment by CarlShulman · 2012-07-14T19:42:42.989Z · LW(p) · GW(p)

The quote jumps between models of large brain regions to molecule by molecule analysis, leaving out the intermediate of creating models of neurons. Thus all the talk in the roadmap about predictive models.

comment by Bruno_Coelho · 2012-07-15T06:54:35.402Z · LW(p) · GW(p)

A recent talk by S. Seung at Oxford (http://www.youtube.com/watch?v=ZBpy29IPO8c) shows how large the material problem of building a human connectome is. The AIs are not enough to trace the paths of individual neurons; people have to correct the errors, becoming gamers.

Replies from: David_Gerard
comment by David_Gerard · 2012-07-18T12:47:40.773Z · LW(p) · GW(p)

It is unclear why this apposite technical reference got downvotes.

comment by buybuydandavis · 2012-07-15T01:41:23.735Z · LW(p) · GW(p)

But reverse-engineering something that is the product of almost 4 billion years of evolution, that has been tweaked and finessed in complex and incomprehensible ways, and that is dependent on activity at a sub-cellular level, by hacking it apart and taking pictures of it? Total bollocks.

I agree with the last sentence.

While it is possible that key aspects of intelligence can't be modeled without an extremely low level of detail of brain function, it's also possible that many of those details are not needed. I think the latter is likely. My guess is that if neurons were that chaotically fiddly on a functional level, we wouldn't work at all in the first place.

Replies from: Dolores1984
comment by Dolores1984 · 2012-07-15T19:53:33.009Z · LW(p) · GW(p)

My hypothesis is that there are a finite number of classes of neurons, glial cells, and synaptic junctions, each falling into fairly tight behavioral groupings. In which case, you need only prod enough neurons in petri dishes to develop good statistical models of each type of neuron, glia, and synapse you're modelling. I suspect, but can't prove right now, that only the broad probabilistic behavior of each functional element would be meaningful on the scales we care about.

The reason I believe that is exactly what you said -- it's too noisy. Human brains are way too robust to be extremely sensitive to sub-cellular changes. If you want sub-cellular changes to make a difference (say, in the case of drugs) you have to affect billions of neurons.

EDIT: Actually, you can pretty cleanly rebut his argument about how hard it is to preserve the fish's neural tissue in what he considers to be 'sufficient detail.' If brains really were that sensitive to sub-cellular shifts in neuronal state, there's no way it would be possible for someone to recover from being clinically dead for a few seconds, much less the hours or days that have been observed in cold conditions.
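Going back to the first paragraph, here's a minimal sketch of what "good statistical models of each type" could mean; every class, parameter, and number here is hypothetical:

```python
# Each neuron class gets a small set of fitted parameters rather than a molecular description.
import random

NEURON_CLASSES = {
    # class name: (mean firing threshold, threshold spread) -- invented values
    "pyramidal":   (1.0, 0.05),
    "interneuron": (0.7, 0.03),
}

def fires(neuron_class, input_current):
    """Probabilistic spike decision drawn from the class-level statistics."""
    mean, spread = NEURON_CLASSES[neuron_class]
    threshold = random.gauss(mean, spread)
    return input_current > threshold

print(fires("pyramidal", 1.1))   # almost always True, occasionally False: broad-brush statistical behavior
```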

comment by komponisto · 2012-07-14T18:36:21.186Z · LW(p) · GW(p)

PZ Meyers

Please spell PZ Myers' name correctly.

Replies from: peter_hurford
comment by DanArmak · 2012-07-14T19:59:20.873Z · LW(p) · GW(p)

To our best knowledge, what are the hard limits on 'compressing' physical systems? That is, given some bit of physics, what are the limits on building a simulator using less space/time/energy/bits/... than the original, and still having a similarly sized phase space? I expect physics is in general incompressible, but perhaps we can use some physical phenomena that don't ordinarily play a part in the everyday systems we want to simulate?

I've seen people discuss what level of emulation is necessary for WBE. Supposing outright simulation is needed, how much bigger/more complex/more expensive might a robust simulator have to be compared to a regular brain?

Replies from: timtyler
comment by timtyler · 2012-07-14T22:04:56.319Z · LW(p) · GW(p)

I expect physics is in general incompressible

Why would "physics" be incompressible? Most of the universe is empty space, no?

Replies from: DanArmak
comment by DanArmak · 2012-07-14T22:46:44.759Z · LW(p) · GW(p)

I don't know, I'm not a physicist. Don't they have vacuum energy and virtual particles and other stuff that makes even empty space full of information? ETA: what's empty space? A near-zero value of all relevant fields? But if fields can be measured to the same precision regardless of magnitude (?) then don't you get the same amount of information unless the fields are actually a constant zero? I don't understand physics, this may well be completely wrong.

Anyway, I expect the lack of phenomena important to brains in empty space (no ordinary matter and energy, atoms, chemistry) allows the compression of that. But can you simulate a typical physical system using significantly less matter or energy? (Or time?) Can you simulate the human brain or body?

Replies from: timtyler
comment by timtyler · 2012-07-15T13:25:31.132Z · LW(p) · GW(p)

I don't know, I'm not a physicist. Don't they have vacuum energy and virtual particles and other stuff that makes even empty space full of information?

Not so much as near black holes. Just look at their respective entropies.

FWIW, I expect that the human brain will prove to be highly compressible with advanced molecular nanotechnology.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-15T14:43:03.873Z · LW(p) · GW(p)

Do you mean the compressibility of a single human brain in isolation, or the compressibility of an individual human brain given that at least one other human brain has already been stored (or is expected to be available during restoration), or both? I expect the data storage requirements of the latter to be orders of magnitude smaller than the former.

Replies from: timtyler
comment by timtyler · 2012-07-15T14:55:09.957Z · LW(p) · GW(p)

I was talking about the compressibility of a single human brain in isolation.

comment by jmmcd · 2012-07-14T19:15:59.942Z · LW(p) · GW(p)

The link goes to lesswrong.com!

Replies from: peter_hurford
comment by private_messaging · 2012-07-15T20:12:25.697Z · LW(p) · GW(p)

I always thought it's more likely to happen via an alternative approach: figure out how the plug-and-play nature of the brain works (one part of the brain can substitute for a damaged part, so it must be built out of small plug-and-play-ish units (cortical columns?)), then perhaps connect the brain to the hardware running that simulated network, have the function 'expand' into there, get smarter, and figure out how to scan or recreate the rest. Still an enormous problem, of course, but there's a better way to copy data from one computer to another than shaving the plastic off the flash memory chips and then using a scanning electron microscope to read off the data.

comment by timtyler · 2012-07-15T14:56:17.776Z · LW(p) · GW(p)

P. Z. isn't an expert on intelligent machines.

Replies from: Dolores1984
comment by Dolores1984 · 2012-07-15T19:55:17.151Z · LW(p) · GW(p)

I would point out that neither are most of us.

Replies from: timtyler
comment by timtyler · 2012-07-16T00:38:41.867Z · LW(p) · GW(p)

Clarke's third law has wisdom here:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Replies from: Dolores1984
comment by Dolores1984 · 2012-07-16T03:14:15.007Z · LW(p) · GW(p)

It's a useful heuristic, but I try to be very cautious about selecting heuristics that confirm what I would like to believe.

Replies from: timtyler
comment by timtyler · 2012-07-17T01:13:21.075Z · LW(p) · GW(p)

Intelligent machines are likely to one day be able to upload humans - but the chance of any individual modern human getting uploaded are probably pretty slender. So, you probably shouldn't feel as though you have a personal stake.

comment by HBDfan · 2012-07-15T11:39:56.403Z · LW(p) · GW(p)

It looks as if he is just another smart guy who is no wiser outside the laboratory.

edit: I am wrong and withdraw this comment.

Replies from: Nick_Tarleton, David_Gerard, Manfred
comment by Nick_Tarleton · 2012-07-15T20:13:53.695Z · LW(p) · GW(p)

Downvoted for being a content-free status move (against someone who's disagreeing with local dogma, but not saying things that are that silly).

comment by David_Gerard · 2012-07-19T22:43:12.014Z · LW(p) · GW(p)

This is not only unfair, but misses an important point: Myers works with preserving brains every day. He would love it if brains could be preserved. When he explains in detail why they can't, he's not doing it with triumphalism at being more sceptical than the singularitarians; he's doing it with annoyance that we can't do this thing he'd really, really love to be able to do.

Replies from: HBDfan
comment by HBDfan · 2012-07-19T23:18:48.389Z · LW(p) · GW(p)

You are right, I am in error.

comment by Manfred · 2012-07-15T16:56:43.296Z · LW(p) · GW(p)

Well, or maybe not.