6502 simulated - mind uploading for microprocessors
post by humpolec · 2011-01-08T18:03:10.340Z · LW · GW · Legacy · 14 comments
Possibly offtopic, but a neat project with interesting analogy to mind uploading:
Some people managed to scan a MOS 6502 microprocessor (the CPU of the Apple II, C64, and NES) under a microscope and simulate it at the level of individual transistors. This neatly circumvents all the usual problems of inaccurate emulation, undocumented opcodes, etc., and even let them run actual Atari 2600 games without having to know anything about the 6502's inner workings.
Presentation slides about the project are here.
14 comments
comment by sketerpot · 2011-01-08T20:27:51.503Z · LW(p) · GW(p)
Veering even more off-topic, does anybody know if there's an efficient way to transform this transistor-level design into logic gates? If we could do that, for the entire chip, then we could synthesize an FPGA design that's exactly compatible with the original 6502, which would be pretty neat. Useless, but neat.
I've been trying to think of ways to do this, but none of my ideas sound particularly great. For example, you could probably turn a lot of the chip into logic gates by making the transistor graphs for common logic gates and using some fast subgraph isomorphism algorithm to look for those in the 6502. Of course, this won't do everything, but it would convert at least the PLA portion of the chip. Or you could look for chunks of the transistor graph with a lot of internal transistors for every external connection, and build truth tables for them, which can be converted into logic gates.
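The subgraph-matching idea can be sketched with networkx's VF2 matcher. Everything here is an illustrative assumption, not the actual 6502 netlist: the netlist is encoded as a graph whose nodes are transistors and nets, and the pattern is a generic NMOS inverter (one load, one pull-down). A real flow would also pin the power nets and handle pass-transistor structures.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def nmos_inverter_pattern():
    """Hypothetical netlist encoding: transistor and net nodes,
    with an edge from a transistor to each net on its terminals."""
    g = nx.Graph()
    g.add_node("t_load", kind="transistor")   # depletion load / pull-up
    g.add_node("t_pull", kind="transistor")   # pull-down switch
    for net in ("vcc", "gnd", "in", "out"):
        g.add_node(net, kind="net")
    g.add_edges_from([("t_load", "vcc"), ("t_load", "out"),
                      ("t_pull", "out"), ("t_pull", "gnd"),
                      ("t_pull", "in")])
    return g

def find_gates(chip_graph, pattern):
    """Yield mappings from chip_graph nodes onto the gate pattern,
    using VF2 subgraph isomorphism with node kinds required to agree."""
    matcher = isomorphism.GraphMatcher(
        chip_graph, pattern,
        node_match=lambda a, b: a.get("kind") == b.get("kind"))
    return matcher.subgraph_isomorphisms_iter()
```

VF2 is worst-case exponential, but with node-kind constraints and the small, fixed pattern sizes of standard gates it tends to be fast in practice on sparse netlists.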
Or there's the worse-is-better idea: identify the parts of the chip that can hold state -- registers, latches, whatever -- and then assume that everything else is combinational logic, and build a truly enormous truth table for it. I'm pretty sure this is NP-complete.
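For a chunk small enough to enumerate, the truth-table idea might look like the following sketch. The block is treated as a black box (here just a Python function standing in for a simulated cone of logic), and the sum-of-products output is naive and unminimized; a real flow would feed the table to a logic minimizer.

```python
from itertools import product

def truth_table(block, n_inputs):
    """Exhaustively drive an n-input combinational block and record its
    output. Only feasible for small cones: 2**n_inputs evaluations."""
    return {bits: block(bits) for bits in product((0, 1), repeat=n_inputs)}

def sum_of_products(table, names):
    """Render the table as a naive sum-of-products expression:
    one AND term per input combination that yields 1."""
    terms = []
    for bits, out in table.items():
        if out:
            terms.append(" & ".join(n if b else "~" + n
                                    for n, b in zip(names, bits)))
    return " | ".join("(" + t + ")" for t in terms) or "0"
```

For example, an XOR chunk gives `truth_table(lambda b: b[0] ^ b[1], 2)` and renders as `(~a & b) | (a & ~b)`.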
Does anybody have ideas for this that don't suck? Because I would love to have a perfect Verilog/VHDL model of the 6502.
Replies from: EdS
comment by timtyler · 2011-01-09T10:45:43.486Z · LW(p) · GW(p)
Note how late it happened, and how useless the results are, though. Practically nobody uses IT systems that are scanned and emulated versions of the previous generation of technology.
Replies from: EdS
↑ comment by EdS · 2011-01-10T16:50:56.716Z · LW(p) · GW(p)
Yes, late, and yes, slow. But it's what you have to do when you don't understand the thing you wish to duplicate. Making a brain is one thing, making a specific brain is another.
Replies from: timtyler
↑ comment by timtyler · 2011-01-10T20:49:42.207Z · LW(p) · GW(p)
Making a brain is what we want to do to solve our resource problems, and help us to focus on the things we care about.
We didn't understand birds - but we didn't duplicate them either. I would draw an analogy between duplicating a specific brain and duplicating a specific bird. We learned how to fly a while back - but we still don't know where to begin on the project of making a specific bird.
Replies from: EdS
↑ comment by EdS · 2011-01-16T19:02:50.312Z · LW(p) · GW(p)
Hmm, learning to fly without replicating a specific bird is analogous to the problem of general AI. This discussion thread started with a claimed analogy between chip simulation and mind uploading, which is more the problem of replicating a specific bird. If I claimed to be able to upload your mind, then proceeded to scan or mince your brain, and then showed your relatives a general AI, they would be unimpressed.
Replies from: timtyler
↑ comment by timtyler · 2011-01-16T19:34:51.012Z · LW(p) · GW(p)
Sure. On the other hand if you show manufacturers, engineers or governments a general AI then some major changes happen - and those are the folk who are most likely to cough up for the required R&D.
Possibly those changes might ultimately include the human brain being scanned and emulated - but chronological order seems as though it may be significant here.
comment by ewang · 2011-01-09T23:53:22.215Z · LW(p) · GW(p)
Even if scanning leads to a working simulation of a brain, it will be several more years before hardware powerful enough to run it at "brainspeed" becomes available.
Replies from: EdS, genix
↑ comment by EdS · 2011-01-10T16:54:47.702Z · LW(p) · GW(p)
Indeed. More interesting, perhaps, is that destructive scanning will become viable long before non-destructive scanning. Also note: a slow-running simulation which turns out to be in agony doesn't have to suffer for much subjective time. Presuming the 'owners' care about that.
comment by EdS · 2011-01-09T08:14:45.524Z · LW(p) · GW(p)
There is an analogy here: the visual6502 simulator just simulates transistors, with an adequate but imprecise model. It loads a description of a chip - presently the 6502 - and then acts out the behaviour of that chip. Other 6502 models out there were written by understanding how the CPU works - we only had to understand how transistors work. Michael Steil's presentation at 27C3 includes a graph claiming orders of magnitude less work for the same fidelity.
To upload a mind into a computer without having to understand how minds and brains work, one might similarly model at the neuron level and then upload a description of the neuron characteristics and connectivity.
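As a toy illustration of what "just simulating transistors" means: the sketch below is in the spirit of visual6502, not its actual code. NMOS transistors are modeled as switches, pull-up nets are relaxed to a fixed point, and only direct transistor paths to ground are considered (a real switch-level simulator also traces multi-transistor series paths and charge sharing).

```python
def simulate(transistors, pullups, inputs, steps=50):
    """Toy switch-level relaxation.
    transistors: iterable of (gate, c1, c2) net-name triples.
    pullups: nets that float high unless conducted to a low net.
    inputs: externally driven net values (net name -> bool).
    Returns the relaxed net -> bool state."""
    state = {"vcc": True, "gnd": False}
    state.update(inputs)
    for net in pullups:
        state.setdefault(net, True)
    for _ in range(steps):
        new = dict(state)
        for net in pullups:
            # A pull-up net goes low iff some transistor whose gate is
            # high connects it directly to a net currently held low.
            grounded = any(
                state.get(gate, False) and
                ((c1 == net and state.get(c2) is False) or
                 (c2 == net and state.get(c1) is False))
                for gate, c1, c2 in transistors)
            new[net] = not grounded
        if new == state:     # fixed point reached
            break
        state = new
    return state
```

A single transistor with its gate on "in" between "out" and "gnd", with "out" as a pull-up, behaves as an inverter; chaining two such stages yields a buffer after the relaxation loop settles.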
Replies from: jsalvatier
↑ comment by jsalvatier · 2011-01-09T22:27:56.776Z · LW(p) · GW(p)
Keep in mind that there may be more to a mind than neurons (different kinds of cells, hormones etc).
Replies from: EdS