Comments

Comment by GeraldMonroe on Monty Hall Sleeping Beauty · 2015-10-04T01:26:44.445Z · LW · GW

Care to elaborate?

You just woke up. You don't know whether the coin came up heads or tails, and you have no further information. You knew it was 50-50 before going to sleep. No new information, no new answer. I don't see what the "twist" is. In Monty Hall, there's an additional information input: the door the host opens never has the prize behind it.

Or, another perspective: a perfect erasure of someone's memories and restoration of their body to its pre-event state is exactly the same as if the event in question never occurred. So delete the 1 million from consideration. It's just one interview post-waking. Heads or tails?
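
A quick Monte Carlo sketch of the two ways of counting (heads means one interview, tails means N, where N stands in for the post's million; the exact setup is my assumption):

    import random

    TRIALS = 100_000
    N = 10  # interviews on tails; the post's version uses a huge number

    heads_runs = 0
    awakenings_on_heads, awakenings_total = 0, 0
    for _ in range(TRIALS):
        heads = random.random() < 0.5
        if heads:
            heads_runs += 1
            awakenings_on_heads += 1
            awakenings_total += 1
        else:
            awakenings_total += N

    # Counting once per experiment (the framing above): always ~0.5,
    # no matter how large N is.
    print(heads_runs / TRIALS)
    # Counting once per awakening (the framing the "twist" relies on): ~1/(1+N).
    print(awakenings_on_heads / awakenings_total)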

Comment by GeraldMonroe on Are Cognitive Biases Design Flaws? · 2015-02-25T21:11:45.130Z · LW · GW

The designer had a specific design goal: "thou shalt replicate adequately well under the following environmental conditions"...

Given the complex, intricate mechanisms that humans seem to have that allow for this, the "designer" did a pretty good job.

Cognitive biases boost replication under the environmental conditions they evolved for, and they save on the brainpower required.

So yes, I agree with you. If the human brain system were an engineered product, it clearly meets all of the system requirements the client (mother nature) asked for. It clearly passes the testing. The fact that it internally takes a lot of shortcuts and isn't capable of optimal performance in some alien environment (cities, virtual spaces, tribes larger than a few hundred people) doesn't make it a bad solution.

Another key factor you need to understand in order to appreciate nature is the constraints it operates under. We can imagine a self-replicating system with intelligence of comparable complexity and flexibility to humans that makes decisions that are optimal to a few decimal places. But does such a system exist inside the design space accessible to Earth biology? Probably not.

The simple reason for this is 3 billion years of version lock-in. All life on Earth uses a particular code-space, where every possible codon in DNA maps to a specific meaning. With 3 bases per codon, there are 4^3 = 64 possibilities, and all of them are already assigned: 61 code for amino acids and 3 are stop signals. In order for a new amino acid to be added to the code base, existing codons would have to be repurposed, or an organism's entire architecture would need to be extended to a four-base (or longer) codon system.
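
The saturation of the code is easy to make concrete:

    from itertools import product

    codons = [''.join(p) for p in product('ACGT', repeat=3)]
    print(len(codons))  # 64 - every triplet of the 4 DNA bases
    # In the standard genetic code all 64 are already spoken for:
    # 61 specify the 20 amino acids (with heavy redundancy) and 3 are
    # stop signals, so there is no free codon to assign to anything new.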

We can easily design a state machine that translates XXX -> _XXX, remapping an organism's code to a new coding scheme. However, such a machine would be incredibly complicated at the biological level - it would be a huge complex of proteins and various RNA tools, and it would only be needed once in a particular organism's history. Evolution is under no selective pressure to evolve such a machine, and the probability of it arising by chance is just too small.

To summarize, everything that can ever be designed by evolution has to be made of amino acids from a particular set, or created as a derivative product by machinery made of these amino acids.

An organism without cognitive biases would probably need a much more powerful brain. Nature cannot build such a brain with the parts available.

Comment by GeraldMonroe on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T20:04:28.307Z · LW · GW

I agree that Voldemort seems to be holding the idiot ball this chapter. With that said, you'd kind of expect an immortal god-wizard who's 10 steps ahead to be buffed with poison and magic protections up the wazoo, etc.

Comment by GeraldMonroe on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T19:05:13.736Z · LW · GW

Why does he need to create a Horcrux immediately? Theory:

Each horcrux, even with Voldemort's modified ritual, is just a snapshot of Voldemort's mind-state at the instant the horcrux was made. What Voldemort has somehow managed using magic is to interconnect all these Horcruxes into a network, and build some kind of non-biological system to "run" his personality based on the snapshots.

This explains why he was able to observe the stars and think about his mistakes for 8 years until someone touched one of his horcruxes. Somehow, his network sends only the presentMindState to the body it hijacks.

This is how I previously assumed Harry might defeat him. Since magic thinks he's got the same identifiers as Voldemort, if Harry is killed a moment after Voldemort is, his mind should become the "top" element in the stack of memory states that is Voldemort's horcrux network. I would assume that he would have access to all of the knowledge in the lower states, but as the top, canonical state, he would have control when Voldemort respawns.

As it is, I'm guessing that when Voldemort is killed by magical resonance, the respawned Voldemort will lack any mind-state changes made since his last horcrux.

Comment by GeraldMonroe on Non-standard cryo ideas · 2013-11-10T17:33:12.519Z · LW · GW

One thing I would like to see mentioned is why these methods might work.

Assume the best possible scanning method is used, such that the future reanimators have a map of where every atom was bonded in your brain.

There's going to be frost damage, even if cryoprotectant is used - there will be areas it didn't reach, cracks from low temperature stresses, oxidation damage from time spent in the cryostat, and so on.

Future software could computationally reverse many of these damaging events - but there will be uncertainty, in that multiple solutions would be possible for the "original" state. A video of the freezing process would let you better calibrate the model used to computationally reverse the damage.

Furthermore, this level of technology means it is probable that the reanimators would be able to "read" memories at some level of fidelity. If there are surviving notes about your life, they could potentially resolve ambiguities when there are multiple possible past memory states.

One thing that bothers me about this proposal is that the "reanimators" would have to be beings smarter than you ever were, and they would probably need to use more computational capacity to revive just one person than that person performed in their entire lifetime.

Comment by GeraldMonroe on Military Rationalities and Irrationalities · 2013-09-10T18:25:14.132Z · LW · GW

The reason why you had limited instruction in shooting a weapon was probably a related problem I observed.

The military spends lavish sums on expensive capital equipment and human resources, but it seems to pinch pennies on the small stuff. For example, I recall being assigned numerous times to various cleanup details and noticing we never had any shortage of manpower - often 10+ people - but there would be an acute shortage of mops, cleaning rags, and chemicals.

Similarly, we all had rifles, but live ammunition to train with was in very short supply. I would mentally compute how backwards this was. It costs the government several hundred dollars in pay and benefits to have each one of us standing around for a day, yet they were pinching pennies on ammo that cost maybe 10 cents a round.
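
Using the comment's own rough figures (the head count and pay rate are illustrative assumptions):

    soldiers = 10                 # a typical detail from the anecdote
    pay_per_soldier_day = 300.0   # "several hundred dollars" - assumed figure
    ammo_per_round = 0.10         # "maybe 10 cents a round"

    labor_cost = soldiers * pay_per_soldier_day
    print(labor_cost / ammo_per_round)  # 30,000: rounds one idle day could buy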

I don't know what causes these backwards situations, where you would be drowning in expensive equipment and people yet critically short of cheap, basic supplies, but I've seen many references to the problem.

Comment by GeraldMonroe on Artificial explosion of the Sun: a new x-risk? · 2013-09-05T18:42:22.772Z · LW · GW

Your statement would be a safe bet based on the past 50 years. 50 years ago, in 1963, was 4 years before the Saturn V first launched. Using modern figures of $3.3 billion/launch, including R&D costs, that comes to approximately $28,000 per kg to low earth orbit. The same math says that the Space Shuttle cost about $61,000 per kg.

(I'm lumping in the total cost of the entire program in both cases, divided by the number of launches. There are problems with this method, but it means that costs can't be hidden by accounting tricks as easily.)
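
How those per-kg figures fall out of the stated method (payload and program-cost numbers below are approximate public values I've assumed, not figures from the comment):

    # Program cost per launch divided by payload to LEO, as described above.
    saturn_v_per_launch = 3.3e9        # dollars, incl. amortized R&D
    saturn_v_leo_kg = 118_000          # ~118 t to low earth orbit (approx.)
    print(saturn_v_per_launch / saturn_v_leo_kg)               # ~$28,000/kg

    shuttle_program = 196e9            # rough lifetime program cost
    shuttle_flights = 135
    shuttle_leo_kg = 24_400            # ~24.4 t to low earth orbit (approx.)
    print(shuttle_program / shuttle_flights / shuttle_leo_kg)  # ~$60,000/kg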

With that said, there are scads of methods that would lower this cost, at least for unmanned payloads, and there is also the realistic possibility that automated manufacturing could build the rockets for a fraction of what they currently cost. There are videos taken at the SpaceX plant showing automated lathes, and direct metal 3d printers can apparently make parts that meet spec. It seems at least possible that over the next 50 years the entire end-to-end process could be automated to need minimal human labor.

Comment by GeraldMonroe on New Monthly Thread: Bragging · 2013-08-14T18:35:57.999Z · LW · GW

Why is there a solar wind, then?

Comment by GeraldMonroe on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90 · 2013-07-02T23:22:58.011Z · LW · GW

Prediction: Harry has stolen a march on Quirrelmort. I predict that between the time Professor McGonagall unlocked his Time-Turner and Quirrelmort entered the room, he had already used the device to visit the library's restricted section.

At least, I hope so: I really want to learn how "spell creation" is done, per EY's interpretation. That will tell us a lot about what magic actually is and what can be done to achieve Real Ultimate Power.

Furthermore, this would be fully rational. Harry's analysis of what to do next should have already made it abundantly clear that he needs to obtain more information, and the restricted section obviously has stuff that might be helpful. And why start on a task now when you can start on it 6 hours ago?

Comment by GeraldMonroe on How probable is Molecular Nanotech? · 2013-06-30T07:09:04.190Z · LW · GW

Why do we have to solve it? In his latest book, Drexler states that he calculates you can get the thermal noise down to 1/10 the diameter of a carbon atom or less if you use stiff enough components.

Furthermore, you can solve it empirically. Just build a piece of machinery that tries to accomplish a given task, and measure its success rate. Systematically tweak the design and measure the performance of each variant. Eventually, you find a design that meets spec. That's how chemists do it today, actually.

Edit: to the -1, here's a link where a certain chemist that many know is doing exactly this: http://pipeline.corante.com/archives/2013/06/27/sealed_up_and_ready_to_go.php
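
That tweak-and-measure loop is just greedy search over design parameters; a minimal sketch, with a made-up objective function standing in for the physical measurement:

    import random

    def measured_success_rate(design):
        # Stand-in for an experiment: in reality this is a lab assay of
        # how often the machinery accomplishes its task, not a formula.
        return -sum((x - 0.7) ** 2 for x in design)

    design = [random.random() for _ in range(5)]   # 5 tunable parameters
    best = measured_success_rate(design)
    for _ in range(1000):
        variant = [x + random.gauss(0, 0.05) for x in design]
        score = measured_success_rate(variant)
        if score > best:            # keep whichever variant measures better
            design, best = variant, score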

Comment by GeraldMonroe on How probable is Molecular Nanotech? · 2013-06-30T06:10:29.889Z · LW · GW

From reading Radical Abundance:

Drexler believes that not only are stable gears possible, but that every component of a modern, macroscale assembly line can be shrunk to the nanoscale. He believes this because his calculations, and some experiments, show that this works.

He believes that "Nanomachines made of stiff materials can be engineered to employ familiar kinds of moving parts, using bearings that slide, gears that mesh, and springs that stretch and compress (along with latching mechanisms, planetary gears, constant-speed couplings, four-bar linkages, chain drives, conveyor belts . . .)."

The power to do this comes from two sources. First, the "feedstock" to a nanoassembly factory always consists of the element in question bonded to other atoms, such that it is an energetically favorable reaction to bond that element to something else. Specifically, if you were building up a part made of covalently bonded carbon (diamond), the atomic intermediate proposed by Drexler is the carbon dimer (C---C). See http://e-drexler.com/d/05/00/DC10C-mechanosynthesis.pdf

Carbon dimers are unstable, and the carbon in question would rather bond to "graphene-, nanotube-, and diamond-like solids".

The paper I linked shows a proposed tool.

Second, electrostatic motors would be powered by plain old DC current. These would provide the driving energy to turn all the mechanical components of an MNT assembly system. Here's the first example I found by googling of someone getting one to work: http://www.nanowerk.com/spotlight/spotid=19251.php

The control circuitry and sensors for the equipment would be powered the same way.

An actual MNT factory would work like the following. A tool-tip like the one in the paper I linked would be part of just one machine inside the factory. The factory would have hundreds or thousands of separate "assembly lines" that would each pass molecules from station to station, and at each station a single step is performed on the molecule. Once the molecules are "finished", these assembly lines converge onto assembly stations, which deal with molecules that now have hundreds of atoms in them. Nanoscale robot arms grab parts from the assembly lines and place them into larger assemblies. (Notice we've already gone up 100x in scale; the robot arms are therefore much bigger and thicker than the machines in the previous steps, and are integrated systems with guidance circuitry, sensors, and everything you see in large industrial robots today.) These larger assemblies move down bigger assembly lines, with parts from hundreds of smaller sub-lines being added to them.

There are several more increases in scale, with the parts growing larger and larger. Some of these steps are programmable: the robots will follow a pattern that can be changed, so what they produce varies. However, the base assembly lines will not be programmable.

In principle, this kind of "assembly line" could produce entire sub-assemblies that are identical to the sub-assemblies in this nanoscale factory. Microscale robot arms would grab these sub-assemblies and slot them into place to produce "expansion wings" of the same nanoscale factory, or produce a whole new one.

This is also how the technology would be able to produce things that it cannot already make. When the technology is mature, if someone loads a blueprint into a working MNT replication system, and that blueprint requires parts that the current system cannot manufacture, the system would be able to look up in a library the blueprints for the assembly line that does produce those parts, and automatically translate library instructions to instructions the robots in the factory will follow. Basically, before it could produce the product someone ordered, it would have to build another small factory that can produce the product. A mature, fully developed system is only a "universal replicator" because it can produce the machinery to produce the machinery to make anything.

Please note that this is many, many, many generations of technology away. I'm describing a factory the size and complexity of the biggest factories in the world today, and the "tool tip" that is described in the paper I linked is just one teensy part that might theoretically go onto the tip of one of the smallest and simplest machines in that factory.

Also note that this kind of factory must be in a perfect vacuum. The tiniest contaminant will gum up the works and make it seize.

Another constraint to note is this. In Nanosystems, Drexler computes that a mechanical system 10 million times smaller operates, in fact, 10 million times faster. There's a bunch of math to justify this, but basically, scale matters, and for a mechanical system the operating rate scales accordingly. Biological enzymes are about this quick.

This means that an MNT factory, if it used convergent assembly, could produce large, macroscale products at 10 million times the rate a current factory can produce them. Or it could, except that every bonding step that forms a stable bond from unstable intermediates releases heat. That heat production is what Drexler thinks will "throttle" MNT factories: the rate at which you can get heat out will determine how fast the factory can run. Yes, water cooling was proposed :)
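
The scaling claim in arithmetic form (a sketch of the 1/L frequency-scaling rule cited above, with an illustrative base rate):

    scale = 1e7            # linear shrink factor from the Nanosystems example
    base_rate_hz = 1.0     # a macroscale mechanism cycling once per second

    # For stiffness-limited mechanical systems, operating frequency scales
    # roughly as 1/L, so the shrunken mechanism runs ~1e7 times faster -
    # comparable to enzyme turnover rates.
    print(base_rate_hz * scale)   # 1e7 operations per second

    # Throughput per kilogram of machinery rises by the same factor, which
    # is why getting the reaction heat out, not mechanism speed, sets the
    # factory's pace.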

One final note: biological proteins are only being investigated as a bootstrap. The eventual goal will use no biological components at all, and will not resemble biology in any way. You can mentally compare it to how silk and wood were used to make the first airplanes.

Comment by GeraldMonroe on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-26T07:17:38.273Z · LW · GW

This problem is very easy to solve using induction. Base step: the minimum "replicative subunit". For life, that is usually a single cell. For nano-machinery, it is somewhat larger. For the sake of penciling in numbers, suppose you need a robot with a scoop and basic mining tools, a vacuum chamber, a 3d printer able to melt metal powder, a nanomachinery production system that is itself composed of nanomachinery, a plasma furnace, a set of pipes and tubes and storage tanks for producing the feedstock the nanomachinery needs, and a power source.

All in all, you could probably fit a single subunit into the size and mass of a Greyhound bus. One notable problem is that there's enough complexity here that current software probably could not keep a factory like this running forever, because eventually something would break that it doesn't know how to fix.

Anyways, you set down this subunit on a planet. It goes to work. In an hour, the nanomachinery subunit has made a complete copy of itself. In somewhat more time, it has to manufacture a second copy of everything else. The nanomachinery subunit makes all the high end stuff - the sensors, the circuitry, the bearings - everything complex, while the 3d printer makes all the big parts.

Pessimistically, this takes a week. A Greyhound bus is 9x45 feet, and there are 5.5e15 square feet on the earth's surface, so blanketing the planet takes about 1.4e13 bus-sized subunits. With the population doubling every week, covering the whole planet's surface would therefore take about 44 doublings: 44 weeks.
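
The arithmetic, as a sketch:

    import math

    bus_footprint_sqft = 9 * 45          # one replicating subunit
    earth_surface_sqft = 5.5e15
    units = earth_surface_sqft / bus_footprint_sqft   # ~1.4e13 subunits

    # One doubling per week ("pessimistically, this takes a week"):
    print(math.ceil(math.log2(units)))  # 44 doublings -> 44 weeks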

Now you need to do something with all the enormous piles of waste material (stuff you cannot make more subunits with) and un-needed materials. So you reallocate some of the 1.3e13 robotic systems to build electromagnetic launchers to fling the material into orbit. You also need to dispose of the atmosphere at some point, since all that air causes each electromagnetic launch to lose energy as friction, and waste heat is a huge problem. (my example isn't entirely fair, I suspect that waste heat would cook everything before 44 weeks passed). So you build a huge number of stations that either compress the atmosphere or chemically bond the gasses to form solids.

With the vast resources in orbit, you build a sun-shade to stop all solar input to reduce the heat problem, and perhaps you build giant heat radiators in space and fling cold heat sinks to the planet or something. (with no atmospheric friction and superconductive launchers, this might work). You can also build giant solar arrays and beam microwave power down to the planet to supply the equipment so that each subunit no longer needs a nuclear reactor.

Once the earth's crust is gone, what do you do about the rest of the planet's mass? Knock molten globules into orbit by bombarding the planet with high energy projectiles? Build some kind of heat resistant containers that you launch into space full of lava? I don't know. But at this point you have converted the entire earth's crust into machines or waste piles to work with.

This is also yet another reason that AI is part of the puzzle. Even if failures were rare, there probably are not enough humans available to keep 1e13 robotic systems functioning, if each system occasionally needed a remote worker to log in and repair some fault. There's also the engineering part of the challenge : these later steps require very complex systems to be designed and operated. If you have human grade AI, and the hardware to run a single human grade entity is just a few kilograms of nano-circuitry (like the actual hardware in your skull), you can create more intelligence to run the system as fast as you replicate everything else.

Comment by GeraldMonroe on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-25T21:07:47.179Z · LW · GW

Bacteria, as well as all life, are stuck at a local maximum because evolution cannot find optimal solutions. Part of Drexler's work is to estimate what the theoretical optimum solutions can do.

My statement "tear apart planets" assumed too much knowledge on the part of the reader. I thought it was frankly pretty obvious. If you have a controllable piece of industrial machinery that uses electricity and can process common elements into copies of itself, but runs no faster than bacteria, tearing apart a planet is a straightforward engineering excercise. I did NOT mean the machinery looked like bacteria in any way, merely that it could copy itself no faster than bacteria.

And by "copy itself", what I really meant is that given supplies of feedstock (bacteria need sugar, water, and a few trace elements...our "nanomachinery" would need electricity, and a supply of intermediates for every element you are working with in a pure form) it can arrange that feedstock into thousands of complex machine parts, such that the machinery that is doing this process can make it's own mass in atomically perfect products in an hour.

I'll leave it up to you to figure out how you could use this tech to take a planet apart in a few decades. I don't mean a sci-fi swarm of goo, I mean an organized effort resembling a modern mine or construction site.

Comment by GeraldMonroe on After critical event W happens, they still won't believe you · 2013-06-23T06:11:44.615Z · LW · GW

Alas, cryonics may be screwed with regards to this. It simply may not be physically possible to freeze something as large and delicate as a brain without enough damage to prevent you from thawing it and having it still work. This is of course no big deal if you just want the brain for the pattern it contains. You can computationally reverse the cracks, and to a lesser extent some of the more severe damage, the same way we can computationally reconstruct a shredded document.

The point is, I think in terms of relative difficulty, the order is:

  1. Whole brain emulation
  2. Artificial biological brain/body
  3. Brain/body repaired via MNT
  4. Brain revivable with no repairs.

Note that even the "easiest" item on this list is extremely difficult.

Comment by GeraldMonroe on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-23T06:01:01.338Z · LW · GW

This is also wrong. The actual proposals for MNT involve creating a system that is very stable, so you can measure it safely. The actual machinery is a bunch of parts that are as strong as they can possibly be made (this is why the usual proposals involve covalently bonded carbon, aka diamond), so they are stable and you can poke them with a probe. You keep the box as cold as practical.

It's true that even if you set everything up perfectly, there are some events that can't be observed directly, such as bonding and rearrangements that could destroy the machine. In addition, practical MNT systems would be 3d mazes of machinery stacked on top of each other, so it would be very difficult to diagnose failures. To summarize: in a world with working MNT, there's still lots of work to be done.

Comment by GeraldMonroe on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-23T05:39:29.967Z · LW · GW

The way biological nanotechnology (aka the body you are using to read this) solves this problem is by bonding the molecule being "worked on" to a larger, more stable molecule. This means that instead of a whole box of legos shaking around everywhere, as you put it, it's a single lego shaking around while bonded to a tool. (The tool is composed of more legos, true, but it's made of a LOT of legos connected in a way that makes it fairly stable.) The tool is able to grab the other lego you want to stick to the first one, and is able to press the two together in a way that gives the bonding reaction a low energetic barrier. The tool is shaped such that other side-reactions won't "fit" very easily.

Anyways, after a series of these reactions, you eventually have the final product: a nice finished assembly that is glued together pretty strongly. In the final step you break the final product loose from the tool, analogous to ejecting a cast product from a mold. Check it out: http://en.wikipedia.org/wiki/Pyruvate_dehydrogenase

Note a key difference here between biological nanotech (life) and the way you described it in the OP. You need a specific toolset to create a specific final product. You CANNOT make any old molecule. However, you can build these tools from peptide chains, so if you did want another molecule you might be able to code up a new set of tools to make it (and possibly build those tools using the tools you already have).

Another key factor here is that the machine that does this would operate in an environment alien to existing life: it would operate in a clean vacuum, possibly at low temperatures, and would use extremely stiff subunits made of covalently bonded silicon or carbon. The idea here is to make your "lego" analogy manageable. All the "legos" in the box are glued tightly to one another (low temperature, strong covalent bonds) except for the ones you are actually playing with, and no extraneous legos are allowed to enter the box (vacuum chamber).

If you want to bond a blue lego to a red lego, you force the two together in a way that controls how they are oriented during the bonding. Check it out: http://www.youtube.com/watch?v=mY5192g1gQg

Current organic chemical synthesis DOES operate as a box of shaking legos, and this is exactly why it is very difficult to get lego models to come out without the pieces mis-bonded: http://en.wikipedia.org/wiki/Thalidomide

As for your "Shroedinger Equations are impractical to compute" : what this means is that the Lego Engineers (sorry, nanotech engineers) of the future will not be able to solve any problem in a computer alone, they'll have to build prototypes and test them the hard way, just as it is today.

Also, this is one place where AI comes in. The universe doesn't have any trouble modeling the energetics of a large network of atoms. If we have trouble doing the same, even using gigantic computers made of many, many of these same atoms, then maybe the problem is that we are doing it in a hugely inefficient way. An entity smarter than humans might find a way to re-formulate the math for many orders of magnitude more efficient calculation, or it might find a way to build a computer that uses the atoms it is composed of more efficiently.

Comment by GeraldMonroe on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-23T05:06:00.644Z · LW · GW

Nanosystems discusses theoretical maximums. However, even if you make the assumption that living cells are as good as it gets, an E. coli - which we know from extensive analysis uses around 25,000 moving parts - can double itself in 20 minutes.

So in theory, you have some kind of nano-robotic system that is able to build stuff. Probably not any old stuff - but it could produce tiny subunits that can be assembled to make other nano-robotic systems, and other similar things.

And if it ran as fast as an E. coli, it could build a copy of itself every 20 minutes.

That's still pretty much a revolution, a technology that could be used to tear apart planets. It just might take a bit longer than it takes in pulp sci-fi.
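
To see how little the "slow" 20-minute doubling time constrains things, run the exponent (the crust mass is a rough figure I've assumed, and this ignores every real bottleneck):

    import math

    doubling_minutes = 20
    start_kg = 1.0
    crust_kg = 2.8e22            # rough mass of Earth's crust

    doublings = math.log2(crust_kg / start_kg)      # ~75
    print(doublings * doubling_minutes / 60)        # ~25 hours, idealized
    # In practice feedstock gathering, energy supply, and waste heat - not
    # replication speed - set the pace, stretching this by many orders of
    # magnitude toward a decades-long timescale.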

Comment by GeraldMonroe on Near-Term Risk: Killer Robots a Threat to Freedom and Democracy · 2013-06-17T18:25:44.489Z · LW · GW

I wanted to make a concrete proposal. Why does it have to be autonomous? Because in urban combat, the combatants will usually choose a firing position that has cover. They "pop up" from the cover, take a few shots, then position themselves behind cover again. An autonomous system could presumably return accurate fire much faster than human reflexes. (It wouldn't be instant: there's a delay for the servos of the automated gun to aim at the target, plus signal delays - you have to wait for the sound to reach all the acoustic sensors in the drone swarm, then there are processing delays, then time for the projectiles of the return fire to reach the target.)

Also, the autonomous mode would hopefully be chosen only as a last resort, with a human normally in the loop somewhere to authorize each decision to fire.

As for a threat to democracy? Defined how? You mean a system of governance where a large number of people, who are easily manipulated via media, on the average know fuck-all about a particular issue, are almost universally not using rational thought, and the votes give everyone a theoretically equal say regardless of knowledge or intelligence?

I don't think that democracy is something that should be used as an ideal nor a terminal value on this website. It has too many obvious faults.

As for humans needing to be employed: autonomous return-fire drones are going to be very expensive to build and maintain. That "expense" means that the labor of thousands is needed somewhere in the process.

However, in the long run, it's obviously possible to build factories that churn them out faster than soldiers can be replaced. Numerous examples of this happened during WW2, where even high-technology items such as aircraft were easier to replace than the pilots who flew them.

Comment by GeraldMonroe on Near-Term Risk: Killer Robots a Threat to Freedom and Democracy · 2013-06-16T15:13:54.725Z · LW · GW

Let's talk actual hardware.

Here's a practical, autonomous kill system that is possibly feasible with current technology. A network of drone helicopters armed with rifles and sensors that can detect the muzzle flashes, sound, and in some cases projectiles of an AK-47 being fired.

Sort of like this aircraft: http://en.wikipedia.org/wiki/Autonomous_Rotorcraft_Sniper_System

Combined with sensors based on this patent: http://www.google.com/patents/US5686889

http://en.wikipedia.org/wiki/Gunfire_locator

and this one http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1396471&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F9608%2F30354%2F01396471

The hardware and software would be optimized for detecting AK-47 fire, though it would be able to detect most firearms. Some of these sensors work best if multiple platforms armed with the same sensor are spread out in space, so there would need to be several of these drones hovering overhead for maximum effectiveness.

How would this system be used? Whenever a group of soldiers leaves the post, they would all wear blue force trackers that clearly mark them as friendly. When they are at risk of attack, a swarm of drones follows them overhead. If someone fires at them, the following autonomous kill decision is made:

if( SystemIsArmed && EventSmallArmsFire && NearestBlueForceTracker > X meters && ProbableError < Y meters) ShootBack();
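
The same rule fleshed out a little (function name and threshold values are illustrative, not from any fielded system):

    def authorize_return_fire(system_armed: bool,
                              small_arms_event: bool,
                              nearest_blue_tracker_m: float,
                              probable_error_m: float,
                              min_friendly_standoff_m: float = 50.0,
                              max_probable_error_m: float = 5.0) -> bool:
        # Fire only if: the autonomous mode is armed, hostile small-arms
        # fire was detected, no blue force tracker is near the computed
        # firing solution, and the solution's error estimate is tight.
        return (system_armed
                and small_arms_event
                and nearest_blue_tracker_m > min_friendly_standoff_m
                and probable_error_m < max_probable_error_m)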

Sure, a system like this might make mistakes. However, here's the state-of-the-art method used today:

http://www.youtube.com/watch?list=PL75DEC9EEB25A0DF0&feature=player_detailpage&v=uZ2SWWDt8Wg

This same youtube channel has dozens of similar combat videos. An autonomous killing drone system would save soldiers' lives and kill fewer civilians. (Drawbacks include the high cost to develop and maintain it.)

Other, more advanced systems are also at least conceivable. Ground robots that could storm a building, killing anyone carrying a weapon or matching specific faces? The current method is to blow the entire building to pieces. Even if the robots made frequent errors, they might be more effective than bombing the building.

Comment by GeraldMonroe on Who thinks quantum computing will be necessary for AI? · 2013-05-29T13:40:45.047Z · LW · GW

An optimal de novo AI, sure. Keep in mind that human beings have to design this thing, so the first version will be very far from optimal. I think it's a plausible guess that it will need hardware on the order of what an efficient whole brain emulator requires.

And this assumption shows why all the promises made by past AI researchers have so far failed: we are still a factor of 10,000 or so away from having the required hardware, even using supercomputers.

Comment by GeraldMonroe on Who thinks quantum computing will be necessary for AI? · 2013-05-29T01:23:34.461Z · LW · GW

These people's objections are not entirely unfounded. It's true that there is little evidence the brain exploits QM effects (which is not to say it is completely certain it does not). However, if you try to pencil in real numbers for the hardware requirements of a whole brain emulation, they are quite absurd. Assumptions differ, but building a computational system with sufficient nodes to emulate all 100 trillion synapses could cost hundreds of billions to over a trillion dollars if you had to use today's hardware.
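
One set of assumptions that lands in that price range (every number below is my assumption, picked to match 2013-era hardware, not a figure from the comment):

    synapses = 1e14          # "100 trillion synapses"
    flops_per_synapse = 1e6  # assumed: detailed, compartment-level modeling
    total_flops = synapses * flops_per_synapse        # 1e20 FLOPS

    # Assumed 2013 price point: ~$100M buys ~20 petaFLOPS (Titan-class).
    dollars_per_flops = 100e6 / 20e15
    print(total_flops * dollars_per_flops)            # ~$5e11, i.e. ~$500B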

The point is: you can simplify these people's arguments to "I'm not worried about the imminent existence of AI because we cannot build the hardware to run one". The fact that a detail of the argument is wrong doesn't change the conclusion.

Comment by GeraldMonroe on Isolated AI with no chat whatsoever · 2013-01-29T21:47:56.790Z · LW · GW

A stable outcome is possible where such a self-improving AI is unable to form.

This outcome can happen if the "human based" AIs occupy all ecological space within this solar system. That is, there might be humans alive, but all significant resources would be policed by the AIs. Assuming a self-improving AI, no matter how smart, still needs access to matter and energy to grow, it would never be able to gain a foothold.

The real life example is earth's biosphere : all living things are restricted to a subset of the possible solution space for a similar reason, and have been for several billion years.

Comment by GeraldMonroe on Isolated AI with no chat whatsoever · 2013-01-29T02:03:59.949Z · LW · GW

Alternate proposal: here's a specific proposal, developed with feedback from members of the #FAI channel on IRC.

Instead of building non-human optimizing algorithms, we develop accurate whole brain emulations of once-living people. The simulation hardware is a set of custom-designed chips with hardware restrictions that prevent external writes to the memory cells storing the parameters for each emulated synapse. That is to say, the emulated neural network can update its synapses and develop new ones (aka it can learn), but the wires that would allow it to totally rewrite itself are permanently disconnected in the chip. (It's basically a write-once FPGA: you need to write once to take the compiled mapping of a human mind and load it into the chips.)

Thus, these emulations of human beings can only change themselves in a limited manner. This restriction is present in real human brain tissue: neurons in lower-level systems have far less flexibility during the lifespan of adults. You cannot "learn" to not breathe, for instance. (You can hold your breath via executive function, but once you pass out, cells in the brainstem will cause you to breathe again.)

This security measure prevents a lot of possible failures.

Anyways, you don't just scan and emulate one human being. An isolated person is not an entity capable of prolonged independent operation and self-improvement; humans have evolved to function properly in small tribes. So you have to scan enough people to create an entire tribe, sufficient for the social bonds and so forth needed to keep people sane. During this entire process, you use hardware blocks to physically prevent the emulation speed from exceeding a certain multiple of realtime (current limiters or something).

Once you have an entire working tribe of sane people, interconnected in a manner that lets them act as checks on each other, you gradually increase their responsibility and capabilities (by boosting emulation speeds, making them responsible for gradually more complex systems, etc).

Eventually, this emulated tribe would run at a maximum rate of perhaps 10^6 times real-time and be capable of limited self-improvement. Compared to extant human beings, people like this would have effective super-intelligence and would most likely be capable of solving problems to improve the quality and length of human lives. Maybe they could not develop "magic" and take over the universe (if that is possible), but they could certainly solve the problems of humanity.

I'd much rather have a weak super-intelligence smart enough to make great quality 3d molecular printers, and giant space habitats for humans to live in, and genetic patches to stop all human aging and disease, and slow but working starships, and a sane form of government, and a method to backup human personalities to recover from accidental and violent death, and so on at the head of things.

Ideally, this super-intelligence would consist of a network of former humans who cannot change so much as to forget their roots. (because of those previously mentioned blocks in the hardware)

Comment by GeraldMonroe on Isolated AI with no chat whatsoever · 2013-01-29T01:51:24.737Z · LW · GW

How would you build such an AI? Most or all proposals for developing a super-human AI require extensive feedback between the AI and the environment. A machine cannot iteratively learn how to become super-intelligent if it has no way of testing improvements to itself against the real universe and feedback from its operators, can it?

I'll allow that if an extremely computationally expensive simulation of the real world were used, it is at least possible to imagine that the AI could iteratively make itself smarter by using the simulation to test improvements.

However, this poses a problem. At some point N years from today, it is predicted that we will have sufficiently advanced computer hardware to support a super-intelligent AI (N can be negative for those who believe that day is in the past). So we need X amount of computational power. (I think the Whole Brain Emulation roadmap can give you a guesstimate for X.)

Well, to also simulate enough of the universe to a sufficient level of detail for the AI to learn against it, we need Y amount of computational power. Y is a big number, and most likely bigger than X. Thus, there will be years (decades? centuries?) during which X is available to a sufficiently well-funded group, but X+Y is not.

It's entirely reasonable to suppose that we will have to deal with AI (and survive them...) before we ever have the ability to create this kind of box.

Comment by GeraldMonroe on [Link] Statistically, People Are Not Very Good At Making Voting Decisions · 2012-12-31T21:55:46.616Z · LW · GW

If you think about voting decisions as an intelligent collective entity making decisions, the question naturally arises: why does the system work at all? Sure, there are massive flaws, but overall the government of the United States does maintain a powerful military, build and maintain a decent set of roads, keep the mail delivered, care for millions via the VA, etc, etc, etc.

(Note: state governments are typically selected through even more arbitrary and uninformed votes.)

If you think of the masses as a collective with an IQ down in the mentally retarded range, it is difficult to see how this is even possible at all.

There are various theories, of course, but one possibility is that a "bug" in the system is why it functions at all. Powerful people in "smoke filled back rooms" decide who the candidates the masses choose between actually are. These people are themselves intelligent, and they select decision makers intelligent enough that the system continues to function, more or less. The problems are mostly limited to cases where the powerful people calling the shots collectively make short-sighted decisions, such as sponsoring candidates who would rather lower taxes now and let the roads go to ruin, and so on.

No, I don't think these powerful people in smoke filled back rooms coordinate with each other very much. This is the difference between this hypothesis and the "conspiracy" hypothesis people create to explain how politics work.

Comment by GeraldMonroe on Bad news for uploading · 2012-12-13T23:54:03.992Z · LW · GW

As in, the number and type of neurotransmitter receptors embedded in each synapse.

This isn't "disappointing", this was expected. The initial wiring layout is random, though there's some pruning that occurs in early brain development.

Comment by GeraldMonroe on Mini advent calendar of Xrisks: Pandemics · 2012-12-07T01:21:47.771Z · LW · GW

This isn't true. Viruses are subject to evolutionary pressure even inside a single patient. They don't replicate perfectly (partly because they have to be small and simple, and don't have very good control of the cellular environment they are inside, being invaders and all) and so variants of the particle compete with one another. Because of this, features that might be desired in a bioweapon but are not needed in order for the virus to replicate can get lost.

For instance, a bioweapon virus might contain genes for botulism toxin in order to kill the host. However, copying this gene every generation would diminish the particle's ability to replicate, and so variants of the particle that are missing the gene would have a small evolutionary advantage. After just a few patients, the wild version of the virus might have lost this feature.
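
A toy simulation of that selection pressure (the replication cost and gene-loss rate are arbitrary assumptions):

    import random

    POP = 10_000
    COST = 0.05        # assumed: toxin gene slows replication by 5%
    LOSS_RATE = 1e-3   # assumed: chance per copy of deleting the gene

    pop = [True] * POP          # True = virion carries the toxin gene
    for _ in range(300):        # 300 rounds of replication
        weights = [1.0 - COST if g else 1.0 for g in pop]
        offspring = random.choices(pop, weights=weights, k=POP)
        # Sloppy copying occasionally drops the payload, never restores it.
        pop = [g and random.random() > LOSS_RATE for g in offspring]

    print(sum(pop) / POP)       # fraction still carrying the gene: near 0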

Comment by GeraldMonroe on Mini advent calendar of Xrisks: Pandemics · 2012-12-07T01:12:42.821Z · LW · GW

The latter. I've read of limited successes in other fields of research (no one is publicly trying to make something like this) that indicate it's just barely possible, maybe, with some luck.

One nasty thing is that the virus doesn't have to be safe. It just has to work, and it's not a problem if it permanently damages the people it doesn't kill. So, creating a weapon like this is fundamentally much easier than trying to create, say, a treatment for cancer using similar methods.

Comment by GeraldMonroe on [SEQ RERUN] Whither Manufacturing? · 2012-12-06T22:05:07.821Z · LW · GW

There are advantages to local production. If, instead of huge single-purpose factories located at a few places in the world, you have general-purpose fabrication plants located near the customer (whether an individual or a company consuming resources), you greatly reduce the time lag between an order being placed and a product being received.

No need to warehouse final products - you might produce "general purpose" subunits and stockpile those, and every time someone places an order, you assemble the desired final product and send a robotic delivery vehicle out to deliver it.

The advantage of local production is reduced time lag: it might be a matter of hours between someone placing an order and the freshly manufactured product arriving. Moreover, since "nanofabs" can produce a huge range of possibilities, someone could post a refined design for a product, presumably pass some kind of safety inspection, and the moment the product is approved for sale, it could be available everywhere in the world that is near a nanofabrication plant.

Due to these pressures, one can imagine the tradeoffs working out to where maybe every small city has a nanofabrication plant or two, but people don't have them in their garages or basements because of the licensing fees and regulations.

There's another huge advantage here that is much more interesting. The whole concept of a minivan-sized machine that can produce almost anything, including the parts to assemble a copy of itself, does a lot more than refine supply chains. You could start the process of converting the entire moon into useful products merely by launching a couple of Apollo-sized landers loaded with the seed machinery. On a longer timescale, it would make interstellar colonization practical (since a starship merely needs to transport a minimum set of nanomachinery production equipment to get started, and a big enough library of things to be made).

And, you could escape an apocalypse by building a truly self-contained bunker, buried deep underground with a nano-machinery plant to produce everything you need.

Comment by GeraldMonroe on Mini advent calendar of Xrisks: Pandemics · 2012-12-06T20:52:14.507Z · LW · GW

The theory goes: plagues that are especially deadly must spread through the body extremely quickly. Otherwise, they give the immune system time for the B cells to formulate an antibody. Yet if the plague spreads quickly, it has a short incubation period, which means hosts will die before spreading it. Ebola is thought to fit this part of the ecology, and this is one reason why the virus is rare.

A virus that spread itself like the flu but also killed like ebola would be pushed by evolution away from these properties, because it would kill off its hosts too quickly.

Another factor is that some of the better viruses for evading the immune system (HIV) depend on being able to randomly recombine and change the pattern for their outer shells.

If you designed a virus that had a tough outer coating, targeted cells and receptors designed to kill the host, and had some kind of sophisticated clock mechanism to force a long incubation period, you would be forced to give it genes that would code for complex error correcting proteins so that each new generation of the virus would have a low chance of containing a mutation. This would in turn prevent it from evolving, allowing the immune system (and synthetic antibodies) to target it easily.

So, you'd have to deliberately make it able to adjust its own outer coat randomly, but not any other components.

Such a virus is not something evolution is likely to ever create (for one, it would drive its hosts extinct, and for another, evolution doesn't work like this: as an algorithm, it finds the highest point on the NEAREST hill in the solution space, not the peak of a theoretical mountain that towers over the solution space).

Net result: with very sophisticated bioscience, a person-killer with these overlapping qualities could be created. However, you are correct that there is a reason you don't see them in nature.

Comment by GeraldMonroe on Mini advent calendar of Xrisks: synthetic biology · 2012-12-04T17:56:17.160Z · LW · GW

An example of a nasty trick that would make for a relatively easy to produce and deploy bio-weapon : http://www.plospathogens.org/article/info%3Adoi%2F10.1371%2Fjournal.ppat.1001257

Inhaled prions have extremely long incubation times (years), so it would be possible for an attacker to expose huge numbers of people unknowingly to them. The disease it causes is slow and insidious, and as of today, there is no way to detect it until post-mortem. There's no treatment, either. I'm not certain of the procedure for making prions in massive quantities in the laboratory, but since they are self-replicating if placed in a medium containing the respective protein, they probably could be mass-produced.

On the bright side, the disease would not be self-replicating in the wild, so it would not be an existential risk - merely a very nasty way to cause mass casualties. Also, this method has never been tested on humans, so it might not be very effective; one can hope that terrorists will stick with bombs.

Comment by GeraldMonroe on [Link] The real end of science · 2012-10-05T04:08:23.556Z · LW · GW

Again, this is one of those approaches that sounds good at a conference, but when you actually sit there and think about it rationally, it shows its flaws.

Even if you know exactly what pathway to hit, a small molecule by definition will get everywhere and gum up the works for many, many other systems in the body. It's almost impossible for it not to. Sure, there's a tiny solution space of small molecules that are safe enough to use despite this, but even then you're going to have side effects, and you still haven't fixed anything. The reason cells are giving up and failing as a person ages is that their genetic code has reached a stage that calls for this. We're still teasing out the exact regulatory mechanisms, but the evidence for this is overwhelming.

No small molecule can fix this problem. Say one of the side effects of this end-of-life regulatory state is that some cells have intracellular calcium levels that are too high, while another set has them too low. Tell me a small molecule exists, out of the billions of possibilities, that can fix that.

DNA patching and code updates are something that would basically require Drexlerian nanorobotics, subject to the issues above.

Methods to "rollback" cells to their previous developmental states, then re-differentiate them to functional components for a laboratory grown replacement organ actually fix this problem.

For some reason, most of the resources (funding and people) are not pouring into rushing Drexlerian nanorobotics or replacement organs to the prototype stage.

Comment by GeraldMonroe on [Link] The real end of science · 2012-10-04T18:58:18.842Z · LW · GW

Meh, another buzzword. I actually don't think we'll see nanosurgery for a very long time, and we should be able to solve the problem of "death" many generations of tech before we can do nanosurgery.

Think about what you actually need to do this. You need a small robot, composed of non-biological parts at the nanoscale. Presumably, this means diamondoid components such as motors, gears, and bearings, as well as internal power storage, propulsion, sensors, and so on. The reason for non-biological parts is that biological parts are too floppy and unpredictable, and are too difficult to rationally engineer into a working machine.

Anyways, this machine is very precisely made, probably manufactured in a perfect vacuum at low temperatures. Putting it into a dirty liquid environment will require many generations of engineering beyond the first generation of nanomachinery, which will only function in a perfect vacuum at low temperatures. And it has to deal with power and communication issues.

Now, how does this machine actually repair anything? Perhaps it can clean up plaques in the arteries, but how does it fix the faulty DNA in damaged skin cells that causes the skin to sag with age? How does it enter a living cell without damaging it? How does it operate inside a living cell without getting shoved around, away from where it needs to be? How do its sensors work in such a chaotic environment?

I'm not saying it can't be done. In fact, I am pretty sure it can be done. I'm saying that this is a VERY VERY hard engineering problem, one that would require inconceivable amounts of effort. Using modern techniques, this problem may in fact be so complex that, even if we had the information about biology and the nanoscale needed to even start on the project, it might be infeasible with modern resources.

If you have these machines, you have a machine that can create other nanomachines with atomically precise components. Your machine probably needs a vacuum and low temperatures, as before. Well, that machine can probably make variants of itself that are far simpler to design than a biologically compatible repair robot - say, a variant that, instead of performing additive manufacturing at the nanoscale, tears down an existing object at the nanoscale and informs the control machinery about the pattern it finds.

Anyways, long story short: with a lot less effort, the same technology needed for nanosurgery could deconstruct preserved human brains and build computers powerful enough to simulate those brains accurately and at high speed. This solves the problem of "death" quite neatly: rather than trying to patch up your decaying mass of biological tissue with nanosurgery, you get yourself preserved and converted into a computer simulation that does not decay at all.

Comment by GeraldMonroe on [Link] The real end of science · 2012-10-04T18:39:22.394Z · LW · GW

The method I described WILL work. The laws of physics say it will. Small-scale experiments show it working. It isn't that complicated to understand: bad mRNA present = cell dies. All tumors, no matter what, have bad mRNAs, wherever they happen to be found in the body.

But it has to be developed and refined, with huge resources put into each element of the problem.

Here, specifically, is the difference between my proposed method and the current 'state of the art'. Ok, so the NIH holds a big meeting. They draw a massive flow chart. Teams 1, 2, 3 - your expertise is in immunology. Find a coating that will evade the immune system and can encapsulate a large enough device. Million-dollar prize to the first team that succeeds. Here are the specific criteria for success.

Team 4 - for some reason, healthy cells are dying when too many copies of the prototype device are injected. A million dollars if you can find a solution to this problem within 6 months.

Team 5 - we need alternate chemotherapy agents to attach to this device.

Team 6 - we need a manufacturing method.

Once a goal is identified and a team is assigned, they are allocated resources within a week. Rather than rationing and penny-pinching funds, the overall effort has a huge budget, and equipment is purchased or loaned between groups as needed. The teams would work in massive integrated laboratories located across the country, with multiple teams in each laboratory for cross-trading of skills and ideas.

And so on and so forth. The current model is: "Ok, so you want to research whether near-infrared lasers and tumor cells will work. You have this lengthy list of paper credentials, and lasers and cancer sound like buzzwords we like to hear. Also, your buddies all rubber-stamped your idea during review. Here are your funds; hope to see a paper in 2 years"...

No one ever considers: "How likely is this actually to be better than the high-frequency radiation we already have? How much time is this really going to buy a patient, even if it is a better method?"

The fact is, I've looked at the list of all ongoing research at several major institutions, and they are usually nearly all projects of similarly questionable long-term utility. Sure, maybe a miracle will happen and someone will discover an easy and cheap method that works incredibly well that no one ever thought would work.

But a molecular machine, composed of mostly organic protein based parts, that detects bad mRNAs and kills the cell is an idea that WILL work. It DOES work in rats. More importantly, it is a method that can potentially hunt down tumor cells of any type, no matter where they are hiding, no matter how many metastases are present.

Anyone using rational thought would realize that this is an idea that is nearly certain to work (in the long run, that is - I'm not saying a big research project might not hit a few showstoppers along the way).

And there is money going to this idea - but it's having to compete with 1000 other methods that don't have the potential to actually kill every tumor cell in a patient and cure them.

Comment by GeraldMonroe on The difficulty in predicting AI, in three lines · 2012-10-04T01:37:10.528Z · LW · GW

A working AI probably needs to duplicate thousands of individual systems found in the human mind. Whether we get there by scanning a brain for 4 years with 1 million electron beams working in parallel, or by having thousands of programming teams develop each subsystem, this is not going to be cheap.

You don't get there by accident - evolution did it, but it took millions of years, with each subsystem building upon the previous ones.

Have you heard anything about some massive corporation or government getting ready to drop a few tril on an all out effort?

No, and the current discussions are about how there are not enough common resources to pay for current needs. There isn't enough money to fund large militaries, pay all of the expenses of the elderly, fix the roads, and do everything else as it is. Money has to be borrowed from more successful economies, which just makes the fiscal crisis worse in the future.

Also, no corporation can justify spending more money than any company on the planet actually has to develop something that no one has ever done before and thus seems likely to fail.

Having read the brain emulation roadmap, and articles on how modern neural networks can successfully model individual subsystems of the human mind, this does not seem like a problem that we have to wait another 100 years to solve. The human race might be able to do it in 20 years if it started today and put the needed resources into the problem.

But it isn't going to happen, and predictions of success can't really be made until the actual effort is initiated. It could be 10 years from now, it could be 200. On the plus side, as time goes on, the cost of doing this goes down to an extent: the total "bill of materials" for the hardware drops every year with Moore's law, and better software techniques make it more likely that such a huge project could be developed without being so buggy it wouldn't run at all. But 30 years from now, it will still be a difficult and expensive endeavor needing a lot of resources.

Comment by GeraldMonroe on [Link] The real end of science · 2012-10-04T01:25:47.386Z · LW · GW

You missed the boat completely. I'm not modding you down, because this is an easy cognitive error to make, and I just hit you with a wall of text that does need better editing.

I just said that the model of "basic research" is WRONG. You can't throw billions at individual groups, each eating away a tiny piece of the puzzle doing basic research and expect to get a working device that fixes the real problems.

You'll get rafts of "papers" that each try to inform the world about some tiny element of how things work, but fail miserably in their mission for a bunch of reasons.

Instead you need targeted, GOAL-oriented research, and a game plan to win. When groups learn things, they need to update a wiki or some other information management tool with what they have found out and how certain they are that they are correct - not hide their actual discovery in a huge, jargon-laden paper with 50 references at the end.
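As a sketch of what such an information management tool might store per finding (all field names and the example entry are hypothetical, not an existing system):

```python
# Minimal sketch of a record in a goal-oriented research registry:
# plain-language claims with explicit confidence, instead of results
# buried in papers. Everything here is an invented illustration.

from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str                  # plain-language statement of what was learned
    confidence: float           # team's estimated probability the claim is correct
    evidence: list[str] = field(default_factory=list)            # links to raw data / protocols
    blocked_approaches: list[str] = field(default_factory=list)  # dead ends others can skip

registry = [
    Finding(
        claim="Construct X kills >90% of cultured tumor cells carrying mutant mRNA Y",
        confidence=0.7,
        evidence=["lab-notebook/2012-09-14", "dataset/xy-cytotoxicity.csv"],
        blocked_approaches=["delivery via liposome formulation A (unstable)"],
    ),
]
```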

Comment by GeraldMonroe on When does something stop being a “self-consistent idea” and become scientific fact? · 2012-10-04T01:14:23.740Z · LW · GW

George the Giant or invisible cosmic springs that are too small to ever measure? Also a bunch of extra spatial dimensions that information can travel through without us being able to see it. I see what you did there.

Ultimately I'd say there is nothing wrong with the primitives' thinking, as long as they are willing to upgrade their model as better evidence becomes available. When the primitives finally send someone to check the other side of the mountain and see no giant, they need to eventually conclude that the cause must be somewhere else they cannot see, like under the ground.

Also, if they think of a simpler model RIGHT NOW that meets all the above restrictions, they should "upgrade" to that simpler model because it saves computational time to work with it.

When the primitives develop tools to actually see the ground moving, and find out it does this all the time, they have to upgrade their model further to realize that somehow earthquakes are a property of the ground itself.

And so on up to present, and possibly the future when we can model the entire earth as a series of potential energy reservoirs and predict precisely when stress levels are building to the point of an energy transfer occurring.
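To make the "upgrade your model" step concrete, here is a toy Bayesian update; the priors and likelihoods are assumed numbers of my own, purely for illustration:

```python
# Toy illustration: Bayesian updating of two earthquake hypotheses.
# "Giant" predicts quakes only when a giant could be behind the
# mountain; "ground itself" predicts them regardless. All numbers
# are invented.

priors = {"giant_behind_mountain": 0.5, "property_of_the_ground": 0.5}

# Likelihood of the observation "scouts saw no giant, yet the ground
# still shook" under each hypothesis (assumed values).
likelihood = {"giant_behind_mountain": 0.05, "property_of_the_ground": 0.9}

evidence_prob = sum(priors[h] * likelihood[h] for h in priors)
posteriors = {h: priors[h] * likelihood[h] / evidence_prob for h in priors}

print(posteriors)
# {'giant_behind_mountain': ~0.05, 'property_of_the_ground': ~0.95}
# One decisive observation shifts nearly all belief to the better model.
```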

Comment by GeraldMonroe on [Link] The real end of science · 2012-10-04T00:45:34.441Z · LW · GW

It's easy to point fingers at a very sick subset of scientific endeavors - biomedical research. The reasons it is messed up and not very productive are myriad. Fake and non-reproducible results that waste everyone's time are one facet of the problem. The big one I observed was that trying to make a useful tool to solve a real problem with the human body is NOT something that the traditional model can handle very well. The human body is so immensely complex. This means that "easy" solutions are not going to work. You can't repair a jet engine by putting sawdust in the engine oil or some other cheap trick, can you? Why would you think a very small molecule that can interact with any one of tens of thousands of proteins in an unpredictable manner could fix anything either? (or a beam of radiation, or chopping out an entire sub-system and replacing it with a shoddy substitute made by cannibalizing something else, or delivering crude electric shocks to a huge region. I've just named nearly every trick in the arsenal)

Most biomedical research is slanted towards this "cheap trick" solution, however. The reason is that the model encourages it. University research teams usually consist of a principal investigator and a small cadre of graduate students, and a relatively small budget. They are under a deadline to come up with something-anything useful within a few years, and the failures don't receive tenure and are fired. Pharmaceutical research teams also want a quick and cheap solution, generally, for a similar reason. Most of the low-hanging fruit - small molecule drugs that are safe and effective - has already been plucked, and in any case there is a limit to the problems in biological systems that can actually be fixed with small molecules. If a complex machine is broken, you usually need to shut it off and replace major components. You are not going to be able to spray some magic oil and fix the fault.

For example, how might you plausibly cure cancer? Well, what do cancer cells share in common? Markers on the outside of the cells? Nope, if there were, the immune system would usually detect them. Are the cells always making some foreign protein? Nope, same problem. All tumors share mutated genes, and thus have mRNAs present in the cells that you can detect.

So how might you exploit this? Somehow you have to build a tool that can get into cells near the tumor and detect the ones with these faulty mRNAs (and kill them). Also, this tool must not affect healthy cells.
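Written as ordinary code, the decision logic such a tool would have to implement looks something like this. This is a conceptual sketch only: a real device would be built from molecular parts, not software, and the mutant-mRNA signatures below are placeholder names:

```python
# Conceptual sketch of the kill/spare decision logic a tumor-hunting
# molecular machine would need. The signature names are illustrative
# placeholders, not a validated target list.

KNOWN_MUTANT_SIGNATURES = {"KRAS_G12D_mRNA", "TP53_R175H_mRNA"}

def cell_fate(cell_mrna_pool: set[str]) -> str:
    """Kill the cell only if it expresses a known tumor-specific mRNA."""
    if cell_mrna_pool & KNOWN_MUTANT_SIGNATURES:
        return "trigger apoptosis"
    return "leave cell untouched"   # healthy cells must be spared

print(cell_fate({"ACTB_mRNA", "KRAS_G12D_mRNA"}))  # trigger apoptosis
print(cell_fate({"ACTB_mRNA", "GAPDH_mRNA"}))      # leave cell untouched
```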

If you break down the components of the tool, you realize it would have to be quite complex, with many sub-elements that have to be developed. You cannot solve this problem with 10 people and a few million dollars. You probably need many interrelated teams, all of whom are tasked with developing separate components of the tool. (with prizes if they succeed, and multiple teams working on each component using a different method to minimize risks)

No one is going to magically publish a working paper in Nature tomorrow where they have succeeded in such an effort overnight. Yet this is basically what the current system expects: somehow someone is going to cure cancer tomorrow without an actual integrated plan, the billions of dollars in resources needed, or a sound game plan that minimizes risk and rewards individual successes.

Professors I have pointed this out to say that no central agency can possibly "know" what a successful cancer cure might look like. The current system just funds anyone who wants to try anything, assuming they pass review and have the right credentials. Thus a large variety of things are tried. I don't see it. I don't think there is a valid solution to cancer that can be found by a small team just trying things with a million or two dollars' worth of equipment, supplies, and personnel.

Growing replacement organs is a similar endeavor. Small teams have managed to show that it is viable - but they cannot actually solve the serious problems because they lack the resources to go about it in a systematic way that is likely to succeed. While Wake Forest demonstrated years ago that it can make a small heart that beats, there isn't a huge team of thousands systematically attacking each element of the problem that has to be solved to make full-scale replacement hearts.

One final note: this ultimately points to a gross misapplication of resources. Our society spends billions to kill a few Muslims who MIGHT kill some people violently. It spends billions to incarcerate millions of people for life who individually MIGHT commit some murders. It spends billions on nursing homes and end-of-life care to statistically extend the lives of millions by a matter of months.

Yet real solutions to problems that kill nearly everyone, for certain, are not worth the money to solve them in a systematic way.

The reason for this is a lack of rationality. Human beings emotionally fear extremely rare causes of death far more than extremely likely, "natural" ones. They fear the idea of a few disgruntled Muslims or a criminal who was let out of prison murdering them far more than they fear their heart suddenly failing or their tissues developing a tumor when they are old.

Comment by GeraldMonroe on [Link] The real end of science · 2012-10-04T00:21:58.312Z · LW · GW

Won't reality eventually sort this out?

Essentially what is being said here is that "the scientific establishment in the West (mostly the USA) is becoming dysfunctional. If the current trend continues, enough science will be wrong or fraudulent that no forward progress is made at all."

However, science isn't just an abstract idea with intangible moral rules. If scientists fake results on a large scale, they will cease to discover useful new ideas or create anything that is objectively better than what Western society currently has. This will have consequences as governed by the one entity that can't be befuddled - the actual universe. Western machinery and medicine will become relatively less efficient compared to competitors' because it stops being improved. Over a long enough period of time, this will be deleterious to the entire civilization.

As long as there are competing civilizations (you can divide the world up into other sub-groups, although of course there are many inter-relationships) such as Eastern Europe, Asia, South America, etc., then over the long term (centuries) this will simply give these competing civilizations an opportunity to surge ahead. Overall global progress does not stop.

A broader theme here: I'm saying that from the very dawn of rational thought and the scientific method, combined with a method to record the progress made and prevent information loss (the printing press), the overall trajectory is unstoppable. Various human groups competing among each other will continue to increase their control over the environment, ultimately resulting in the development of tools that allow true mastery (AIs, nano-machinery that can self-replicate and quickly pattern everything, etc.).

It's sort of a popular idea to talk about ways this could somehow not happen, but short of a large-scale erasure of existing information (e.g., nuclear weapons or very large meteors erasing the current "state" of the global information environment), it's hard to see how anything else could ultimately happen.

Comment by GeraldMonroe on Female Test Subject - Convince Me To Get Cryo · 2012-10-03T21:30:53.085Z · LW · GW
  1. How, precisely, would this happen? We aren't writing sci-fi here. There are dozens of countries on this planet with world-class R&D occurring each and every day. The key technology needed to revive frozen brains is the development of nanoscale machine tools that are versatile enough to aid in manufacturing more copies of themselves. This sort of technology would change many industries, and in the short term would give the developers of the tech (assuming they had some means of keeping control of it) enormous economic and military advantages.

    a. Economic - these tools would be cheap in mass quantities because they can be used to make themselves. Nearly any manufactured good made today could probably be duplicated, and it would not require the elaborate and complex manufacturing chains it takes today. Also, the products would be very close to atomically perfect, so there would be little need for quality control.

    b. Military - high-end weapons (jets, drones, tanks, etc.) are some of the most expensive products to manufacture, for a myriad of reasons. Nanoscale printers would drop the price of each additional copy of a weapon to rock bottom.

A civilization armed with these tools of course would not be worried about resources or environmental damage.
a. There are a lot of resources that are not feasible to extract today because we can't manufacture mining robots at rock-bottom prices and send them after these low-yield deposits.

b. We suffer from a lack of energy because solar panels and high-end batteries have high manufacturing costs (the raw materials are mostly very cheap). The same goes for nuclear reactors.

c. We cannot reverse environmental damage because we cannot afford to manufacture square miles' worth of machinery to do it (mostly CO2 and other greenhouse-gas capture plants, but also robots to clean up various messes).
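Rough, illustrative arithmetic (all parameters assumed) for why self-replication drives marginal costs down toward the feedstock price:

```python
# Illustrative sketch: each machine can build one copy of itself per
# replication cycle, so the fleet doubles each cycle. The cycle time
# and feedstock cost are invented placeholders.

SEED_MACHINES = 1
DOUBLING_TIME_DAYS = 30             # assumed replication cycle
FEEDSTOCK_COST_PER_MACHINE = 1_000  # assumed raw-material cost, USD

for cycle in (0, 5, 10, 20):
    fleet = SEED_MACHINES * 2 ** cycle
    print(f"Day {cycle * DOUBLING_TIME_DAYS:4d}: {fleet:>9,} machines, "
          f"marginal cost/machine ~ ${FEEDSTOCK_COST_PER_MACHINE:,}")

# After 20 doublings (~20 months) a single seed yields over a million
# machines; the dominant per-unit cost is feedstock, not labor or
# capital equipment.
```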

I say we revive people as soon as possible as computer simulations to give us a form of friendly AI that we can more or less trust. These people could be emulated at high speed and duplicated many times and used to counter the other risks.

I agree with you entirely on the irreversible brain damage. I think this problem can be fixed with systematic efforts (and a legal workaround or a change to the laws), but this requires resources that Alcor and CI lack at the moment.

Comment by GeraldMonroe on [Link] Nobel laureate challenges psychologists to clean up their act · 2012-10-03T21:17:12.527Z · LW · GW

How would a registry of the trials work?

When I heard a lecture on this subject (there is pretty damning statistical evidence that drug trials are always slanted towards the company paying for the trials) the only viable proposal I heard discussed was to have the testing completely performed and controlled by an unbiased third party. (probably the government)

Comment by GeraldMonroe on Female Test Subject - Convince Me To Get Cryo · 2012-10-01T23:45:14.936Z · LW · GW

Worse: a sensible system would in fact not ONLY give you a "robot body made of titanium" but would also maintain multiple backup copies in vaults (for security reasons, not all of the physical vault locations would be known to you, or anyone) and would use systems to constantly stream updated memory-state data to these backup records (stored as incremental backups, of course).
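For what it's worth, incremental backup is ordinary engineering today; a minimal sketch of the idea, with invented memory-block names:

```python
# Minimal sketch of incremental state backup: store a full snapshot
# once, then only the entries that changed since the last snapshot.
# Keys and values here are invented placeholders.

def incremental_backup(previous: dict, current: dict) -> dict:
    """Return only the entries that changed or were added since `previous`."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

snapshot = {"memory_block_0": "a1f3", "memory_block_1": "9c2e"}
later    = {"memory_block_0": "a1f3", "memory_block_1": "77d0", "memory_block_2": "0b1c"}

delta = incremental_backup(snapshot, later)
print(delta)  # {'memory_block_1': '77d0', 'memory_block_2': '0b1c'}

# Each vault only needs the initial snapshot plus a stream of small
# deltas to reconstruct the latest state.
```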

More than likely, the outcome for "successfully" committing suicide would be to wake up again and face some form of negative consequences for your actions. Suicide could actually be prosecuted as a crime.

Comment by GeraldMonroe on Female Test Subject - Convince Me To Get Cryo · 2012-10-01T23:34:54.151Z · LW · GW

This depends heavily on assumptions. Consider this: the oldest cryonics patients have survived more than 30 years. The loss per decade for reasonably well-funded cryonics organizations is currently zero.

If you check a chart of causes of death, the overwhelming majority of causes are ones where a cryonics team could be there.

You would have to choose a legal method of suicide in some of these cases, however (like voluntarily dehydrating yourself to the point of death), or your brain would deteriorate from progressive disease to the point of probably being non-viable for a future revival.

As for long-term risks: ultimately these depend on your perception of the risks to human civilization and of the chance of ultimately developing a form of nanotechnology that could scan your frozen brain and create an emulation at minimal cost. I personally don't think there are many probable causes that could make civilization fail, and I consider the development of the nanotechnology almost certain. There is no future world I can imagine in which a commercial or governmental entity would not eventually have extreme motivation to develop the technology, given the incredible advantages it would grant.

This is my personal bias, perhaps, but let's look at this a bit more rationally.

a. How could a civilization-ending event actually happen? Are nuclear escalations the most likely outcome, or are exchanges ending with a city or two nuked more probable?

b. What could stop a civilization from developing molecular tools with self-replication? Living cells are an existence proof that the tools are possible, and developing them would give the entity that possessed them incredible power and wealth.

c. Cryonics organizations have already survived 30 years. Maybe they need to survive 90 or 120 more. They have more money and resources today, decreasing the probability of failure with each year. What is the chance that they will not be able to survive the rest of the needed time? In another 20 years, they might have hardened facilities in the desert with backup power and liquid nitrogen production.

And so on. This is a complicated question, but I have an educated hunch that the risks of failure for cryonics are lower than many of the estimates might show. I suspect that many of the estimates are made by people who suffer from biases towards excessive skepticism, and/or are motivated to find a way to not spend hundreds of thousands of dollars, preferring shorter term gains.
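One way to structure that hunch explicitly; every probability below is a placeholder chosen to show the shape of the estimate, not real data:

```python
# Sketch: overall revival probability as a product of independent
# factors. All inputs are assumed illustrative values.

P_SURVIVE_DECADE = 0.97   # assumed chance the organization survives any given decade
DECADES_REMAINING = 12    # e.g., 120 more years until revival tech exists
P_REVIVAL_TECH = 0.5      # assumed chance the nanotechnology is ever developed
P_PRESERVATION_OK = 0.5   # assumed chance today's freeze preserved enough structure

p_org = P_SURVIVE_DECADE ** DECADES_REMAINING
p_total = p_org * P_REVIVAL_TECH * P_PRESERVATION_OK
print(f"P(org survives): {p_org:.2f}, P(revival overall): {p_total:.2f}")

# ~0.69 and ~0.17 with these inputs. The point is the structure:
# skeptics and optimists mostly disagree about the inputs, not the
# arithmetic.
```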

Comment by GeraldMonroe on Cryonics: Can I Take Door No. 3? · 2012-09-06T01:54:02.144Z · LW · GW

We can and we can't. Here's an 11-year-old article where rats successfully regained function: http://www.jneurosci.org/content/21/23/9334.abstract

That's just an example. I think that if society were far more tolerant of risks, and there was more funding, and the teams working on the problem were organized and led properly, then human patient successes would be seen in the near future.

Comment by GeraldMonroe on Cryonics: Can I Take Door No. 3? · 2012-09-06T01:39:48.606Z · LW · GW

There are fractures like that in existing patients. Note that my hypothesis is that some of the cells would still be viable. I did not say any neurons were viable. I'm merely saying that cryonics is provably better than dehydration or plastination because of this viability factor.

Despite this, IF patients frozen using current techniques can ever be revived, the techniques used will more than likely require a destructive scan of their brains, followed by loading into some kind of hardware or software emulator.

Trying to think of what this might subjectively be like is hard to view rationally. I don't know if a good emulation or replica is the same person or not : you can make solid arguments either way.

Extremely advanced, better versions of cryonics might eventually reach the point of actually preserving the brain in a manner where reheating brings it back to life and a transplant is possible. However, a destructive scan and upload might still remain the safer choice.

Regardless of how the revivals were actually done in practice, if reproducible and public demonstrations of viability were ever performed, I would expect cryonics to gain widespread prevalence and mainstream acceptance and to become a standard medical procedure.

Comment by GeraldMonroe on Cryonics: Can I Take Door No. 3? · 2012-09-06T01:23:59.155Z · LW · GW

Yes, but it doesn't fracture everywhere. If you rewarmed a tissue that was cryogenically frozen, some cells would probably still be viable. Hence my hypothesis that if you took samples from a current patient where things were done right, some of the cells would still be alive.

A related article: http://www.nature.com/ncomms/journal/v3/n6/full/ncomms1890.html?WT.mc_id=FBK_NCOMMS

Comment by GeraldMonroe on Cryonics: Can I Take Door No. 3? · 2012-09-05T23:45:35.468Z · LW · GW

Why this proposal is a bad one:

Cryonics is based upon a working technology, cryogenic freezing of living tissues.

The latest cryonics techniques use M22, an ice-crystal-growth inhibitor that has been used to preserve small organs and successfully thaw them. More than likely, if you were to rewarm some of the tissues from a cryonics patient frozen today, some of the original cells would still be alive and viable. I don't know if this particular experiment has been performed, however: there is a reason why cryonics has a bad reputation for pseudoscience.

If you dehydrate a mammalian cell and then add water again, it's still dead. If you freeze and rewarm it, heating and cooling rapidly enough to prevent ice-crystal growth, not only is the cell alive, it can be more viable than fresh cells collected later: cryogenically frozen sperm or ova from a young person can be more viable than the same material obtained from the same person later in life.

There are further improvements to cryonics that have not been made because it lacks the funding and resources it deserves.

Better cryoprotectants are more than likely possible. Better techniques are almost certainly achievable. The method used to preserve a viable rabbit kidney relied on extremely rapid cooling; cooling the brain more rapidly might yield better results. There are potentially revolutionary improvements possible.

A Japanese company claims that oscillating magnetic fields prevent orderly crystal growth by water. They have experimental results and success in preserving human teeth this way. If this method is viable, cryonics could use very large magnets on the human brain and potentially get perfect preservations with demonstrable proof of viability. http://www.teethbank.jp/ http://singularityhub.com/2011/01/23/food-freezing-technology-preserves-human-teeth-organs-next/

The first source, I think, is the better one: as far as a Google search will tell me, this is the only existing human tooth bank in the world. If the teeth weren't viable, it seems unlikely that credible dentists would be attempting the transplants and succeeding. (I think the technology actually being used is a much better indication of legitimacy than papers or Singularity Hub articles.)

Comment by GeraldMonroe on Cryonics: Can I Take Door No. 3? · 2012-09-05T23:34:51.536Z · LW · GW

The original dichotomy is correct if you think about the consequences of cryonic success.

IF and only if cryonics succeeds, the world will have developed the technology to restore you from a cracked, solid mass of brain tissue. (The liquid nitrogen will fracture your brain because it cools it below the glass transition point.)

Also, as sort of a secondary thing, it has figured out a way to give you a new body or a quality substitute. (it's secondary because growing a new body is technically possible, if unethical, today)

Anyway, this technical capacity means that almost certainly the technology would exist to make backup copies of you. If this is possible, it would also be possible to keep you alive for billions of years, such a huge multiple of your original lifespan that it could be approximated as infinite.

You might consider these technical capabilities absurd, and lower that 5% chance to some vanishingly small number, like many cryonics skeptics do. However, one conclusion naturally falls from the other.
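Made explicit with assumed numbers, the inference is just a conditional probability:

```python
# Sketch of "one conclusion falls from the other": whatever probability
# you assign to revival working, nearly the same probability carries
# over to an effectively unbounded lifespan afterward. Both inputs are
# assumed values for illustration.

p_revival = 0.05               # a skeptic's chance that cryonics revival ever works
p_backup_given_revival = 0.99  # assumed: revival tech almost surely implies backup tech

p_near_immortality = p_revival * p_backup_given_revival
print(p_near_immortality)  # ~0.0495
```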

Comment by GeraldMonroe on Cryonics: Can I Take Door No. 3? · 2012-09-05T20:56:53.259Z · LW · GW

The overwhelming majority of the human population disagrees with you. Yes, rationally, we know with great certainty that there is no afterlife (well, at least we know it with almost the same certainty with which we know there is no Flying Spaghetti Monster; the probability of either is infinitesimally small).

But we choose to accept unverified statements from our elders regarding the afterlife rather than dwell on the facts of death and fail to procreate.

Comment by GeraldMonroe on Group rationality diary, 9/3/12 · 2012-09-05T04:59:47.984Z · LW · GW

Once dead it doesn't matter what happened or didn't happen. This thought has been disturbing me for around 3 years now.

The context was this: it was the first week of medical school. We went to the anatomy lab and looked at the cadavers. Practically from day 1 we had to do dissections that felt incredibly wrong and disturbing (chopping deep into a person's back). So, while in the lab with the corpses, seeing everyone else around me cheerfully talking about various things, I could not understand everyone else's irrational point of view. THIS was what mattered... who cares what our lives are like if we end up as stiff, cold corpses who remember nothing at all, our brains rapidly rotting to mush.

I think the worst part was going to a sheet of paper on the wall that listed the tag number of each corpse alongside the age, cause of death, and prior occupation. By cross-referencing the two, I realized that death kills everyone equally, without regard to occupation or age, and again, nothing matters after that.

Actually observing these horrors of existence directly changed my perspective radically. Before, I was happily willing to lie to myself and pretend there had to be some sort of afterlife or the world wouldn't be very fair. After feeling the truth in my own gloved hands (and smelling the stink of decay), the objective, rational truth became apparent.

All my grand plans at that moment to become a great surgeon or something seemed meaningless... what difference did it make? Any patient I "saved" would only gain an extra few months to years before they died of something else, and on the day they died, any effort I had made (and dollars that were spent) would be meaningless. They "might as well" have died earlier.

Unfortunately, I saw firsthand the world of biology research. Progress is so glacially slow that I would be totally unsurprised if there were no effective rejuvenative treatments at all available when it comes my time to die, ~60 years from today. The reasons progress is so slow are myriad, but the key one is that no one is willing to take risks, and so new treatments are almost never attempted.

That's why I focus on cryonics: the basic concept of stopping the clock on degrading biological tissues seems like a winner.