Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T21:51:42.450Z · LW · GW

Yes. If we have an AGI, and someone sets out to teach it how to lie, I will get worried.

I am not worried about an AGI developing such an ability spontaneously.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T21:50:01.155Z · LW · GW

Of the infinite number of possible paths, the fraction of paths we are adding up here is still vanishingly close to zero.

Perhaps I can attempt another rephrasing of the problem: what is the mechanism that would make an AI automatically seek these paths out, or make them any more likely than the infinite number of other paths?

I.e., if we develop an AI that is not specifically designed for the purpose of destroying life on Earth, how would that AI arrive at a desire to destroy life on Earth, and by which mechanism would it gain the ability to accomplish that goal?

This entire problem seems to assume that an AI will want to "get free," or that its primary mission will somehow inevitably lead to a desire to get rid of us (as opposed to a desire to, say, send a signal consisting of 0101101 repeated an infinite number of times in the direction of Zeta Draconis, or any other possible random desire). And that this AI will be able to acquire the abilities and tools required to execute such a desire. Every time I look at such scenarios, there are abilities that are simply assumed to exist or appear on their own (such as a theory of mind), which to the best of my understanding are not necessary or even likely products of computation.

In the final rephrasing of the problem: if we can make an AGI, we can probably design an AGI for the specific purpose of developing a theory of mind. That AGI would then be capable of deducing things like deception, or the need for deception. But the point is: unless we intentionally do this, it isn't going to happen. A self-optimizing intelligence doesn't self-optimize in the direction of having a theory of mind, understanding deception, or anything similar. It could, randomly, but it could equally do any other random thing from the infinite set of possible random things.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T21:42:21.276Z · LW · GW

You are correct. I did not phrase my original posts carefully.

I hope that my further comments have made my position more clear?

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T21:40:19.972Z · LW · GW

We are trapped in an endless chain here. The computer would still somehow have to deduce that the Wikipedia entry describing the One Ring is real, while the One Ring itself is not.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T21:38:27.466Z · LW · GW

My apologies, but this is something completely different.

The scenario takes human beings - who have a desire to escape the box, and who possess the theory of mind that allows them to conceive of notions such as "what are the aliens thinking" or "deception" - and puts them in the role of the AI.

What I'm looking for is a plausible mechanism by which an AI might spontaneously develop such abilities. How (and why) would an AI develop a desire to escape from the box? How (and why) would an AI develop a theory of mind? Absent a theory of mind, how would it ever be able to manipulate humans?

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T20:20:46.168Z · LW · GW

Yet again: the ability to discern which parts of fiction accurately reflect human psychology.

An AI searches the internet. It finds a fictional account about early warning systems causing nuclear war. It finds discussions about this topic. It finds a fictional account about Frodo taking the Ring to Mount Doom. It finds discussions about this topic. Why does this AI dedicate its next 10^15 cycles to determining how to mess with the early warning systems, and not to determining how to create the One Ring to Rule Them All?

(Plus other problems mentioned in the other comments.)

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T20:17:05.756Z · LW · GW

Doesn't work.

This requires the AI to already have the ability to comprehend what manipulation is; to develop a manipulation strategy of any kind (even one that will succeed 0.01% of the time); to hide its true intent; to understand that not hiding its true intent would be bad; and to discern, from the get-go, which issues are low-salience and which are high-salience for humans. And many other things besides, but this is already quite a list.

None of these abilities automatically "fall out" from an intelligent system either.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T20:14:39.738Z · LW · GW

This seems like an accurate and highly relevant point. Searching a solution space faster doesn't mean one can find a better solution if it isn't there.

Or if your search algorithm never accesses the relevant parts of the search space. A quantitative advantage in one system does not translate into a comparable advantage in a qualitatively different system.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T20:12:45.047Z · LW · GW

That is my point: it doesn't get to find out about general human behavior, not even from the Internet. It lacks the systems to contextualize human interactions, which have nothing to do with general intelligence.

Take a hugely mathematically capable autistic kid. Give him access to the internet. Will he develop the ability to recognize human interactions, understand human priorities, and so on, to a degree sufficient to recognize that hacking an early warning system is the way to go?

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T20:09:30.955Z · LW · GW

Only if it has the skills required to analyze and contextualize human interactions. Otherwise, the Internet is a whole lot of gibberish.

Again, these skills do not automatically fall out of any intelligent system.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T19:00:57.228Z · LW · GW

I'm not so much interested in the exact mechanism of how humans would be convinced to go to war as in even an approximate mechanism by which an AI would become good at convincing humans to do anything.

The ability to communicate a desire and convince people to take a particular course of action is not something that automatically "falls out" of an intelligent system. You need a theory of mind, and an understanding of what to say, when to say it, and how to present the information. There are hundreds of kids on the autistic spectrum who could trounce both of us in math, but are completely unable to communicate an idea.

For an AI to develop these skills, it would somehow have to gain access to information on how to communicate with humans; it would have to develop the concept of deception and a theory of mind; and it would have to establish methods of communication that would allow it to trick people into launching nukes. Furthermore, it would have to do all of this without the trial communications and experimentation that would give away its goal.

Maybe I'm missing something, but I don't see a straightforward way something like that could happen. And I would like to see even an outline of a mechanism for such an event.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T18:00:04.795Z · LW · GW

I'm vaguely familiar with the models you mention. Correct me if I'm wrong, but don't they have a final stopping point, which we are actually projected to reach in ten to twenty years? At a certain point, further miniaturization becomes unfeasible, and the growth of computational power slows to a crawl. This has been put forward as one of the main reasons for research into optronics, spintronics, etc.

We do NOT have sufficient basic information in those other areas to develop processors based on simulation alone. Much more practical work is necessary.

As for point 2, can you provide a likely mechanism by which a FOOMing AI could detonate a large number of high-yield thermonuclear weapons? Just saying "human servitors would do it" is not enough. How would the AI convince the human servitors to do this? How would it get access to data on how to manipulate humans, and how would it be able to develop human manipulation techniques without feedback trials (which would give away its intention)?

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T17:55:22.780Z · LW · GW

By all means, continue. It's an interesting topic to think about.

The problem with "atoms up" simulation is the amount of computational power it requires. Think about the difference in complexity when calculating a three-body problem as compared to a two-body problem.

Then take into account the current protein folding algorithms. People have been trying to calculate the folding of single protein molecules (and fairly short ones at that) by taking into account the main physical forces at play. In order to do this in a reasonable amount of time, great shortcuts have to be taken: instead of integrating forces, changes are treated as stepwise; forces beneath certain thresholds are ignored; and so on. This means that a result will only ever have a certain probability of being right.
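To make those shortcuts concrete, here is a minimal, purely illustrative sketch of stepwise (Euler) integration with a force-threshold cutoff. This is not any real folding package; the toy potential, threshold, and step size are all invented for illustration:

```python
def pairwise_force(r, eps=1.0, sigma=1.0):
    """Toy Lennard-Jones-style force between two particles a distance r apart
    (positive = repulsive, negative = attractive). Illustrative only."""
    return 24 * eps * (2 * (sigma / r) ** 13 - (sigma / r) ** 7) / sigma

def step(positions, velocities, dt=0.01, cutoff=1e-3):
    """One stepwise (Euler) update of a 1-D particle system.
    Forces smaller than `cutoff` are simply ignored -- the kind of
    threshold shortcut described above, which buys speed at the cost
    of a result that is only probably right."""
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            r = positions[j] - positions[i]
            f = pairwise_force(abs(r)) * (1 if r > 0 else -1)
            if abs(f) < cutoff:   # shortcut: sub-threshold forces dropped
                continue
            forces[i] -= f        # equal and opposite contributions
            forces[j] += f
    velocities = [v + f * dt for v, f in zip(velocities, forces)]
    positions = [x + v * dt for x, v in zip(positions, velocities)]
    return positions, velocities
```

Each approximation here (the discrete time step, the dropped sub-threshold forces) injects a little error per step, which is why a long simulation built on such shortcuts only has a certain probability of being right.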

A self-replicating nanomachine requires minimal motors, manipulators, and assemblers; while still tiny, it would be a molecular complex measured in megadaltons. To precisely simulate the creation of such a machine, an AI a trillion times faster than all the computers in the world combined would still require decades, if not centuries, of processing time. And that is, again, assuming we know all the forces involved perfectly, which we don't (how will microfluidic effects affect a particular nanomachine that enters the human bloodstream, for example?).

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T05:26:09.357Z · LW · GW

See my answer to dlthomas.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T05:25:24.673Z · LW · GW

Scaling it up is absolutely dependent on currently nonexistent information. This is not my area, but a lot of my work revolves around the control of kinesin and dynein (molecular motors that carry cargoes along microtubule tracks), and the problems are often similar in nature.

Essentially, we can make small pieces. Putting them together is an entirely different thing. But let's make this more general.

The process of discovery has, so far throughout history, followed a very irregular path:

  1. There is a general idea.
  2. Some progress is made.
  3. Progress runs into an unpredicted and previously unknown obstacle, which is uncovered by experimentation.
  4. Work is done to overcome this obstacle.
  5. Go to 2, for many cycles, until a goal is achieved - which may or may not be close to the original idea.

I am not the one making positive claims here. All I'm saying is that what has happened before is likely to happen again. A team of human researchers or an AGI can use currently available information to build something (anything, nanoscale or macroscale) up to the point to which it has already been built. Pushing beyond that point almost invariably runs into previously unforeseen problems. Being unforeseen, these problems were not part of any models or simulations; they have to be accounted for independently.

The positive claim is that an AI will have a magic-like power to somehow avoid this - that it will be able to simulate even the steps that haven't been attempted yet so perfectly that all possible problems will be overcome at the simulation stage. I find that to be unlikely.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T02:56:21.198Z · LW · GW

With absolute certainty, I don't. If absolute certainty is what you are talking about, then this discussion has nothing to do with science.

If you aren't talking about absolutes, then you can make your own estimate of the likelihood that an AI can somehow derive correct conclusions from incomplete data (and then correct second-order conclusions from those first conclusions, and third-order, and so on). And our current data is woefully incomplete, and many of our basic measurements are imprecise.

In other words, your criticism here seems to boil down to saying "I believe that an AI can take an incomplete dataset and, by using some AI-magic we cannot conceive of, infer how to END THE WORLD."

Color me unimpressed.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T02:24:28.623Z · LW · GW

Yes, but it can't get to nanotechnology without a whole lot of experimentation. It can't simply deduce how to create nanorobots; it would have to figure that out by testing and experimentation. Both steps are limited in speed, far more so than sheer computation.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T01:49:16.166Z · LW · GW

I'm not talking about limited sensory data here (although that would fall under point 2). The issue is much broader:

  • We humans have limited data on how the universe works
  • Only a limited subset of that limited data is available to any intelligence, real or artificial

Say you make a FOOM-ing AI that has decided to make all humans' dopaminergic systems work in a particular, "better" way. This AI would have to figure out how to do so from the available data on the dopaminergic system. It could analyze that data millions of times more effectively than any human. It could integrate many seemingly irrelevant details.

But in the end, it simply would not have enough information to design a system that would let it reach its objective. It could probably suggest some awesome and to-the-point experiments, but those experiments would take time to perform (as they are limited by the growth and development time of humans, and by the experimental methodologies involved).

This process, in my mind, limits the FOOM-ing speed to far below what seems to be implied by the SI.

This also limits bootstrapping speed. Say an AI develops a much better substrate for itself, and has access to the technology to create such a substrate. At best, this substrate will be a bit better and faster than anything humanity currently has. The AI does not have access to the precise data about the basic laws of the universe that it needs to develop even better substrates, for the simple reason that nobody has done the experiments or made precise enough measurements. The AI can design such experiments, but they will take real time (not computational time) to perform.

Even if we imagine an AI that can calculate anything from the first principles, it is limited by the precision of our knowledge of those first principles. Once it hits upon those limitations, it would have to experimentally produce new rounds of data.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-17T01:11:41.134Z · LW · GW

Hm. I must be missing something. No, I haven't read all the sequences in detail, so if these are silly, basic, questions - please just point me to the specific articles that answer them.

You have an Oracle AI that is, say, a trillionfold better at taking existing data and producing inferences.

1) This Oracle AI produces inferences. It still needs to test those inferences (i.e. perform experiments) and get data that allow the next inferential cycle to commence. Without experimental feedback, the inferential chain will quickly either expand into an infinity of possibilities (i.e. beyond anything that any physically possible intelligence can consider), or it will deviate from reality. The general intelligence is only as good as the data its inferences are based upon.

Experiments take time, data analysis takes time. No matter how efficient the inferential step may become, this puts an absolute limit to the speed of growth in capability to actually change things.

2) The Oracle AI that "goes FOOM" while confined to a server cloud would somehow have to create servitors capable of acting out its desires in the material world. Otherwise, you have a very angry and very impotent AI. If you increase a person's intelligence a trillionfold and then enclose them in a sealed concrete cell, they will never get out; their intelligence can calculate all possible escape solutions, but none will actually work.

Do you have a plausible scenario for how a "FOOM"-ing AI could - no matter how intelligent - minimize the oxygen content of our planet's atmosphere, or any such scenario? After all, it's not as if we have any fully-automated nanobot production factories that could be hijacked.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-14T18:38:46.268Z · LW · GW
  1. Yes, if you can avoid replacing the solvent. But how do you avoid that and still avoid the creation of ice crystals? Actually, now that I think of it, there is a possible solution: expressing icefish proteins within neuronal cells. Of course, who knows what they would do to neuronal physiology, and you can't really express them after death...

  2. I'm not sure that less toxic cryoprotectants are really feasible. But yes, that would be a good step forward.

  3. I actually think it's better to keep them together. Trying theoretical approaches as quickly as possible, and having an applicable goal ahead at all times, are both good for the speed of progress. There is a reason science moves so much faster during times of conflict, for example.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-14T18:34:15.881Z · LW · GW

All good reason to keep working on it.

The questions you ask are very complex. The short answers (and then I'm really leaving the question at that point until a longer article is ready):

  • Rehydration involves pulling off the stabilizer molecules (glycerol, trehalose) and replacing them dynamically with water. This can induce folding changes, some of which are irreversible. This is not theoretical: many biochemists have to deal with this in their daily work.
  • Membrane distortions also distort relative position of proteins within that membrane (and the structure of synaptic scaffold, a complex protein structure that underlies the synaptic membrane). Regenerating the membrane and returning it to the original shape and position doesn't necessarily return membrane-bound molecules to their original position.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-14T18:28:12.366Z · LW · GW

I don't think any intelligence can read information that is no longer there. So, no, I don't think it will help.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-14T18:25:06.895Z · LW · GW

In order, and briefly:

  • In the Milwaukee protocol, you are giving people ketamine and a benzo to silence brain activity. Ketamine inhibits NMDA channels - which means that presynaptic neurons can still fire, but the signal won't be fully received. Benzos make GABA receptors more sensitive to GABA - so they don't do anything unless GABAergic neurons are still firing normally.

In essence, this tunes down excitatory signals, while tuning up the inhibitory signals. It doesn't actually stop either, and it certainly doesn't interfere with the signalling processes within the cell.

  • You are mixing three different processes here. The first is cooling down. Cooling down is not the same as freezing. There are examples of people who went into deep hypothermia and were revived even after not breathing for tens of minutes, with little to no brain damage. If the plan were to cool down human brains and then bring them back within a few hours (or maybe even days), I would put that in the "possible" category.

Second is freezing. Some human neurons could survive freezing, if properly cultured. Many C. elegans neurons do not survive very deep freezing. It depends on the type of neuron and its processes. Many of your ganglionic neurons might survive freezing. Large spiny neurons, or spindle cells? Completely different story.

The third is freezing plus cryoprotectants. You need cryoprotectants, otherwise you destroy most cells, and especially most fine structures. But then you get membrane distortions and solvent replacement, and everything I've been talking about in other posts.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T22:15:39.944Z · LW · GW

We are deep into guessing territory here, but I would think that the coarser option (magnesium, phosphorylation states, other modifications, and the presence and binding status of other cofactors, especially GTPases) would be sufficient. Certainly for a simulated upload.

No, I don't work with Ed. I don't use optogenetics in my work, although I plan to in the not-too-distant future.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T22:14:09.753Z · LW · GW

All of it! Coma is not a state where temporal resolution is lost!

You can silence or deactivate neurons in thousands of ways, by altering one or more signaling pathways within the cells, or by blocking a certain channel. The signaling slows down, but it doesn't stop. Stop it, and you kill the cell within a few minutes; and even if you restart things, signaling no longer works the way it did before.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T22:10:28.203Z · LW · GW

I don't believe so. Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind. I don't think that would ever be readable into anything but a pale copy of the original person, no matter what kind of technological advance occurs (information simply isn't there to be read, regardless of how advanced the reader may be).

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T22:08:30.815Z · LW · GW

This was supposed to be a quick side-comment. I have now promised to eventually write a longer text on the subject, and I will do so - after the current "bundle" of texts I'm writing is finished. Be patient - it may be a year or so. I am not prepared to discuss it at the level approaching a scientific paper; not yet.

Keep in mind two things. I am in favor of life extension, and I do not want to discourage cryonic research (we never know what's possible, and research should go on).

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T22:04:11.160Z · LW · GW

Perhaps a better definition would help: I'm thinking about active zones within a synapse. You may have one "synapse" which has two or more active zones of differing sizes (the classic model, Drosophila NMJ, has many active zones within a synapse). The unit of integration (the unit you need to understand) is not always the synapse, but is often the active zone itself.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T22:01:07.551Z · LW · GW

In general, uploading a C. elegans, i.e. creating an abstract artificial worm? Entirely doable, and it will probably be done in the not-too-distant future.

Uploading a particular C. elegans, so that the simulation reflects the learning and experiences of that particular animal? Orders of magnitude more difficult. It might be possible, if we have really good technology and are looking at the living animal.

Uploading a frozen C. elegans, using current technology? Again, you might be able to create an abstract worm, with all the instinctive behaviors and maybe a few particularly strong learned ones. But any fine detail is irretrievably lost. You lose the specific "personality" of the specific worm you are trying to upload.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T21:55:50.155Z · LW · GW

Local ion channel density (i.e. active zones), plus the modification status of all those ion channels, plus the signalling status of all the presynaptic and postsynaptic modifiers (including NO and endocannabinoids).

You see, knowing the strength of all the synapses of a particular neuron won't tell you how that neuron will react to inputs. You also need temporal resolution: when a signal hits synapse #3489, what will be the exact state of that synapse? The state determines how and when the signal will be passed on. And when the potential from that input travels down the dendritic tree and passes by synapse #9871, which is receiving an input at that precise moment - well, how is it going to affect synapse #9871, and what is the state of synapse #9871 at that precise moment?

Depending on the answers, stimulation of #3489 followed very soon after by stimulation of #9871 might produce an action potential - or it might not. And this is still oversimplifying things, but I hope you get the general idea.
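As a purely illustrative toy model (the numbers and the single "state" scalar are invented; a real synapse involves the whole list of factors above), the point about momentary state can be sketched like this:

```python
def psp(weight, state):
    """Toy postsynaptic potential from one synapse. `state` (0..1) stands in
    for everything listed above: channel modification status, cofactor
    binding, recent signalling history, and so on."""
    return weight * state

def integrate(inputs, threshold=1.0):
    """Sum near-simultaneous inputs; 'fire' only if the threshold is crossed.
    `inputs` is a list of (synaptic_weight, momentary_state) pairs."""
    return sum(psp(w, s) for w, s in inputs) >= threshold

# The same two synapses, with the same fixed "strengths" (0.7 and 0.6):
# whether a spike results depends entirely on their momentary states.
spikes_when_primed   = integrate([(0.7, 0.9), (0.6, 0.8)])  # states high
spikes_when_depleted = integrate([(0.7, 0.3), (0.6, 0.2)])  # states low
```

With high states the summed potential (1.11) crosses threshold; with low states (0.33) it does not - the same anatomy, two opposite outcomes, which is why a static map of synaptic strengths is not enough.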

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T21:48:36.316Z · LW · GW

I agree with you on both points. And also about the error bars - I don't think I can "prove" cryonics to be pointless.

But one has to make decisions based on something. I would rather build a school in Africa than have my body frozen (even though, to reiterate, I'm all for living longer, and I do not believe that death has inherent value).

The biggest obstacles are membrane distortions, solvent replacement, and the interruption of signalling events. The mind is not so much written into the structure of the brain as into the structure plus its dynamic activity. In a sense, in order to reconstruct the mind within a frozen brain, you would have to already know what that mind looks like when it's active. You would then need molecular tools that appear impossible on fundamental physical principles (the uncertainty principle, molecular noise, molecular drift...).

My view of cryonics is that it is akin to the mercuric antibiotics of the late 19th century. They didn't really work, but they were the only game in town. So perhaps, with further research, a new generation of mercuric substances would be developed that would solve all the problems, right? In reality, a much better solution was discovered. I believe this will also be the case with life extension - cryonics will fade away, and we'll move on to a combination of stem cell treatments, technologies to eliminate certain accumulated toxins (primarily insoluble protein aggregates and lipid peroxidation byproducts), and methods to eliminate or constrain cellular senescence (I'm actually willing to bet ~$5 that these will be the first treatments to hit the market).

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T21:37:06.157Z · LW · GW

I'll eventually organize my thoughts in something worthy of a post. Until then, this has already gone into way more detail than I intended. Thus, briefly:

The damage that is occurring: distortion of membranes, denaturation of proteins (very likely), and disruption of signalling pathways. Just changing the exact localization of Ca microdomains within a synapse can wreak havoc; replacing the liquid completely? Not going to work.

I don't necessarily think that low temperatures have anything to do with denaturation. Replacing the solvent, however, would cause it almost unavoidably (adding the cryoprotectant might not, but removing it during rehydration will). With membrane-bound proteins you also have the issue of asymmetry. Proteins may seem fine in a symmetric membrane, but more and more data show that many don't really work properly there; there is a reason cells keep phosphatidylserine and PIPs predominantly on the inner leaflet.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-13T21:19:24.717Z · LW · GW

Whether "working memory" is memory at all, or whether it is a process of attentional control as applied to normal long-term memory... we don't know for sure. So in that sense, you are totally right.

But the exact nature of the process is, perhaps strangely, unimportant. The question is whether the process can be enhanced, and I would say the answer is very likely to be yes.

Also, keep in mind that the working memory enhancement scenario is just one I pulled out of thin air as an example. The larger point is that we are rapidly gaining the ability to non-invasively monitor the activity of single neuronal cells (with fluorescent markers, for instance), and, more importantly, we are gaining the ability to control them (with finely tuned and targeted optogenetics). Thus, reading from and writing into the brain is no longer an impossible hurdle requiring nanoimplants or teeny-tiny electrodes (with the requisite wiring). All you need are optical fibers and existing optogenetic tools (in theory, at least).

To generalize the point even further: we have the tools and the know-how with which we could start manipulating and enhancing existing neural networks (including those in human brains). It would be crude, inefficient, and ridden with side effects, since we don't really understand the underlying architecture well enough to know what we are doing - but we could still, theoretically, begin today, if for some reason we decided to (and lost our ethics along the way). On the other hand, we don't have a clue how to build an AGI. Regardless of any ethical or eschatonic concerns, we simply couldn't do it even if we wanted to. My personal estimate is, therefore, that we will reach the first goal far sooner than the second.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-13T04:56:17.481Z · LW · GW

It would appear that all of us have very similar amounts of working memory space. It gets very complicated very fast, and there are some aspects that vary a lot. But in general, its capacity appears to be the bottleneck of fluid intelligence (and a lot of crystallized intelligence may, in fact, be learned adaptations for getting around this bottleneck).

How superior would it be? There are some strong indications that adding more "chunks" to the working space would be somewhat akin to adding more qubits to a quantum computer: if having four "chunks" (one of the most popular estimates for an average young adult) gives you 2^4 units of fluid intelligence, adding one more would increase your intelligence to 2^5 units. The implications seem clear.
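The arithmetic behind that qubit-like analogy (which is, to be clear, a speculative model, not established fact) is simple:

```python
def fluid_units(chunks):
    """Illustrative 'units of fluid intelligence' under the comment's
    speculative assumption that capacity scales as 2^chunks."""
    return 2 ** chunks

baseline = fluid_units(4)  # average young adult: four chunks -> 16 units
enhanced = fluid_units(5)  # one extra chunk -> 32 units
# under this model, each added chunk doubles effective capacity
```

So a single additional chunk would not add a fixed increment of capability; it would double it, which is what makes the enhancement scenario qualitatively different.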

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-13T04:44:03.121Z · LW · GW

Um...there is quite a bit of information. For instance, one major hurdle was ice crystal formation, which has been overcome - but at the price of toxicity (currently unspecified, but - in my moderately informed guesstimate - likely to be related to protein misfolding and membrane distortion).

We also have quite a bit of knowledge of synaptic structure and physiology. I can make a pretty good guess at some of the problems. There are likely many others (many more problems that I cannot predict), but the ones I can are pretty daunting.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-12T21:58:56.785Z · LW · GW

Ok, now we are squeezing a comment way too far. Let me give you a fuller view: I am a neuroscientist, and I specialize in the biochemistry/biophysics of the synapse (and its interactions with the ER and mitochondria there). I also work on membranes and the effects of lipid composition in the opposing leaflets of all the organelles involved.

Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted. You can't simply replace unfolded proteins, since their relative positions and concentrations (and modifications, and current status in several different signalling pathways) determine what happens to the signals that pass through that synapse; you would have to replace them manually, which is a) impossible to do without destroying the surrounding membrane, and b) would take thousands of years at best, even assuming maximally efficient robots doing it (during which period molecular drift would undo the previous work).

Etc, etc. I can't even begin to cover complications I see as soon as I look at what's happening here. I'm all for life extension, I just don't think cryonics is a viable way to accomplish it.

Instead of writing a series of posts in which I explain this in detail, I asked a quick side question, wondering whether there is some research into this I'm unaware of.

Does this clarify things a bit?

Comment by kalla724 on Most transferable skills? · 2012-05-12T21:50:27.422Z · LW · GW

Let me add to your description of the "Loci method" (also the basis of ancient Ars Memoria). You are using spatial memory (which is probably the evolutionarily oldest/most optimized) to piggyback the data you want to memorize.

There is an easier way for people who don't do that well with visualization. Divide a sheet of paper into areas, then write down notes on what you are trying to remember. Make the areas somewhat irregular, and connect them with lines, squiggles, or other unique markers. When you write them, and when you look them over, make note of their relative positions - formula A is in the top left corner, while formula Z is down and to the right of it, just beyond the spiral squiggle.

For a lot of people, this works just as well as Ars Memoria, and is a lot easier to learn and execute on the fly.

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-12T21:45:37.202Z · LW · GW

I can try, but the issue is too complex for comments. A series of posts would be required to do it justice, so mind the relative shallowness of what follows.

I'll focus on one thing. An artificial enhancement that adds more "spaces" to working memory would create a human being capable of thinking far beyond any unenhanced human. This is not just a quantitative jump: we aren't talking about someone who thinks along the same lines, just faster. We are talking about a qualitative change, making connections that are literally impossible for anyone else to make.

(This is even more unclear than I thought it would be. So a tangent to, hopefully, clarify. You can hold, say, seven items in your mind while considering any subject. This vastly limits your ability to consider any complex system. In order to do so at all, you have to construct "composite items" out of many smaller items. For instance, you can think of a mathematical formula, matrix, or an operation as one "item," which takes one space, and therefore allows you to cram "more math" into a thought than you would be able to otherwise. Alternate example: a novice chess player has to look at every piece, think about likely moves of every one, likely responses, etc. She becomes overwhelmed very quickly. An expert chess player quickly focuses on learned series of moves, known gambits and visible openings, which allows her to see several steps ahead.

One of the major failures in modern society is the illusion of understanding in complex systems. Any analysis picks out a small number of items we can keep in mind at one time, and then bases the "solutions" on them (Watts's "Everything is Obvious" book has a great overview of this). Add more places to the working memory, and you suddenly have humans who have a qualitatively improved ability to understand complex systems. Maybe still not fully, but far better than anyone else. Sociology, psychology, neuroscience, economics... A human being with a few dozen working memory spaces would be for the economy what a quantum computer with eight qubits would be for cryptography - whoever develops one first can wreak havoc as they like.)
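The "composite items" idea above can be sketched concretely: grouping raw elements into learned chunks reduces the number of items that must be held at once, without losing any information. A toy illustration (the digit string and chunk size are arbitrary):

```python
# Toy sketch of chunking: a 12-digit string is 12 raw items, but only
# 4 items once grouped into familiar 3-digit chunks - the same trick an
# expert chess player uses with learned move patterns.

def chunk(seq: str, size: int) -> list:
    """Split a sequence into consecutive chunks of the given size."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

digits = "149217761066"
groups = chunk(digits, 3)

print(len(digits))   # 12 raw items to hold
print(len(groups))   # 4 composite items to hold
print("".join(groups) == digits)  # True - no information lost
```

The expert's advantage is exactly this compression: the same material occupies fewer working-memory slots, freeing capacity for looking ahead.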

When this work starts in earnest (ten to twelve years from now would be my estimate), how do we control the outcomes? Will we have tightly controlled superhumans, surrounded and limited by safety mechanisms? Or will we try to find "humans we trust" to become the first enhanced humans? Will there be a panic against such developments (which would then force further work to be done in secret, probably associated with military uses)?

Negative scenarios are manifold (lunatic superhumans destroying the world, or establishing tyranny; lobotomized/drugged superhumans used as weapons of war or for crowd manipulation; completely sane superhumans destroying civilization due to their still present and unmodified irrational biases; etc.). Positive scenarios are comparable to Friendly AI (unlimited scientific development, cooperation on a completely new scale, reorganization of human life and society...).

How do we avoid the negative scenarios, and increase the probability of the positive ones? Very few people seem to be talking about this (some because it still seems crazy to the average person, some explicitly because they worry about the panic/push into secrecy response).

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-12T00:25:35.105Z · LW · GW

? No.

I fully admitted that I have only an off-the-cuff estimation (i.e. something I'm not very certain about).

Then I asked you if you have something better - some estimate based in reality?

Comment by kalla724 on Most transferable skills? · 2012-05-11T22:25:09.291Z · LW · GW

Good point. I'm trying to cast a wide net, to see whether there are highly transferable skills that I haven't considered before. There are no plans (yet), this is simply a kind of musing that may (or may not) become a basis for thinking about plans later on.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-11T22:10:16.270Z · LW · GW

What are the numbers that lead to your 1% estimate?

I will eventually sit down and make a back-of-the-envelope calculation on this, but my current off-the-cuff estimate is about twenty (yes, twenty) orders of magnitude lower.

Comment by kalla724 on Neil deGrasse Tyson on Cryonics · 2012-05-11T22:07:10.565Z · LW · GW

If what you say were true - we "never cured cancer in small mammals" - then yes, the conclusion that cancer research is bullshit would have some merit.

But since we did cure a variety of cancers in small mammals, and since we are constantly (if slowly) improving both length of survival and cure rates in humans, the comparison does not stand.

(Also: the integration unit of the human mind is not the synapse; it is an active zone, a molecular raft within a synapse. My personal view, as a molecular biophysicist turned neuroscientist, is that freezing damage is not fixable from basic principles: molecular drift over a few years is sufficient to prevent repair completely. In my mind, the probability that some magical "damage repair" technique will be developed is within the same order of magnitude as the probability that the Rapture will occur. Cryonics is important primarily in the sense that it provides impetus for further research; but a radically different method of preservation is required before the possibility of revivification reaches any reasonable level.)

Comment by kalla724 on Thoughts on the Singularity Institute (SI) · 2012-05-10T23:26:58.854Z · LW · GW

Very good. Objection 2 in particular resonates with my view of the situation.

One other thing that is often missed is the fact that SI assumes that the development of superintelligent AI will precede other possible scenarios - including the augmented human intelligence scenario (BCIs producing superhumans, with human motivations and emotions, but hugely enhanced intelligence). In my personal view, this scenario is far more likely than the creation of either friendly or unfriendly AI, and the problems related to this scenario are far more pressing.

Comment by kalla724 on Rationality Quotes May 2012 · 2012-05-02T01:35:54.030Z · LW · GW

The possession of knowledge does not kill the sense of wonder and mystery. There is always more mystery.

-- Anais Nin

Comment by kalla724 on Crowdsourcing the availability heuristic · 2012-04-27T21:29:44.252Z · LW · GW

I see your point, but I'll argue that yes, crowdsourcing is the appropriate term.

Google may be the collective brain of the entire planet, but it will give you only those results you search for. The entire idea here is that you utilize things you can't possibly think of yourself - which includes "which terms should I put into the Google search."

Comment by kalla724 on Crowdsourcing the availability heuristic · 2012-04-26T20:30:52.765Z · LW · GW

I may have to edit the text for clarification. In fact, I'm going to do so right now.

Comment by kalla724 on Crowdsourcing the availability heuristic · 2012-04-26T05:12:58.332Z · LW · GW

True enough. Anything can be overdone.

Comment by kalla724 on Crowdsourcing the availability heuristic · 2012-04-25T22:12:13.957Z · LW · GW

It's not, not necessarily. There isn't as much research on help-seeking as there should be, but there are some interesting observations.

I'm failing to find the references right now, so take this with several grains of salt, but here is what I recall: asking for assistance does not lower status, and might even enhance it, while asking for a complete solution is indeed status-lowering. I.e. if you ask for hints or general help in solving a problem, that's ok, but if you ask someone to give you the answer directly, that isn't.

But all of that is a bit beside the point. In the above-mentioned approach, you aren't really "asking for help." You are just talking with people, telling them what you wish to achieve, and asking for their thoughts. They can choose to jump in and offer help if they want (which can be, and most often is, a happiness-enhancing action for them as well as for you).

Comment by kalla724 on Attention control is critical for changing/increasing/altering motivation · 2012-04-19T21:06:45.570Z · LW · GW


Otte, Dialogues Clin Neurosci. 2011;13(4):413-21.
Driessen and Hollon, Psychiatr Clin North Am. 2010 Sep;33(3):537-55.
Flessner, Child Adolesc Psychiatr Clin N Am. 2011 Apr;20(2):319-28.
Foroushani et al., BMC Psychiatry. 2011 Aug 12;11:131.

Books, hmm. I have not read it myself, but I hear that Leahy's "Cognitive Therapy Techniques: A Practitioner's Guide" is well regarded. A very commonly recommended, less technical book is Greenberger and Padesky's "Mind over Mood."

Comment by kalla724 on Attention control is critical for changing/increasing/altering motivation · 2012-04-19T20:56:54.546Z · LW · GW

I'm not aware of any research in this area. It appears plausible, but there could be many confounding factors.