Posts

High energy ethics and general moral relativity 2015-06-21T20:34:02.400Z
Even better cryonics – because who needs nanites anyway? 2015-04-07T20:10:49.757Z
LINK: TED-Ed video on death and cryonics 2014-12-26T04:52:29.780Z

Comments

Comment by maxikov on Bay Area Solstice 2015 · 2015-12-13T01:29:20.012Z · LW · GW

If nothing breaks, we'll be live here: https://www.youtube.com/watch?v=SpUuPr5gYxk

Comment by maxikov on High energy ethics and general moral relativity · 2015-06-21T23:20:41.367Z · LW · GW

PA has a big advantage over object-level ethics: it never suggested things like "every tenth or so number should be considered impure and treated as zero in calculations", while object-level ethics did. The closest thing I can think of in mathematics, where everyone believed X and then it turned out to be not X at all, was the idea that it's impossible to algorithmically take every elementary integral or prove that it's non-elementary. But even that was a within-system statement, not a meta-statement, and it has an objective truth value. Systems as a whole, however, don't necessarily have one. Thus, in ethics either individual humans or society as a whole need a mechanism for discarding ethical systems for good, which isn't that big of an issue for math. And the solution to this problem seems to be meta-ethics.

Comment by maxikov on High energy ethics and general moral relativity · 2015-06-21T22:30:52.220Z · LW · GW

I agree with the first paragraph of the summary, but as for the second - my point is against turning on applause lights for utilitarianism on the grounds of such occurrences, or on any grounds whatsoever. And I also observe that ethics hasn't moved as far from Bentham as physics has from Newton, which I regard as meta-evidence that the existing models are probably insufficient at best.

Comment by maxikov on In praise of gullibility? · 2015-06-21T05:39:44.090Z · LW · GW

Is this a bit Silicon Valley Culture? Because those guys do the same - they have a software idea and work on it individually or with 1-2 co-founders. Why? Why not start an open source project and invite contributors from Step 1? Why not throw half-made ideas out in the wild and encourage others to work on them to finish them?

For one thing, because the open source community isn't terribly likely to embark on a random poster's new project, and you'll end up developing it mostly by yourself anyway. Furthermore, there's an aspect of hacker culture, and especially open source culture, that is actively anti-evangelistic and dislikes developing user-friendly things like Ubuntu, preferring Slackware or Gentoo.

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-09T22:25:37.714Z · LW · GW

That's actually surprising: I thought yeast survives freezing reasonably well, and http://www.ncbi.nlm.nih.gov/pmc/articles/PMC182733/?page=2 seems to confirm that. What was different in your setup so that even the control group had a very low survival rate?

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-09T09:36:39.594Z · LW · GW

Thanks so much for the detailed review and lots of useful reading!

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-09T03:39:20.989Z · LW · GW

Sure, I can easily imagine that by mentally substituting jello for steel - at some point you'll tear it apart no matter how thick the walls are. However, that substitute also gives me the impression that most shapes we would normally consider for a vessel don't reach the maximum strength possible for the material.

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-09T02:51:46.453Z · LW · GW

Is that done to convert shear force to tension?

I wonder, how much can be achieved by merely increasing the thickness of the walls (even to such extremes as a small hole in a cubic meter of steel)?

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-08T21:47:01.003Z · LW · GW

Ah, that's true. I guess going back to normal vitals and motion is good enough for preliminary experiments, but of course once that step is over, it's crucial to start examining the effects of preservation on cognitive features of mammals.

Tardigrada and some insects are in fact known to survive ridiculously harsh conditions, freezing (combined with nearly complete dehydration) included. Thus, it makes sense to take a simple organism that isn't known to survive freezing, and make it survive. I suspect though that if you can prevent tardigrades from dehydrating before freezing, the control group won't survive, which means that some experiments can possibly be done on them too.

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-08T20:14:38.277Z · LW · GW

I'm not sure I'm following why mammals should be less susceptible to this problem - can you elaborate?

Doing this with mammals has a lot of challenges though, which it'd make sense to bypass in initial experiments. The deepest dive (aside from humans in DSVs) is only 3 km, which corresponds to about 30 MPa. I guess it's safe to say that no mammal can withstand 350 MPa with air or any other gas in its lungs, so total liquid ventilation is required, which is just as challenging to do with sea mammals as with land mammals. Also, mammals are warm-blooded, and usually experience asystole at abnormally low body temperatures that are nonetheless far above freezing. So there's the issue of making the animal survive the time it takes to go from cardiac arrest to freezing, which is also probably just as hard to do with sea mammals as with land mammals. So although the ultimate goal is to develop a protocol for humans, it'd be much easier to start with an animal that's already capable of surviving 100 MPa of ambient pressure and +4C of its own body temperature.
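As a sanity check on those depth-to-pressure figures, here's a minimal back-of-the-envelope sketch using the standard hydrostatic formula (the seawater density value is my rough assumption, not taken from anywhere in this thread):

```python
# Hydrostatic pressure p = rho * g * h; rough seawater values assumed.
rho = 1025.0  # seawater density, kg/m^3 (assumed)
g = 9.81      # gravitational acceleration, m/s^2

for depth_m in (3000, 10000):        # deepest mammal dive; roughly where ~100 MPa is reached
    p_mpa = rho * g * depth_m / 1e6  # pressure above atmospheric, in MPa
    print(f"{depth_m} m -> ~{p_mpa:.0f} MPa")
```

This gives roughly 30 MPa at 3 km and about 100 MPa around 10 km, consistent with the numbers above.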

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-08T19:58:54.731Z · LW · GW

Hmm, I wonder what exact biochemistry prevents life forms (including, apparently, vertebrate fish) in the Challenger Deep at 111 MPa from experiencing these problems, and whether it can be replicated in mammals.

They also mentioned that blebbing first appears at 90-120 seconds, but that's way too short even for the fastest protocols possible. Theoretically, it's not unthinkable to cool the body to just above 0C and then go straight to 632 MPa and above, making it freeze instantly before blebbing occurs. And then, if total liquid ventilation allows one to drop the pressure just as quickly, go from solid directly to a non-dangerous pressure range. But for any protocol that involves temperature changes under pressure, tens of seconds is simply too short to allow the temperature to stabilize.

As for toxicity though, I thought it was entirely due to the increased partial pressure of oxygen (which thus creates too strong of an oxidizing environment) and having too much nitrogen dissolved in tissues, physically messing with fine-grained biochemistry like ion channels. Is there another chemical component of toxicity beyond that?

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-08T18:47:39.559Z · LW · GW

That's an interesting observation! When I was looking into this, I found several suppliers[1][2][3][4] that claim to produce pressure vessels, tubing, and pumps all the way up to 150,000 psi (about 1 GPa). If 300 MPa is already pushing the boundaries of steel, do you know what they could use to achieve such pressures?
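For what it's worth, the unit conversion itself is easy to check; a trivial sketch (the 150,000 psi figure is just the suppliers' claim repeated from above):

```python
PSI_TO_PA = 6894.757                  # pascals per psi
rating_pa = 150_000 * PSI_TO_PA       # suppliers' claimed rating
print(f"{rating_pa / 1e9:.2f} GPa")   # ~1.03 GPa
```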

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-08T00:14:50.916Z · LW · GW

Yep, fixed that, thanks.

Comment by maxikov on Even better cryonics – because who needs nanites anyway? · 2015-04-07T22:59:03.378Z · LW · GW

It seems like the approach of cooling the organism to -30C at 350 MPa, and then raising the pressure further to ~600 MPa to freeze it, could actually solve that. As far as I understand, the speed of diffusion in water is far slower than the speed of sound, which is the speed at which pressure changes propagate: the speed of sound at 25C is 1497 m/s, while the diffusion coefficient for protons at 25C is 9.31e-5 cm^2/s, which corresponds to roughly 1.4e-4 m/s - about seven orders of magnitude less. So if we use rising pressure as a way to initiate the phase transition, it will occur nearly simultaneously everywhere, and the solutes won't have time to diffuse anywhere.

ETA: I just realized that since the distance covered by diffusion grows as the square root of time, while sound covers distance linearly in time, they should be compared at the shortest distance possible. So I checked the time it takes for a proton to cover 0.1 nm (hydrogen atom diameter) in water - 5.37e-13 s, which gives us 186 m/s. That's far greater than the original number, but still an order of magnitude smaller than the speed of sound. And if we take 4 nm (the thickness of a cell membrane) we get 8.59e-10 s - only 4 m/s, so it falls off very quickly, and we're pretty much safe.
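To make the arithmetic above easy to reproduce, here's a minimal sketch assuming the usual characteristic diffusion time t ≈ x²/(2D), which is the relation the numbers above appear to follow; the 1 µm line is my own added scale for comparison:

```python
# Compare the effective "speed" of proton diffusion over a distance x
# (v ~ x / t, with t ~ x^2 / (2 D)) against the speed of sound in water.
D = 9.31e-9        # proton diffusion coefficient in water at 25 C, m^2/s (= 9.31e-5 cm^2/s)
V_SOUND = 1497.0   # speed of sound in water at 25 C, m/s

for x, label in [(1e-10, "0.1 nm (hydrogen atom)"),
                 (4e-9,  "4 nm (cell membrane)"),
                 (1e-6,  "1 um (organelle scale, added for comparison)")]:
    t = x**2 / (2 * D)   # characteristic diffusion time over distance x
    v = x / t            # effective diffusion speed over that distance
    print(f"{label}: t = {t:.2e} s, v = {v:.3g} m/s, sound is ~{V_SOUND / v:.0f}x faster")
```

It reproduces the 5.37e-13 s / 186 m/s and 8.59e-10 s / ~4.7 m/s values quoted above, and shows the gap widening further at larger scales.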

Comment by maxikov on Open thread, Feb. 16 - Feb. 22, 2015 · 2015-02-18T23:53:18.301Z · LW · GW

If you only observe by absorbing particles, but not emitting them, you can be far enough away so that the light cone of your observation only intersects with the Earth later than the original departure point. That would only change the past of presumably uninhabited areas of space-time.

Comment by maxikov on An alarming fact about the anti-aging community · 2015-02-18T07:55:56.427Z · LW · GW

So where exactly do I go for that? Googling "freeze your cells" gives me information about the technical details, rather than a company that provides such a service, or else completely irrelevant weight-loss surgery information.

Comment by maxikov on Open thread, Feb. 16 - Feb. 22, 2015 · 2015-02-18T07:20:39.803Z · LW · GW

What is the probability of having afterlife in a non-magical universe?

Aside from the simulation hypothesis (which is essentially another form of a magical universe), there is at least one possibility for afterlife to exist: human descendants travel back in time (or discover a way to get information from the past without passing anything back) and mind-upload everyone right before they die. There would be a strong incentive for them not to manifest themselves, as well as to tolerate all the preventable suffering around the world: if changing the past leads to killing everyone in the original timeline, the price of altering the past is astronomical. Thus, they would have to only observe the past (with the reading of brain states as a form of observation), but not change it, which is consistent with the observation of no signs of either time travelers or afterlife. But if this will happen in the future, it means it's already happening right now. How do you even approach estimating the probability of that?

Comment by maxikov on Open Thread, Feb. 2 - Feb 8, 2015 · 2015-02-05T19:12:15.024Z · LW · GW

If the effect of RF doesn't go beyond thermal, then you probably shouldn't be concerned about sitting next to an antenna dish any more than about sitting next to a light bulb of equal power. At the same time, even if the effect is purely thermal, it may differ from the light bulb's, since RF penetrates deeper into tissues, and the organism may or may not react differently to heat that comes from inside rather than from outside. Or it may not matter - I don't know.

And apparently, there is a noticeable body of research - in which I can poke some holes, but which at least adheres to basic standards of peer-reviewed journals - that suggests the existence of non-thermal effects and links them to various medical conditions. However, my background in medicine and biology is not enough to evaluate this research thoroughly, beyond noticing that there are some apparent problems with it; but it doesn't appear to be obviously false either.

Comment by maxikov on Open Thread, Feb. 2 - Feb 8, 2015 · 2015-02-05T07:04:12.515Z · LW · GW

The general implication is that the so-called truth-seekers are worse off even though the opposite should be true.

The opposite should be true for a rational agent, but humans aren't rational agents, and may or may not benefit from false beliefs. There is some evidence that religion could be beneficial for humans while being completely and utterly false:

http://www.tandfonline.com/doi/abs/10.1080/2153599X.2011.647849

http://www.colorado.edu/philosophy/vstenger/Folly/NewSciGod/De%20Botton.pdf

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1361002/

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0003679

Of course, this is not "checkmate, atheists", and doesn't mean we should all convert to Christianity. There are ways to mitigate the negative impact of false beliefs while preserving the benefits of letting the wiring of the brain do what it wants to do. Unitarian Universalists from the religious side, and Raemon's Solstice from the atheist side, are trying to approach this nice zone with the amount of epistemological symbolism and ritual that is optimal for real humans, until we find a way to rewire everyone. But in general, unless you value truth for its own sake, you may be better off in life with certain false beliefs.

Comment by maxikov on Open Thread, Feb. 2 - Feb 8, 2015 · 2015-02-05T06:36:26.210Z · LW · GW

Should we be concerned about the exposure to RF radiation? I always assumed that no, since it doesn't affect humans beyond heating, but then I found this:

http://www.emfhealthy.com/wp-content/uploads/2014/12/2012SummaryforthePublic.pdf

http://www.sciencedirect.com/science/article/pii/S0160412014001354

The only mechanism they suggest for non-thermal effects is:

changes to protein conformations and binding properties, and an increase in the production of reactive oxygen species (ROS) that may lead to DNA damage (Challis, 2005 and La Vignera et al., 2012)

One of the articles they cite is behind a paywall (http://www.ncbi.nlm.nih.gov/pubmed/15931683), and the other (http://www.ncbi.nlm.nih.gov/pubmed/21799142) doesn't actually seem to control for thermal effects (it has a non-exposed control, but doesn't have a control exposed to the same amount of energy in the visible or infrared band). The fact that heat interferes with male fertility is no surprise (http://en.wikipedia.org/wiki/Heat-based_contraception), but it's not clear to me whether there's any difference between being exposed to RF and turning on a heater (maybe there is, if the organism deals with internal and external heat differently, or maybe the effect is negligible).

Nonetheless, if there is a significant non-thermal effect, that alone warrants a lot of research.

Comment by maxikov on Open Thread, Feb. 2 - Feb 8, 2015 · 2015-02-03T18:33:57.973Z · LW · GW

OK, I'll have to read deeper into TDT to understand why that happens, currently that seems counterintuitive as heck.

Comment by maxikov on Rationality Quotes Thread February 2015 · 2015-02-02T22:50:42.134Z · LW · GW

Hypocrisy isn't actually fundamentally wrong, even if deliberate. The idea that it's bad is a final yet arbitrary value that has to be taught to humans. Many religions contain the Golden Rule, which boils down to "don't be a hypocrite", and this is exactly an indicator that it was highly non-obvious before it permeated our culture.

Comment by maxikov on Open Thread, Feb. 2 - Feb 8, 2015 · 2015-02-02T22:40:29.751Z · LW · GW

I, for instance, do not think it's okay to kill a copy of me even if I know I will live on

Not OK in what sense - as in morally wrong to kill sapient beings or as terrifying as getting killed? I tend to care more about people who are closer to me, so by induction I will probably care about my copy more than any other human, but I still alieve the experience of getting killed to be fundamentally different and fundamentally more terrifying than the experience of my copy getting killed.

From the linked post:

The counterargument is also simple, though: Making copies of myself has no causal effect on me. Swearing this oath does not move my body to a tropical paradise. What really happens is that I just sit there in the cold just the same, but then later I make some simulations where I lie to myself.

If I understand correctly, the argument of timeless identity is that your copy is you in absolutely any meaningful sense, and therefore prioritizing one copy (original) over the others isn't just wrong, but even meaningless, and cannot be defined very well. I'm totally not buying that on gut level, but at the same time I don't see any strong logical arguments against it, even if I operate with 100% selfish 0% altruistic ethics.

When there is a decision your original body can make that creates a bunch of copies, and the copies are also faced with this decision, your decision lets you control whether you are the original or a copy.

I don't quite get this part - can you elaborate?

If it waits ten minutes, gives the original some tea and cake, and then annihilates them, the person who gets annihilated has no direct causal descendant - they really are getting killed off in a way that matters more to them than before

What about the thought experiment with erasing memories, though? It doesn't physically violate causality, but from the experience perspective it does - suddenly the person loses a chunk of their experience, and they're basically replaced with an earlier version of themselves, even though the universe has moved on. This experience may not be very pleasant, but it doesn't seem to be nearly as bad as getting cake and death in the Earth-Mars experiment. Yet it's hard to distinguish them on the logical level.

Comment by maxikov on Open Thread, Feb. 2 - Feb 8, 2015 · 2015-02-02T06:05:46.205Z · LW · GW

Disclaimer: the identity theory that I actually alieve is the most common intuitionist one, and it's philosophically inconsistent: I regard teleportation, but not sleeping, as death. This comment, however, is written from a System 2 perspective, which can operate even with concepts that I don't alieve.

The basic idea behind timeless identity is that "I" can only be meaningfully defined inductively as "an entity that has experience continuity with my current self". Thus, we can safely replace "I value my life" with "I value the existence of an entity that feels and behaves exactly like me". That allows us to be OK with quite useful (although hypothetical) things like teleportation, mind uploading, mind backups, etc. It also seems to provide an insight into why it's OK to make a copy of me on Mars, and immediately destroy Earth!me, but not OK to destroy Earth!me hours later: the experiences of Earth!me and Mars!me would diverge, and each of them would value their own lives.

However, here is the thing: in this case we merely replace the requirement "to have an entity with experience continuity with me" with "to have an entity with experience continuity with me, except this one hour". They're actually pretty interchangeable. For example, I forget most of my dreams, which means I'm nearly guaranteed to forget several hours of experience every day, and I'm OK with that. One might say that the value of genuine experiences exceeds that of hallucinations, but I would still be pretty OK with taking a suppressor of RNA synthesis, that would temporarily give me anterograde amnesia, and do something that I don't really care about remembering - clean the house or something. Heck, even retroactively erasing my most cherished memories, although extremely frustrating, is still not nearly as bad as death.

That implies that if there are multiple copies of me, the badness of killing any one of them is no more than the increase in the likelihood of all of them being destroyed (which is not a lot, unless there's an Armageddon happening around us) plus the value of the memories formed since the last replication. Also, every individual copy should alieve that being killed is no worse than forgetting what happened since the last replication, which also sounds not nearly as horrible as death. That also implies that simulating time travel by discarding time branches is a pretty OK thing to do, unless the universes diverge strongly enough to create uniquely valuable memories.

Is that correct or am I missing something?

Comment by maxikov on The Bay Area Solstice · 2015-01-03T00:32:45.526Z · LW · GW

We decided that keeping the whole video including personal stories public all the time wouldn't be a very good idea. All the songs, however, are publicly available here: https://www.youtube.com/playlist?list=PLhH76Ztpl1UIHsSvxSsHhoPLc95n_s_6N

Comment by maxikov on LINK: Nematode brain uploaded with success · 2014-12-26T04:38:25.447Z · LW · GW

My primary concern is that the model is very simplified. Although even on this level it may be interesting to invent a metric for the accuracy of encoding the organism's behavior - from completely random to a complete copy.

Comment by maxikov on LINK: Nematode brain uploaded with success · 2014-12-24T03:29:23.539Z · LW · GW

When you think about it, the brain is really nothing more than a collection of electrical signals.

Statements like this make me want to bang my head against a wall. No, it is not. The brain is a collection of neural and glial cells, the roles of which we only partially understand. Most of the neurons are connected through various types of chemical synapses, and ignoring their chemical nature would fail to explain the effects of most psychoactive drugs and even hormones. Some of the neurons are linked directly. Some of them are myelinated while others are not, and this is kind of a big deal, since there's no clocking in the nervous system, and the entire outcome of the processing depends on how long it takes for the action potential to propagate through the axon. And how long it takes for the synapse to react. And how long the depolarization persists in the receiving neuron. And all of that is regulated by the chemistry of gene expression. And we're not even talking about learning and forming long-term memories, which are due to neuroplasticity, itself entirely controlled by gene expression patterns. It's enough to suppress RNA synthesis to cause anterograde amnesia - although it will also cause some retrograde amnesia too, since apparently merely using neurons causes them to change.

Also, C. elegans doesn't even have a brain; it has ganglia.

Look, I understand that this is some interesting research, but calling it "brain uploading" is like comparing the launch of a firework to interstellar travel: essentially, they're the same, but there are a couple of nuances.

Comment by maxikov on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-18T12:07:16.934Z · LW · GW

Is there a significant difference between the mathematical universe hypothesis and Hegelian absolute idealism? Both seem to claim the primacy of ideas over matter (mind in the case of Hegel, and math in the case of MUH), and conclude that matter should follow the laws of ideas. MUH just goes one step further and says that if there are different kinds of math, there should be different kinds of universes, while Hegel didn't claim the same about different minds.

Comment by maxikov on The Bay Area Solstice · 2014-12-14T03:19:28.462Z · LW · GW

And we have broadcast: https://www.youtube.com/watch?v=S4eG0JM_93s

Comment by maxikov on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-06T04:25:43.334Z · LW · GW

Surely I do. The hypothesis that after a certain period of hypoxia at normal body temperature the brain sustains enough damage that it cannot be recovered even if you manage to get the heart and other internal organs working is rather arbitrary, but it's backed up by a lot of data. The hypothesis that with machinery for direct manipulation of molecules, which doesn't contradict our current understanding of physics, we could fix a lot beyond the self-recovery capabilities of the brain is perfectly sensible, but it's just a hypothesis without data to back it up.

This, of course, may remind you of the skepticism toward heavier-than-air flying machines in the 19th century. And I do believe that some skepticism was a totally valid position to take, given the evidence they had. There are various degrees of establishing the truth, and "it doesn't seem to follow from our fundamental physics that it's theoretically impossible" is not the highest of them.

Comment by maxikov on December 2014 Media Thread · 2014-12-03T10:37:28.244Z · LW · GW

中島みゆき:

Comment by maxikov on Link: Rob Bensinger on Less Wrong and vegetarianism · 2014-12-03T04:57:18.435Z · LW · GW

I would distinguish several levels of meta-preferences.

On level 1, an agent has a set of object-level preferences, and wants to achieve the maximum cumulative satisfaction of them over a lifetime. To do that, the agent may sometimes want to override the incentive to maximize satisfaction at each step if it is harmful in the long run. Basically, it's just switching from a greedy gradient descent to something smarter, and barely requires any manipulation of object-level preferences.

On level 2, the agent may want to change their set of object-level preferences in order to achieve higher satisfaction, given the realistic limits of what's possible. A stupid example: someone who wants one billion dollars but cannot have it may want to start wanting ten dollars instead, and be much happier. A more realistic example: a person who became disabled may want to readjust their preferences to accommodate the new limitations. Applying this strategy to its logical end has some failure modes (e.g. the one described in Three Worlds Collide, or, more trivially, opiates), but it still sort of makes sense for a utility-driven agent.

On level 3, the agent may want to add or remove some preferences, regardless of the effect of that on the total level of satisfaction, just for their own sake.

Wanting to care more about animals seems to be a level-3 meta-preference. In a world where this preference is horribly dissatisfied, where animals are killed at a rate of about one kiloholocaust per year, that clearly doesn't optimize for satisfaction. Consistency of values and motivations - yes, but only if you happen to have consistency as a terminal value in the utility function. That doesn't necessarily have to be the case: in most scenarios, consistency is good because it's useful, because it allows us to solve problems. The lack of compassion for animals doesn't seem to be a problem, unless the inconsistency itself is a problem.

Thus, it seems impossible to make such a change without accepting, in a morally realist way, that caring about animals is good or that having consistent values is good. Now, I'm not claiming that I'm a complete moral relativist. I'm not even sure that's possible - so far, all the arguments for moral relativism I've seen are actually realist themselves. However, arguing for switching between different realist-ish moral frameworks seems to be a much harder task.

Comment by maxikov on Link: Rob Bensinger on Less Wrong and vegetarianism · 2014-12-02T21:15:17.982Z · LW · GW

We may be using different definitions of "care". Mine is exactly how much I'm motivated to change something after I become aware that it exists. I don't find myself extremely motivated to eliminate the suffering of humans, and much less that of animals. Therefore, I conclude that my priorities are probably different. Also, at least to some extent I'm either hardwired or conditioned to empathize with and help humans in my immediate proximity (although definitely to a smaller extent than people who claim to have sleepless nights after watching footage of suffering), but it doesn't generalize well to the rest of humanity and other animals.

As for saving the replica, I probably will, since it definitely belongs to the circle of entities I'm likely to empathize with. However, the exact details really depend on whether I classify my replica as myself or as my copy, which I don't have a good answer to. Fortunately, I'm not likely to encounter this dilemma in the foreseeable future, and probably by the time it's likely to occur, I'll have more information to answer this question better. Furthermore, especially in this situation, and in the much more realistic situation of being nice to people around me, there are almost always selfish benefits, especially in the long run. However, in situations where every person around me is basically a bully who perceives niceness as weakness and an invitation to bully more, I frankly don't feel all that much compassion.

Comment by maxikov on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-02T20:17:20.283Z · LW · GW

How about putting numbers on it? Without doing so, your argument is quite vague.

I would estimate the cumulative probability to be in the ballpark of 0.1%.

Have you actually looked at the relevant LW census numbers for what "we are hoping"?

I was actually referring to the apparent consensus that I see among researchers, but that is indeed vague. I should look up the numbers if they exist.

Comment by maxikov on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-02T20:07:58.264Z · LW · GW

I would say it's probably no higher than 0.1%.

But by no means am I arguing against cryonics. I'm arguing for spending more resources on improving it. All sorts of biologists are working on longevity, but very few seem to work on improving vitrification. And I have a strong suspicion that it's not because nothing can be done about it - most of the time I've talked to biologists about it, we were able to pinpoint non-trivial research questions in this field.

Comment by maxikov on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-02T19:34:08.936Z · LW · GW

Secondly, because the people who are in a position to do such research are less likely than the general population to believe in an afterlife.

On this particular point, I would say that people who are in a position to allocate funds for research programs are probably about as likely as the general population to believe in the belief in afterlife.

Generally, I agree - it's definitely not the only problem. The USSR, where people were at least supposed to not believe in an afterlife, didn't have longevity research as its top priority. But it's definitely one of the cognitive stop signs that prevent people from thinking about death hard enough.

Comment by maxikov on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-02T08:02:29.306Z · LW · GW

Good futurology is different from storytelling in that it tries to make as few assumptions as possible. How many assumptions do we need to allow cryonics to work? Well, a lot.

  • The true point of no return has to be indeed much later than we believe it to be now. (Besides, does it even exist at all? Maybe a super-advanced civilization can collect enough information to backtrack every single process in the universe down to the point of one's death. Or maybe not.)

  • Our vitrification technology is not a secure erase procedure. Pharaohs also thought that their mummification technology was not a secure erase procedure. Even though we have orders of magnitude more evidence to believe we're not mistaken this time, ultimately, it's the experiment that judges.

  • Timeless identity is correct, and it's you rather than your copy that wakes up.

  • We will figure out brain scanning.

  • We will figure out brain simulation.

  • Alternatively, we will figure out nanites, and a way to make them work through the ice.

  • We will figure out all that sooner than the expected time of the brain being destroyed by: slow crystal formation; power outages; earthquakes; terrorist attacks; meteor strikes; going bankrupt; economic collapse; nuclear war; unfriendly AI, etc. That's similar to longevity escape velocity, although slower: to survive, you don't just have to advance technologies, you have to advance them fast enough.

All that combined, the probability of this working out is really darn low. Yes, it is much better than zero, but still low. If I were to play Russian roulette, I would be happy to learn that instead of six bullets I'm playing with five. However, this relief would not stop me from being extremely motivated to remove even more bullets from the cylinder.
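Purely as an illustration of how the conjunction stacks up, here's a toy calculation; every probability below is made up for the sake of the example, not an estimate I'm defending:

```python
# Toy conjunctive-probability illustration; all numbers below are invented.
assumptions = {
    "point of no return is later than we currently believe": 0.8,
    "vitrification is not a secure erase procedure":         0.6,
    "timeless identity holds / it's you who wakes up":       0.7,
    "brain scanning gets figured out":                       0.7,
    "brain simulation (or nanites) gets figured out":        0.6,
    "all of it happens before the brain is destroyed":       0.5,
}

p = 1.0
for name, prob in assumptions.items():
    p *= prob

print(f"Joint probability under these toy numbers: {p:.2f}")  # ~0.07
```

Even with individually generous-looking numbers, the product ends up small; that's the whole point about conjunctive assumptions.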

The reason why the belief in afterlife is not just neutral but harmful for modern people is that it demotivates them from doing immortality research. Dying is sure scary, but we won't truly die, so problem solved, let's do something else. And I'm worried about cryonics becoming this kind of comforting story for transhumanists. Yes, actually removing one bullet from the cylinder is much, much better than hoping that Superman will appear at the last moment and stop the bullet. But stopping after removing just one bullet isn't a good idea either. Some amount of resources is devoted to conventional longevity research, but as far as I understand, we're not hoping to achieve longevity escape velocity for currently living people, especially adults. Cryonics appears to be our only chance to avoid death, and I would be extremely motivated to try to make our only chance as high as we can possibly make it. And I don't think we're trying hard.

Comment by maxikov on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-26T19:55:57.410Z · LW · GW

being spoken by "figures wearing black robes, and speaking in a dry, whispering voice, and they are actually withered beings who touched the Stone of Evil"

Isn't that what my inner Quirrellmort is supposed to be?

Comment by maxikov on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-26T06:20:21.685Z · LW · GW

You know that spreading rationality is a strong net positive, right? How many lives could we save if people just stopped for a while and thought about stuff in a relatively unbiased way? Even a population of purely selfish but rational agents could do better than we do - and people usually aren't purely selfish. If we could only spread rationality better. But you know as well as I do: it's exactly the biases that make demagogy almost always sound more convincing than the truth. It is so hard, so frustrating to explain the bitter truth while competing against comforting lies that push all the buttons which - you've learned it - are almost guaranteed to make one agree.

But what if you could do a little bit of... you know... marketing? Oh, spreading rationality through irrationality sounds so hypocritical!.. deontologically. But you're a utilitarian; you know how to make trade-offs. And you know better than to make trade-offs against some general principles that may be reasonable rules of thumb, but don't even begin to encompass actual people and their happiness. How did you put that - shut up and multiply? Well, go on, multiply: billions of lives saved against millions slightly offended. And here is the thing - before learning about biases, they won't be able to recognize your little tricks, and the job will already be done. Many will probably agree that it would have been a net positive. Oh, your reputation could be damaged? Well, I thought you were an altruist.

Can it even get any worse than it is now? I'm not even talking about the marketing of commodities - adding a little bit of your marketing isn't gonna change anything at all, even if you still believe in those deontological ideas. I'm talking about the market of ideas. You compete against people who learned some of the tricks, but use them with malicious intent, not for the benefit of the consumer. But you know better. They vaguely learned some buttons from classical novels and books by liberal arts majors. You learned how the whole machine works, with mathematical modeling. You know which buttons to push to make your point sweeter and stickier. You can crush all that irrationality all at once.

After all, there are no arguments without some flavor attached to them. It's just that you either let randomness and the subconscious choose the flavor, and call it "fair", or purposefully select the flavor, and call it "trickery" and "marketing". But since when do rationalists consider obliviousness better than knowledge?

Why do you choose to not use your force for good? What stops you? What's your choice?

Comment by maxikov on xkcd on the AI box experiment · 2014-11-21T22:55:26.648Z · LW · GW

Exactly. Having the official position buried in comments with long chains of references doesn't help to sound convincing compared to a well-formatted (even if misleading) article.

Comment by maxikov on xkcd on the AI box experiment · 2014-11-21T19:31:16.712Z · LW · GW

On meta-level, I find it somewhat ironic that the LW community, as well as EY, who usually seem to disapprove of the oversensitivity displayed by tumblr's social justice community, seem also deeply offended by prejudice against them and a joke that originates from this prejudice. On object-level, the joke Randall makes would have been rather benign and funny (besides, I'm willing to entertain the thought that mocking Roko's Basilisk could be used as a strategy against it), if not for the possibility that many people could take it seriously, especially given the actual existing attacks on LW from Rational Wiki. But going back to meta-level, this is exactly what tumblr folks often complain about: what you say and do may not be terrible per se, but it could invoke and support actual terrible things.

On object-level, I don't want people to have misconceptions about AGI. On meta-level, I don't want to be a stereotypical oversensitive activist, that everyone else believes is crazy and obnoxious.

Comment by maxikov on Irrationalism on Campus? · 2014-11-20T20:09:28.694Z · LW · GW

I study at a small campus that only has grad students in technical majors, more than half of whom are international students. There's basically no political or societal discourse on campus. Feels good. I actually get much more exposure to politics via LW meet-ups, and most of the political discourse I interact with comes from the local EGL community and their facebook feeds. And reddit, of course. But back on topic: our campus seems to operate on implicit politeness and tolerance principles, which aren't really voiced by anyone.

Comment by maxikov on Open thread, Nov. 17 - Nov. 23, 2014 · 2014-11-20T01:23:21.681Z · LW · GW
  1. Did this study consider the difference between white and non-white immigrants to mostly white Western countries?

  2. Did this study consider the difference between white and non-white immigrants to non-white countries?

  3. Did this study consider the difference between immigrants who (try to) assimilate to local communities, and those who prefer to stay within national communities?

Comment by maxikov on Open thread, Nov. 17 - Nov. 23, 2014 · 2014-11-20T01:13:41.841Z · LW · GW

I'm not sure if it works with physical attractiveness, but in the case of intellectual adequacy, I just don't let any internal doubts about my competence interfere with external confidence. Even if I suspect that I'm not as smart as the people around me, I still act exactly as if I am.

Comment by maxikov on Open thread, Nov. 17 - Nov. 23, 2014 · 2014-11-19T21:58:47.876Z · LW · GW

That's the whole point: if we can prevent water from expanding upon freezing by keeping the sample under high pressure, thus (probably) making crystal formation harmless, we can use less cryoprotectant. I don't know if it's possible to get rid of it completely, so I mentioned wood frogs, which already have all the mechanisms necessary to survive slightly below the freezing temperature. It's just that their cryoprotectant isn't good enough to go any colder, but it's not as poisonous either. Also, they're small, so it's easier to find high-pressure units to fit them in - they're perfect model organisms for cryonics research.

As of now, cryonics is at best an information backup indeed, but I see no reason why we should be content with that. Yes, we will probably eventually invent advanced nanomachinery, as well as whole-brain scanning and simulation, but that's too many unknowns in the equation. We could do much better than that.

Comment by maxikov on Open thread, Nov. 17 - Nov. 23, 2014 · 2014-11-19T21:46:44.272Z · LW · GW

That would destroy cryonics companies who make money via insurance that depends on people legally dying.

Wouldn't it just shift to health insurance in this case? But generally, yes, recognizing cryonic patients as alive has a lot of legal ramifications. On the other hand, it provides much better protection against unfreezing: just like with patients in a persistent vegetative state, someone authorized would have to actively make a decision to kill them, as opposed to there being no legal protection at all. I'm not sure which of these is the net positive. Besides, that would challenge the current definition of death, which basically boils down to "we positively can do nothing to bring the patient back from this state". Including potential technologies in the definition is a rather big perspective change that could have consequences for vegetative patients as well.

If I understood it right then vitrification is done to prevent ice crystals from forming. Do you mean something different?

As ZankerH mentioned below, vitrification leads to cryoprotectant poisoning, which is a sufficiently big problem to prevent us from experimenting with unfreezing even in small organisms. If the function of the cryoprotectant can be fully or partially replaced by keeping the sample under high pressure, that problem is mostly solved. That doesn't prevent crystals from forming, but unlike normal ice, these crystals take up less volume than the water they were made of, so they shouldn't damage the cells. In addition, amorphous solids aren't guaranteed to be stable, and can undergo slow crystallization. I'm not sure how big of a problem that is for cryonics, but in the case of going directly to ice-IX, that's definitely not a problem anymore.

Comment by maxikov on I Want To Believe: Rational Edition · 2014-11-19T12:22:17.988Z · LW · GW

Unfounded self-confidence (or any unfounded belief) is very harmful.

Citation needed. Bluffing (i.e., unfounded confidence) seems to be a very effective strategy in many games. Apparently, even in chess:

UNIDENTIFIED MALE #2: Rook to D1.

CAMPBELL: And this particular move was really bad, and so it caused us to give up the game right away.

FOO: This really bad move confused Kasparov. Murray says he heard Kasparov's team stayed up that night trying to analyze the logic behind that move - what it meant. The only thing was - there was no logic.

Comment by maxikov on Open thread, Nov. 17 - Nov. 23, 2014 · 2014-11-19T12:16:23.236Z · LW · GW

Couple of random thoughts about cryonics:

  • It would actually be better to have cryonics legally recognized as a burial ritual than as a cadaver experimentation. In that way it can be performed on someone who hasn't formally signed a will, granting their body as an anatomical gift to the cryonic service provider. Sure, ideally it should be considered a medical procedure on a living person in a critical condition, but passing such legislation is next to impossible in the foreseeable future, whereas the former sounds quite feasible.

  • The stabilization procedure should be recognized as an acceptable form of active euthanasia. This is probably the shortest way to get to work with not-yet-brain-dead humans, and it would allow people to trade a couple of months or years of rather painful life for better chances at living again.

  • Insulin injections should probably be a part of the stabilization protocol (especially in the previous case). According to "Glucose affects the severity of hypoxic-ischemic brain injury in newborn pigs" by LeBlanc MH et al., hypoglycemic brains sustain hypoxia much better than normal. That totally makes sense: oxygen is mainly consumed oxidizing the products of glucose metabolism, so if there's nothing to oxidize, oxygen consumption will decrease.

  • Some of the major problems of cryonics can probably be solved by preventing water from expanding upon freezing. According to 1 and 2, ice is denser than water at about 30 kbar. That is a bit technically complicated, but I would speculate that with this trick we could have reversible freezing in wood frogs right now.

Comment by maxikov on Can science come to understand consciousness? A problem of philosophical zombies (Yes, I know, P-zombies again.) · 2014-11-17T20:06:06.472Z · LW · GW

http://philpapers.org/rec/ARGMAA-2 - this may be relevant to your question, although I haven't read the whole article yet.

Comment by maxikov on Link: Rob Bensinger on Less Wrong and vegetarianism · 2014-11-14T00:29:07.422Z · LW · GW

This article heavily implies that every LessWronger is a preference utilitarian and values the wellbeing, happiness, and non-suffering of every sentient (i.e. non-p-zombie) being. Neither of those is fully true for me, and as this ad-hoc survey - https://www.facebook.com/yudkowsky/posts/10152860272949228 - seems to suggest, I may not be alone in that. Namely, I'm actually pretty much OK with animal suffering. I generally don't empathize all that much, but there are a lot of even completely selfish reasons to be nice to humans, whereas that's not really the case for animals. As for non-human intelligent beings - I'll figure that out once I meet them, or once the probability of such an encounter gets somewhat realistic; currently there's too much ambiguity about them.