Comments

Comment by Sergej_Shegurin on Living Forever is Hard, or, The Gompertz Curve · 2016-04-20T13:39:37.896Z · LW · GW

If we can 3D-print or grow organs, then the problem you mention gets effectively solved for everything but our brains. That's why I like the organ-engineering approach much better than the other approaches.

As for the brain, CRISPR/Cas9 engineering is a really great approach. It potentially gives us many degrees of freedom.

Comment by Sergej_Shegurin on Estimate the Cost of Immortality · 2015-12-21T15:14:48.224Z · LW · GW

We all know that human pregnancy doesn't scale. We all know that some other problems do scale. So I really don't understand the 18 points on that comment. One can always think up many different analogies leading to different conclusions. Even if we ignore the scaling issue, the sigma of pregnancy duration is something like a week, whereas other processes, like creative thinking or inventing new ideas, might have a sigma comparable to their mean.
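
A rough numeric sketch of that last point (all numbers here are illustrative assumptions, not data): what matters is the relative spread, sigma divided by the mean.

```python
# Compare the relative spread (coefficient of variation, sigma/mean) of two processes.
# All numbers below are rough assumptions for the sake of illustration.

def coefficient_of_variation(mean: float, sigma: float) -> float:
    """Return sigma/mean, a scale-free measure of how variable a duration is."""
    return sigma / mean

# Pregnancy: mean ~40 weeks, sigma ~1 week -> durations are tightly clustered.
pregnancy_cv = coefficient_of_variation(mean=40.0, sigma=1.0)

# A creative task (hypothetical numbers): sigma comparable to the mean.
creative_cv = coefficient_of_variation(mean=10.0, sigma=10.0)

print(f"pregnancy CV:     {pregnancy_cv:.3f}")  # ~0.025
print(f"creative task CV: {creative_cv:.3f}")   # ~1.000
```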

Comment by Sergej_Shegurin on Estimate the Cost of Immortality · 2015-12-21T15:04:45.406Z · LW · GW

I'm deeply convinced that this cost is far less than one trillion dollars if we invest it in Cas9/CRISPR, tissue engineering, acerebral clone growing, etc. My website http://sciencevsdeath.com/index.html might be of interest to you.

Also, I'm glad to see people asking such great questions :)

Comment by Sergej_Shegurin on Could you tell me what's wrong with this? · 2015-07-09T20:24:03.816Z · LW · GW

Everyone should agree that the first task we want our AI to solve is FAI (even if we are "100%" sure that our plan has no leaks, we would still want the AI to check it while we are still able to shut it down). It's easy to imagine one AI lying about its own safety, but many AIs lying about their safety (including the safety of the other AIs!) is much harder to imagine (certainly still possible, but less probable). Only once we are extremely confident in our FAI solution can we ask the AI to solve other questions for us. Also, those AIs would constantly try to find bad consequences of the main AI's proposals (both because they don't want to risk their own lives and because we ask them to give us this information). And of course we don't give the AI access to the internet, and we take precautions regarding the people interacting with it, etc. (this is well described elsewhere).

Certainly, this overall solution still has its drawbacks (I think every solution will), and we have to improve it in many ways. In my opinion, it would be good if we didn't launch AI during the next 1000 years :-) but the problem is that terrorist organizations and mad people would be able to launch it despite our intentions... so we have to launch AI more or less soon anyway (or get rid of all terrorists and mad clever people, which is nearly impossible). So we have to formulate a combination of tricks that is as safe as we can get. I find it counterproductive to throw away everything that is not "100%" safe while trying to find some magic "100%" super-solution.

Comment by Sergej_Shegurin on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-02T16:23:27.766Z · LW · GW

I would execute a magical script programmed in advance. You think of the script's number and it carries out many magical actions, for example paralyzing everyone except Harry faster than anyone can make a move or even understand what is happening.

Comment by Sergej_Shegurin on AI-created pseudo-deontology · 2015-02-19T19:07:28.442Z · LW · GW

In my opinion, the best of the proposed solutions to the AI safety problem is to build AI number 1, tell it that we are going to create another AI (number 2), and ask AI number 1 to tell us how to ensure the friendliness and safety of AI number 2, and how to ensure that an unsafe AI is not created. This solution has its chances of failing, but in my opinion it's still much better than any other proposed solution. What do you think?