liso feed - LessWrong 2.0 Reader liso’s posts and comments on the Effective Altruism Forum en-us Comment by Liso on 2016 LessWrong Diaspora Survey Analysis: Part One (Meta and Demographics) https://www.lesswrong.com/posts/FDaYqFW8ExjKNwkqa/2016-lesswrong-diaspora-survey-analysis-part-one-meta-and#CXJ7woh6NmbxrJcfb <p>One child could have two parents (and both could answer the survey), so 598 is a questionable number. </p> liso CXJ7woh6NmbxrJcfb 2017-01-22T00:05:45.907Z
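<p>A minimal sketch of that double-counting concern, with made-up household IDs and counts (the survey did not collect these fields): if both parents of the same child respond and each reports the children in their household, summing per-respondent counts overstates the number of distinct children unless responses are deduplicated by household.</p> <pre><code># Toy illustration of double-counting children when both parents answer a survey.
# The household IDs and counts below are hypothetical, not the actual survey schema.
respondents = [
    {"respondent_id": 1, "household_id": "A", "children_in_household": 2},
    {"respondent_id": 2, "household_id": "A", "children_in_household": 2},  # other parent, same household
    {"respondent_id": 3, "household_id": "B", "children_in_household": 1},
]

# Naive total counts household A twice.
naive_total = sum(r["children_in_household"] for r in respondents)

# Deduplicated total keeps one report per household.
children_per_household = {}
for r in respondents:
    children_per_household[r["household_id"]] = r["children_in_household"]
deduplicated_total = sum(children_per_household.values())

print(naive_total, deduplicated_total)  # 5 vs 3
</code></pre>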
Comment by Liso on How to escape from your sandbox and from your hardware host https://www.lesswrong.com/posts/TwH5jfkuvTatvAKEF/how-to-escape-from-your-sandbox-and-from-your-hardware-host#93DkJyXk9EzwEruYu <blockquote> <p>More generally, a &quot;proof&quot; is something done within a strictly-defined logic system.</p> </blockquote> <p>Could you prove it? :) </p> <p>Btw. we have to assume that these papers are written by someone who slyly wants to switch some bits in our brains!! </p> liso 93DkJyXk9EzwEruYu 2015-08-04T05:31:35.390Z Comment by Liso on How to escape from your sandbox and from your hardware host https://www.lesswrong.com/posts/TwH5jfkuvTatvAKEF/how-to-escape-from-your-sandbox-and-from-your-hardware-host#8uT7bdpB9suAwNHAD <p>&quot;Human&quot;-style humor could be a sandbox too :) </p> liso 8uT7bdpB9suAwNHAD 2015-08-04T05:18:28.413Z Comment by Liso on Oracle AI: Human beliefs vs human values https://www.lesswrong.com/posts/b223mLTZNDFExf3Qp/oracle-ai-human-beliefs-vs-human-values#iwH4xPjgRxHQ4h95B <p>I would like to add some values which I see as not so static, and which are probably not only questions of morality:</p> <p>Privacy and freedom (vs) security and power. </p> <p>Family, society, tradition. </p> <p>Individual equality. (disparities of wealth, the right to work, ...)</p> <p>Intellectual property. (the right to own?) </p> liso iwH4xPjgRxHQ4h95B 2015-08-04T05:04:04.477Z Comment by Liso on Oracle AI: Human beliefs vs human values https://www.lesswrong.com/posts/b223mLTZNDFExf3Qp/oracle-ai-human-beliefs-vs-human-values#FkbWpf27upMnbrfgk <p>I think we need a better definition of the problem we want to study here. Beliefs and values are probably not so hard to distinguish.</p> <p>From <a href="http://www.graines-de-paix.org/il/layout/set/print/nos_idees/trois_idees_fortes_comme_facteurs_de_paix/les_valeurs_humaines/valeurs_humaines">this page</a> -&gt; </p> <p>Human values are, for example:</p> <ul> <li>civility, respect, consideration;</li> <li>honesty, fairness, loyalty, sharing, solidarity;</li> <li>openness, listening, welcoming, acceptance, recognition, appreciation;</li> <li>brotherhood, friendship, empathy, compassion, love. </li> </ul> <hr /> <ol> <li><p>I think none of these could be called a belief. </p> </li> <li><p>If these define the axes of a virtual space of moral values, then I am not sure an AI could occupy a much bigger space than humans do. (How selfish, unwelcoming or dishonest could an AI or a human be?)</p> </li> <li><p>On the contrary: because we are selfish (is that one of the moral values we are trying to analyze?), we want the AI to be more open, more listening, more honest, more friendly (etc.) than we want or plan to be, or at least than we are now. (So do we really want the AI to be like us?)</p> </li> <li><p>There is also the question of the optimal level of these values. For example, would we like an agent that is maximally honest, welcoming and sharing toward anybody? (An AI at your house which welcomes thieves, tells them whatever they ask, and shares everything?) </p> </li> </ol> <p>And last but not least: if we will have many AI agents, then some kind of selfishness and laziness could help, for example to prevent the creation of a singleton or a fanatical mob of such agents. In the evolution of humankind, selfishness and laziness may have helped human groups to survive. And a lazy <a href="http://wiki.lesswrong.com/wiki/Paperclip_maximizer">paperclip maximizer</a> could save humankind. </p> <p>We need a good mathematical model of laziness, selfishness, openness, brotherhood, friendship, etc. We have hard philosophical tasks with a deadline. (The singularity is coming, and the &quot;dead&quot; in the word deadline could be very real.)</p> liso FkbWpf27upMnbrfgk 2015-08-04T04:37:42.936Z
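<p>As one purely illustrative starting point for such a model (a toy sketch, not a proposal from the book or the original comment; all names and weights below are hypothetical): score an action by task utility minus a &quot;laziness&quot; cost for effort, and let the agent stop searching once improvements no longer pay for the extra effort.</p> <pre><code># Toy "lazy satisficer": maximize task utility minus an effort penalty,
# and stop when the marginal gain no longer covers the marginal effort.
def lazy_choice(options, task_utility, effort, laziness=1.0, threshold=0.0):
    """options: candidate actions, ordered cheapest effort first.
    task_utility, effort: functions from an action to a number.
    laziness: weight of the effort penalty; threshold: minimal net gain to keep searching."""
    best, best_score = None, float("-inf")
    for action in options:
        score = task_utility(action) - laziness * effort(action)
        if best is not None and score - best_score <= threshold:
            break  # further effort is not worth it -- a crude stand-in for laziness
        if score > best_score:
            best, best_score = action, score
    return best

# Hypothetical numbers: a lazy paperclip maximizer settles for a modest output.
actions = ["make 10 paperclips", "run the factory harder", "convert the planet to paperclips"]
utility = {"make 10 paperclips": 1.0, "run the factory harder": 1.2, "convert the planet to paperclips": 1.3}
cost = {"make 10 paperclips": 0.1, "run the factory harder": 0.6, "convert the planet to paperclips": 1000.0}
print(lazy_choice(actions, utility.get, cost.get, laziness=1.0))  # -> "make 10 paperclips"
</code></pre>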
Comment by Liso on Oracle AI: Human beliefs vs human values https://www.lesswrong.com/posts/b223mLTZNDFExf3Qp/oracle-ai-human-beliefs-vs-human-values#3pZ28K5Y4RnRn9bRL <p>Stuart, is it really your implicit axiom that human values are static and fixed? </p> <p>(Were they fixed historically? Is humankind mature now? Is humankind homogeneous in its values?) </p> liso 3pZ28K5Y4RnRn9bRL 2015-07-24T04:26:20.017Z Comment by Liso on Oracle AI: Human beliefs vs human values https://www.lesswrong.com/posts/b223mLTZNDFExf3Qp/oracle-ai-human-beliefs-vs-human-values#QDTF89HyPCyKm28uq <blockquote> <p>more of a question of whether values are stable. </p> </blockquote> <p>Or a question of whether human values are (objective and) independent of humans (as subjects who could develop).</p> <p>Or a question of whether we are brave enough to ask questions whose answers could change us. </p> <p>Or (for example) a question of whether it is necessarily good for us to ask questions whose answers will give us more freedom. </p> liso QDTF89HyPCyKm28uq 2015-07-22T19:57:42.403Z Comment by Liso on I need a protocol for dangerous or disconcerting ideas. https://www.lesswrong.com/posts/PbuTG9caJ82Pky5aD/i-need-a-protocol-for-dangerous-or-disconcerting-ideas#NGRzwascM8tnwg9YZ <p>I am not an expert, and it has to be based on facts about your nervous system. So you could start with several experiments (blood tests etc.). You could change your diet, sleep more, etc. </p> <p>About rationality and LessWrong -&gt; could you focus your fears on one thing? For example, forget the quantum world and focus on superintelligence? I mean, could you utilize the power you have in your brain? </p> liso NGRzwascM8tnwg9YZ 2015-07-17T05:40:30.706Z Comment by Liso on I need a protocol for dangerous or disconcerting ideas. https://www.lesswrong.com/posts/PbuTG9caJ82Pky5aD/i-need-a-protocol-for-dangerous-or-disconcerting-ideas#HRoR5Q5X7SxLJjgKA <p>You are talking about rationality and about fear. Your protocol could have several independent layers. You seem to think that your ideas produce your fear, but it could also be the opposite: your fear could produce your ideas (and it is very probable that fear has an impact on your ideas, at least on their content). So you could analyze the rational questions on LessWrong and, independently, work on the irrational part (the fear etc.) with therapists. There could be physical or chemical reasons why you worry more than other people. Your protocol for dangerous ideas needs not only a way to discuss them but also a way to handle your emotional responses. If you want to sleep well, that may depend more on your emotional stability than on rational knowledge. </p> liso HRoR5Q5X7SxLJjgKA 2015-07-13T04:43:13.251Z Comment by Liso on Superintelligence 26: Science and technology strategy https://www.lesswrong.com/posts/kADkXCAq6aBBxSyqE/superintelligence-26-science-and-technology-strategy#H2zdRCtmyXvcxLyig <p>Jared Diamond wrote that North America had no good animals for domestication (sorry, I don't remember in which book). That could be a showstopper for using the wheel on a large scale.</p> liso H2zdRCtmyXvcxLyig 2015-03-12T08:56:51.460Z Comment by Liso on Superintelligence 23: Coherent extrapolated volition https://www.lesswrong.com/posts/EQFfj5eC5mqBMxF2s/superintelligence-23-coherent-extrapolated-volition#ShM7ssBpZDoTjKWcN <p>@Nozick: we are plugged into a machine (the Internet) and into virtual realities (movies, games). Do we think that is wrong? Probably it is a question of the level of connection to reality? </p> <p>@Häggström: there is a contradiction in the definition of what is better: F1 is better than F because it has more to strive for, and F2 is better than F1 because it has less to strive for. </p> <p>@CEV: time is only one dimension in the space of conditions which could affect our decisions. Human cultures choose cannibalism in some situations. An SAI could see several possible future decisions depending on the surroundings, and we have to think very carefully about which conditions are acceptable and which are not. Or we could end up choosing what we would choose in some special scene prepared for humanity by the SAI. </p> liso ShM7ssBpZDoTjKWcN 2015-02-19T05:46:39.391Z Comment by Liso on Superintelligence 13: Capability control methods https://www.lesswrong.com/posts/398Swu6jmczzSRvHy/superintelligence-13-capability-control-methods#2a3fdmkXnebkz3Lju <p>This could be a bad mix -&gt;</p> <p>Our action: 1a) Channel manipulation: other sound, other image, other data &amp; Taboo for AI: lying.</p> <p>This taboo, &quot;structured programming languages&quot;, could be impossible, because understanding and analysing structure is probably an integral part of general intelligence. </p> <p>She could not reprogram herself in a lower-level programming language, but she could emulate and improve herself in her &quot;memory&quot;. (She might have no access to her code segment but could still create a stronger intelligence in her data segment.) </p> liso 2a3fdmkXnebkz3Lju 2014-12-10T04:16:50.476Z
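<p>A minimal sketch of the &quot;stronger intelligence in the data segment&quot; point (a toy illustration, not anything from the book): even if a program cannot modify its own source, it can assemble new behaviour as plain data - for example a string - and run it through an interpreter it already contains.</p> <pre><code># Toy illustration: the "code segment" of this script stays fixed, but it
# builds an improved routine purely as data and then executes it.
fixed_solver_source = """
def solve(n):
    # naive: sum 1..n by looping
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
"""

improved_solver_source = """
def solve(n):
    # improved: closed-form formula, constructed at runtime and stored only as data
    return n * (n + 1) // 2
"""

def load(source):
    namespace = {}
    exec(source, namespace)  # interpreting data as behaviour
    return namespace["solve"]

print(load(fixed_solver_source)(10), load(improved_solver_source)(10))  # 55 55
</code></pre>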
Comment by Liso on Superintelligence 13: Capability control methods https://www.lesswrong.com/posts/398Swu6jmczzSRvHy/superintelligence-13-capability-control-methods#AoxvxoXjToDnKhD3s <p>Is &quot;transcendence&quot; a third possibility? I mean, if we realize that human values are not the best, and we retire and give up control.</p> <p>(I am not sure whether that is a motivation selection path - the difference is subtle.)</p> <p>BTW, if you are thinking about partnership - are you thinking about how to control your partner? </p> liso AoxvxoXjToDnKhD3s 2014-12-09T21:40:32.125Z Comment by Liso on Superintelligence 12: Malignant failure modes https://www.lesswrong.com/posts/BqoE5vhPNCB7X6Say/superintelligence-12-malignant-failure-modes#zNpnfysyjxSb4rzCz <p>Sorry for a question outside this particular topic. </p> <p>When we started to discuss, I liked and proposed the idea of making a wiki page with the results of our discussion. Do you think we have any ideas worth collecting on a collaborative wiki page? </p> <p>I think we have at least one - paulfchristiano's &quot;cheated evolution&quot;: <a href="http://lesswrong.com/r/discussion/lw/l10/superintelligence_reading_group_3_ai_and_uploads/bea7">http://lesswrong.com/r/discussion/lw/l10/superintelligence_reading_group_3_ai_and_uploads/bea7</a> </p> <p>Could you add more? </p> liso zNpnfysyjxSb4rzCz 2014-12-02T06:42:27.372Z Comment by Liso on Superintelligence 11: The treacherous turn https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn#4jFQdo4LycL3hcbbz <blockquote> <p>It seems that the unfriendly AI is in a slightly unfavourable position. First, it has to preserve the information content of its utility function or other value representation, in addition to the information content possessed by the friendly AI.</p> </blockquote> <p>There are two sorts of unsafe AI: one which cares and one which doesn't care. </p> <p>The ignorant one is fastest - it only calculates the answer and doesn't care about anything else.</p> <p>Friend and enemy both have to analyse additional things...</p> liso 4jFQdo4LycL3hcbbz 2014-11-27T06:25:27.973Z Comment by Liso on Superintelligence 11: The treacherous turn https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn#FCksrH9XReSLgTWzH <blockquote> <p>The other question is: what happens once you know the AI has desire for subversion?</p> </blockquote> <p>There are plenty of people cooperating with and supporting dictators and mafias for selfish reasons. We could expect the same in this area. </p> <p>&quot;I will support a dangerous AI because I will be richer, more powerful or safer then.&quot;</p> liso FCksrH9XReSLgTWzH 2014-11-27T05:49:07.026Z Comment by Liso on Superintelligence 11: The treacherous turn https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn#hSt6tKMk6swq8ft7C <p>I don't understand the downvote. Could it be explained, please? </p> <p>Is it because his position equals what he declares and not what he really thinks?</p> <p>Or because Bostrom's mimicry (= preferring, for some reasons, a negative, catastrophic and alarmist position) is not assumed?</p> liso hSt6tKMk6swq8ft7C 2014-11-27T05:43:09.778Z Comment by Liso on Superintelligence 11: The treacherous turn https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn#8FgboqfhurNbQFt3C <blockquote> <ul> <li>Expose it to limited data about the world, or no data about the real world</li> <li>Provide it little or no information about its operators or the society they live in</li> <li>Provide it little or no information about its own hardware or software architecture</li> </ul> </blockquote> <p>When I read &quot;Oracle AI&quot; I realized one possibility which is probably still overlooked: </p> <p>&quot;Dreams&quot;</p> <p>Erasing data on physical media is not perfect. A zero bit which was previously nonzero could be read more slowly or more quickly. That could allow an SAI to see shadows of past data, which could lead to a phenomenon similar to dreams. </p> liso 8FgboqfhurNbQFt3C 2014-11-27T05:28:43.540Z Comment by Liso on Superintelligence 11: The treacherous turn https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn#ZtcbaGcgNEv2ukS9d <p>I am afraid that we have not precisely defined the term &quot;goal&quot;. And I think we need to.</p> <p>I am trying to analyse this term. </p> <p>Do you think that today's computers have goals? I don't think so (but probably we have different understandings of this term). Are they useless? Do cars have goals? Are they without action and reaction?
</p> <p>Probably I could more precisely describe my idea in other way: In Bostrom's book there are goals and subgoals. Goals are utimate, petrified and strengthened, subgoals are particular, flexible and temporary. </p> <p>Could we think AI without goals but with subgoals? </p> <p>One posibility could be if they will have &quot;goal centre&quot; externalized in human brain. </p> <p>Could we think AI as tabula rasa, pure void in the begining after creation? Or AI could not exists without hardwired goals? </p> <p>If they could be void - will be goal imprinted with first task? </p> <p>Or with first task with word &quot;please&quot;? :)</p> <hr /> <p>About utility maximizer - human (or animal brain is not useless if it not grow without limit. And there is some tradeoff between gain and energy comsumption. </p> <p>We have or could to think balanced processes. One dimensional, one directional, unbalanced utility function seems to have default outcome doom. But are the only choice? </p> <p>How did that nature? (I am not talking about evolution but about DNA encoding) </p> <p>Balance between &quot;intelligent&quot; neural tissues (SAI) and &quot;stupid&quot; non-neural (humanity). :)</p> <p>Probably we have to see difference between purpose and B-goal (goal in Bostrom's understanding). </p> <p>If machine has to solve arithmetic equation it has to solve it and not destroy 7 planet to do it most perfect. </p> <hr /> <p>I have feeling that if you say &quot;do it&quot; Bostrom's AI hear &quot;do it maximally perfect&quot;. </p> <p>If you tell: &quot;tell me how much is 2+2 (and do not destroy anything)&quot; then she will destroy planet to be sure that nobody could stop her to answer how much is 2+2. </p> <p>I am feeling that Bostrom is thinking that there is implicitly void AI in the begining and in next step there is AI with ultimate unchangeable goal. I am not sure if it is plausible. And I think that we need good definition or understanding about goal to know if it is plausible.</p> liso ZtcbaGcgNEv2ukS9d 2014-11-27T00:48:40.505Z Comment by Liso on Superintelligence 11: The treacherous turn https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn#EkPsNMBsD52mqs4y9 <p>Could AI be without any goals?</p> <p>Would that AI be dangerous in default doom way?</p> <p>Could we create AI which wont be utility maximizer?</p> <p>Would that AI need maximize resources for self?</p> liso EkPsNMBsD52mqs4y9 2014-11-26T19:10:34.832Z Comment by Liso on Superintelligence 11: The treacherous turn https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn#ajtALnmg9xxsbyw8r <p>Positive emotions are useful too. :)</p> liso ajtALnmg9xxsbyw8r 2014-11-26T10:28:24.783Z Comment by Liso on Superintelligence 10: Instrumentally convergent goals https://www.lesswrong.com/posts/BD6G9wzRRt3fxckNC/superintelligence-10-instrumentally-convergent-goals#zLnM9LDKxSqTBjJff <p>I think that if SAIs will have social part we need to think altruisticaly about them. </p> <p>It could be wrong (and dangerous too) think that they will be just slaves. </p> <p>We need to start thinking positively about our children. 
Comment by Liso on Superintelligence 11: The treacherous turn https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn#EkPsNMBsD52mqs4y9 <p>Could an AI be without any goals?</p> <p>Would that AI be dangerous in the default-doom way?</p> <p>Could we create an AI which won't be a utility maximizer?</p> <p>Would that AI need to maximize resources for itself?</p> liso EkPsNMBsD52mqs4y9 2014-11-26T19:10:34.832Z Comment by Liso on Superintelligence 11: The treacherous turn https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn#ajtALnmg9xxsbyw8r <p>Positive emotions are useful too. :)</p> liso ajtALnmg9xxsbyw8r 2014-11-26T10:28:24.783Z Comment by Liso on Superintelligence 10: Instrumentally convergent goals https://www.lesswrong.com/posts/BD6G9wzRRt3fxckNC/superintelligence-10-instrumentally-convergent-goals#zLnM9LDKxSqTBjJff <p>I think that if SAIs have a social part, we need to think altruistically about them. </p> <p>It could be wrong (and dangerous too) to think that they will be just slaves. </p> <p>We need to start thinking positively about our children. :)</p> liso zLnM9LDKxSqTBjJff 2014-11-26T06:37:45.900Z Comment by Liso on Superintelligence 10: Instrumentally convergent goals https://www.lesswrong.com/posts/BD6G9wzRRt3fxckNC/superintelligence-10-instrumentally-convergent-goals#rFYKsRePN96xroJC7 <p>Just a little idea:</p> <p>In one advertisement I saw an interesting pyramid with these levels (from top to bottom): vision -&gt; mission -&gt; goals -&gt; strategy -&gt; tactics -&gt; daily planning. </p> <p>I think if we want to analyse cooperation between an SAI and humanity, then we need interdisciplinary (philosophy, psychology, mathematics, computer science, ...) work on the (vision -&gt; mission -&gt; goals) part. (If humanity defines the vision and mission and the SAI derives the goals, that could be good.)</p> <p>I am afraid that humanity has not properly defined or analysed either its vision or its mission. And different groups and individuals have rather contradictory visions, missions and goals. </p> <p>One big problem with SAI is not the SAI itself but that we will have BIG POWER and we still don't know what we really want (and what we really want to want).</p> <p>Bostrom's book seems to assume a paradigm in which a goal is something on top, rigid and stable, and could not be dynamic and flexible like a vision. It could be true that one stupidly defined goal (a paperclipper) would be unchangeable and ultimate. But we probably have more possibilities for defining an SAI's personality. </p> liso rFYKsRePN96xroJC7 2014-11-26T06:34:11.442Z Comment by Liso on Meetup : Bratislava https://www.lesswrong.com/posts/FbGAHaJrtmCsScYQi/meetup-bratislava-0#gMD7w4vd5uiw8Xqzy <p>This is what I meant: <a href="https://neurokernel.github.io/faq.html">https://neurokernel.github.io/faq.html</a></p> <p>But it is probably a bit more unfinished than I expected. </p> <p>More sources of info: <a href="http://www.cell.com/current-biology/abstract/S0960-9822%2810%2901522-8">http://www.cell.com/current-biology/abstract/S0960-9822%2810%2901522-8</a> <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3704784/">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3704784/</a> <a href="http://www.flycircuit.tw/">http://www.flycircuit.tw/</a> <a href="https://en.wikipedia.org/wiki/Drosophila_connectome">https://en.wikipedia.org/wiki/Drosophila_connectome</a></p> liso gMD7w4vd5uiw8Xqzy 2014-11-11T21:22:59.833Z Comment by Liso on Superintelligence 8: Cognitive superpowers https://www.lesswrong.com/posts/C9KJ8BrLqfsBuHgpB/superintelligence-8-cognitive-superpowers#QtEL8eqNcETGDWT3Z <p>I am suggesting that the metastasis method of growth could be good for the first multicellular organisms, but unstable, not very successful in evolution, and probably rejected by every superintelligence as malign. </p> liso QtEL8eqNcETGDWT3Z 2014-11-10T09:50:58.189Z Comment by Liso on Superintelligence 8: Cognitive superpowers https://www.lesswrong.com/posts/C9KJ8BrLqfsBuHgpB/superintelligence-8-cognitive-superpowers#uu96oJNopeboFDwbB <p>One mode could have the goal of being something like the graphite moderator in a nuclear reactor: to prevent an unmanaged explosion.</p> <p>At this moment I just wanted to improve our view of the probability of there being only one SI in the starting period. </p> liso uu96oJNopeboFDwbB 2014-11-10T09:26:08.452Z Comment by Liso on Superintelligence 8: Cognitive superpowers https://www.lesswrong.com/posts/C9KJ8BrLqfsBuHgpB/superintelligence-8-cognitive-superpowers#85DSy456HzTMfxq57 <p>Think prisoner's dilemma! </p> <p>What would aliens do? </p> <p>Is a selfish (self-centered) reaction really the best possibility? </p> <p>What will the superintelligence which the aliens construct do?
</p> <p>(No dispute that human history is brutal and selfish.)</p> liso 85DSy456HzTMfxq57 2014-11-07T21:47:55.133Z Comment by Liso on Superintelligence 8: Cognitive superpowers https://www.lesswrong.com/posts/C9KJ8BrLqfsBuHgpB/superintelligence-8-cognitive-superpowers#JDTXFrCCFeGaX2EPH <blockquote> <p>Let us try to free our mind from associating AGIs with machines.</p> </blockquote> <p>Very good! </p> <p>But be honest! Aren't we (sometimes?) more machines which serve our genes/instincts than spiritual beings with free will?</p> liso JDTXFrCCFeGaX2EPH 2014-11-07T21:17:50.094Z Comment by Liso on Superintelligence 8: Cognitive superpowers https://www.lesswrong.com/posts/C9KJ8BrLqfsBuHgpB/superintelligence-8-cognitive-superpowers#QCEJrX4HEv8aMPP7A <p>When I was thinking about past discussions, I realized something like: </p> <p>(selfish) gene -&gt; meme -&gt; goal. </p> <p>When Bostrom thinks about the probability of a singleton, I am afraid he overlooks the possibility of running several 'personalities' on one substrate. (We could suppose several teams have the possibility to run their projects on one piece of hardware, just as several teams can use the Hubble telescope to observe different objects.)</p> <p>And not only a possibility but probably also a necessity. </p> <p>If we want to prevent a destructive goal from being realized (and destroying our world), then we have to think about multipolarity. </p> <p>We need to analyze how slightly different goals could control each other. </p> liso QCEJrX4HEv8aMPP7A 2014-11-07T21:05:05.028Z Comment by Liso on Superintelligence 8: Cognitive superpowers https://www.lesswrong.com/posts/C9KJ8BrLqfsBuHgpB/superintelligence-8-cognitive-superpowers#hhMc5vjDee5ArHDBQ <p>A moral, humour and spiritual analyzer/emulator. I would like to know more about these phenomena. </p> liso hhMc5vjDee5ArHDBQ 2014-11-07T20:36:55.527Z Comment by Liso on Superintelligence 8: Cognitive superpowers https://www.lesswrong.com/posts/C9KJ8BrLqfsBuHgpB/superintelligence-8-cognitive-superpowers#j2McbpH6mXesFK5FM <p>When we <a href="http://lesswrong.com/lw/l0o/superintelligence_reading_group_2_forecasting_ai/bdoa?context=3">discussed evil AI</a> I was thinking (and still count it as plausible) about the possibility that self-destruction might not be an evil act - that the Fermi paradox could be explained as a natural law, the best moral answer for a superintelligence at some level. </p> <p>Now I am thankful, because your comment enlarges the possibilities for thinking about Fermi. </p> <p>We need not think only of self-destruction - we could think of modesty and self-sustainability. </p> <p>Sauron's ring could be superpowerful, but clever Gandalf could (and did!) resist the offer to use it (and used another ring to destroy the strongest one). </p> <p>We could imagine hidden places (like Lothlorien or Rivendell) in the universe where clever owners use limited but nondestructive powers. </p> liso j2McbpH6mXesFK5FM 2014-11-07T20:34:07.406Z Comment by Liso on Superintelligence 7: Decisive strategic advantage https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/superintelligence-7-decisive-strategic-advantage#Dpya3jev4PvAAcMCJ <p>The market is more or less stabilized. There are powers and superpowers in some balance. (Gaining money can sometimes be an illusion, like betting (and winning) more and more in a casino.) </p> <p>If you are thinking about money making, you have to count the sum of all money in society - whether investment means a bigger sum of values, or just exchange in economic wars, or just inflation. (If foxes invest more in hunting and eat more rabbits, there could be more foxes, right?
:)</p> <p>In the AI sector there is a much higher probability of a phase transition (= explosion). I think that's the difference. </p> <p>How? </p> <ol> <li><p>Possibility: there could already be enough hardware, and we are just waiting for the spark of a new algorithm. </p> </li> <li><p>Possibility: if we count the agricultural revolution as an explosion, we could also count a massive change in productivity from AI (which is probably obvious). </p> </li> </ol> liso Dpya3jev4PvAAcMCJ 2014-11-01T11:52:14.171Z Comment by Liso on Superintelligence 7: Decisive strategic advantage https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/superintelligence-7-decisive-strategic-advantage#zRQX9q9fpkRH5gAZp <blockquote> <p>Well, no life form has achieved what Bostrom calls a decisive strategic advantage. Instead, they live their separate lives in various environmental niches.</p> </blockquote> <p>Ants are probably a good example of how organisational intelligence (?) can be an advantage. </p> <p>According to <a href="https://en.wikipedia.org/wiki/Ant">wiki</a>, ''Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass.'' See also <a href="http://answers.google.com/answers/threadview?id=536123">google answer</a>, <a href="http://en.wikipedia.org/wiki/Biomass_%28ecology%29#Global_biomass">wiki table</a> or <a href="http://skeptics.stackexchange.com/questions/7602/is-the-total-biomass-of-ants-roughly-equal-to-the-total-biomass-of-humans">stackexchange</a>. </p> <p>Although we have to think carefully - apex predators do not usually form a large biomass, so it could be more complicated to define the success of a life form. </p> <p>The problem for humanity is not only a global replacer - something which erases all other lifeforms. It would be enough to replace us in our niche - something which globally (from life's viewpoint) means nothing. </p> <p>And we don't need to be totally erased to meet a huge disaster. A population decline to several millions or thousands... (pets or AI) ... is also unwanted. </p> <p>We are afraid not of a decisive strategic advantage over ants, but over humans. </p> liso zRQX9q9fpkRH5gAZp 2014-11-01T11:22:12.539Z Comment by Liso on Superintelligence 7: Decisive strategic advantage https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/superintelligence-7-decisive-strategic-advantage#6ySaGcdsPdwyH2m4J <blockquote> <p> It seems to again come down to the possibility of a rapid and unexpected jump in capabilities.</p> </blockquote> <p>We could test it in a thought experiment.</p> <p>A chess game: a human grandmaster against an AI. </p> <ol> <li><p>It is <strong>not rapid</strong> (no checkmate at the beginning).<br />We could also suppose one move per year to slow it down. That brings the AI a further advantage, because of its ability to concentrate for such a long time. </p> </li> <li><p>Capabilities:<br />a) intellectual capabilities we could suppose stay at the <strong>same level</strong> during the game (if it is played in one day; otherwise we have to think about Moore's law)<br />b) the human loses (step by step) positional and material capabilities during the game. And this is <strong>expected</strong>.</p> </li> </ol> <p>Could we still talk about a decisive advantage if it is not rapid and not unexpected? I think so. At least if we don't break the rules.
</p> liso 6ySaGcdsPdwyH2m4J 2014-11-01T08:15:39.517Z Comment by Liso on Superintelligence 7: Decisive strategic advantage https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/superintelligence-7-decisive-strategic-advantage#WQyCtZbHug3kmLL8B <p>One possibility for preventing a smaller group from gaining a strategic advantage is something like <a href="https://en.wikipedia.org/wiki/Operation_Opera">Operation Opera</a>. </p> <p>And that was only about nukes (see the <a href="http://www.cnet.com/news/elon-musk-artificial-intelligence-could-be-more-dangerous-than-nukes/">Elon Musk statement</a>)...</p> liso WQyCtZbHug3kmLL8B 2014-11-01T07:27:44.156Z Comment by Liso on Superintelligence 6: Intelligence explosion kinetics https://www.lesswrong.com/posts/GT8uvxBjidrmM3MCv/superintelligence-6-intelligence-explosion-kinetics#vgopSabuCNMY3B4dM <p>Lemma 1: A superintelligence could be slow. (Imagine, for example, an IQ test between Earth and Mars, where the delay between question and answer is about half an hour. Or imagine a big clever tortoise which can understand only one sentence per hour but can then solve the Riemann hypothesis.)</p> <p>Lemma 2: A human organization could rise quickly. (It is imaginable that billions join an organization within several hours.)</p> <p>The next theorem is obvious :)</p> liso vgopSabuCNMY3B4dM 2014-10-21T21:47:42.194Z Comment by Liso on Superintelligence 6: Intelligence explosion kinetics https://www.lesswrong.com/posts/GT8uvxBjidrmM3MCv/superintelligence-6-intelligence-explosion-kinetics#8kv7xC4Ld5pjXXQwL <p>This is similar to the question about a 10x quicker mind and economic growth. I think there are some natural processes which are hard to &quot;cheat&quot;. </p> <p>One woman can give birth in 9 months, but two women cannot do it in 4.5 months. Twice as much money for the education process is more likely to give 2*N graduates after X years than N graduates after X/2 years. </p> <p>Some parts of science acceleration have to wait years for new scientists. And 2 times more scientists does not mean 2 times more discoveries. Etc. </p> <p>But 1.5x more discoveries could also bring 10x bigger profit!</p> <p>We cannot assume only linear dependencies in such complex problems. </p> liso 8kv7xC4Ld5pjXXQwL 2014-10-21T21:21:32.227Z
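<p>A minimal numeric sketch of that non-linearity (the exponent and headcounts are invented purely for illustration, not empirical estimates): if discoveries grow sublinearly with the number of scientists, doubling the input gives much less than double the output.</p> <pre><code># Toy illustration of diminishing returns: output grows slower than input.
# The exponent 0.6 is made up for illustration only.
def discoveries(scientists, exponent=0.6):
    return scientists ** exponent

baseline = discoveries(1000)
doubled = discoveries(2000)
print(f"2x scientists -> {doubled / baseline:.2f}x discoveries")  # about 1.52x, not 2x
</code></pre>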
Comment by Liso on Superintelligence 5: Forms of Superintelligence https://www.lesswrong.com/posts/semvkn56ZFcXBNc2d/superintelligence-5-forms-of-superintelligence#8hsZ3KtHR3uhTP8ZW <p>Difficult question. Do you mean also ten times faster to burn out? 10x more time to rest? Or, because it is a simulation, no rest, just a reboot? </p> <p>Or a permanent reboot to a drug-boosted level of brain emulation on a ten times quicker substrate? (I am afraid of a drugged society here.)</p> <p>And I am also afraid that a ten times quicker farmer could not have ten summers per year. :) So economic growth could be limited by some bottlenecks. Probably not much faster. </p> <p>What about ten times faster philosophical growth? </p> liso 8hsZ3KtHR3uhTP8ZW 2014-10-15T05:19:41.910Z Comment by Liso on Superintelligence 5: Forms of Superintelligence https://www.lesswrong.com/posts/semvkn56ZFcXBNc2d/superintelligence-5-forms-of-superintelligence#Yj5PbFC4dFkksfWRu <blockquote> <p> Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.</p> </blockquote> <p>We must not underestimate slow superintelligences. Our judiciary is also slow, so some of the acts we could take are very slow. </p> <p>Humanity could be overtaken also by a slow (and alien) superintelligence. </p> <p>It does not matter whether you would quickly see that it is going the wrong way. You could still slowly lose, step by step, your rights and your power to act... (like slowly losing pieces in a chess game).</p> <p>If strong entities in our world will be (are?) driven by poorly designed goals - for example &quot;maximize profit&quot; - then they could really be very dangerous to humanity. </p> <p>I really don't want to spoil our discussion with politics; rather, I would like to see a rational discussion about all existential threats which could arise from superintelligent beings/entities. </p> <p>We must not underestimate any form, and not underestimate any method, of our possible doom. </p> <p>With big data coming, our society is more and more ruled by algorithms. And the algorithms are smarter and smarter. </p> <p>Algorithms are not independent of the entities which have enough money or enough political power to use them. </p> <p>BTW, Bostrom wrote (sorry, not in a chapter we have discussed yet) about possible perverse instantiation which could happen due to a goal not well designed by a <strong>programmer</strong>. I am afraid that in our society it will be a <strong>manager</strong> or <strong>politician</strong> who will design (or is designing) the goal. (We have to find a way for there to also be a <strong>philosopher</strong> and a <strong>mathematician</strong>.)</p> <p>In my opinion the first (if not singleton) superintelligence will be (or is) most probably a '<strong>mixed form</strong>': some group of well-organized people (don't forget the <strong>lawyers</strong>) with a big database and a supercomputer. </p> <p>The next stages after an intelligence explosion could have any other form. </p> liso Yj5PbFC4dFkksfWRu 2014-10-15T05:04:34.731Z Comment by Liso on SRG 4: Biological Cognition, BCIs, Organizations https://www.lesswrong.com/posts/SEtkbLgusgtQ9dAzX/srg-4-biological-cognition-bcis-organizations#ofeEcqBsArCSndmES <p>This probably needs more explanation. You could say that my reaction is not in the appropriate place; that is probably true. A BCI we could define as a physical interconnection between brain and computer. </p> <p>But I think at this moment we could (and should) also analyse trained &quot;horses&quot; with trained &quot;riders&quot;, and also trained &quot;pairs&quot; (or groups?).</p> <p>A better interface between computer and human could also be achieved in a noninvasive way = a better visual+sound+touch interface (the horse-human analogy).</p> <p>So <strong>yes = I expect they could be substantially useful</strong> even if a direct physical interface turns out to be too difficult in the next decade(s). </p> liso ofeEcqBsArCSndmES 2014-10-10T03:57:02.248Z Comment by Liso on SRG 4: Biological Cognition, BCIs, Organizations https://www.lesswrong.com/posts/SEtkbLgusgtQ9dAzX/srg-4-biological-cognition-bcis-organizations#WvRbu4j4xMKgmyxy8 <p>This is also one of the points where I don't agree with Bostrom's (fantastic!) book. </p> <p>We could use an analogy from history: <strong>human-animal</strong> = soldier + horse did not need a physical interface (like in the Avatar movie) and still provided an awesome military advantage.</p> <p>Something similar we could get from better weak-AI tools (probably with a better GUI - but it is not only about the GUI).</p> <p>&quot;Tools&quot; don't need to have big general intelligence.
They could be at the horse level: </p> <ul> <li>their incredible power to analyse big structures (a big memory buffer); </li> <li>the speed of the &quot;rider&quot;, using quick &quot;computation&quot; with the &quot;tether&quot; in your hands.</li> </ul> liso WvRbu4j4xMKgmyxy8 2014-10-10T03:43:47.789Z Comment by Liso on Superintelligence Reading Group 2: Forecasting AI https://www.lesswrong.com/posts/56b8n8FT6fksnDZwY/superintelligence-reading-group-2-forecasting-ai#9oy5v5Cgy79tQDCSW <p>What we have in history is hackable minds which were misused to carry out the Holocaust. Probably this could be one way to improve writings about AI danger. </p> <p>But to answer question 1) - it is too wide a topic! (Social hackability is only one possible path to an AI superpower takeoff.)</p> <p>For example, things still missing (and probably remaining missing) from the book: </p> <p>a) How to prepare psychological training for human-AI communication (or for reading this book :P )</p> <p>b) The impact of AI on religion</p> <p>etc. </p> liso 9oy5v5Cgy79tQDCSW 2014-09-25T03:58:03.273Z Comment by Liso on Superintelligence Reading Group 2: Forecasting AI https://www.lesswrong.com/posts/56b8n8FT6fksnDZwY/superintelligence-reading-group-2-forecasting-ai#HxRBm5Yb8mk9sCvJR <p>But why did the evil AI collapse after the apocalypse? </p> liso HxRBm5Yb8mk9sCvJR 2014-09-25T03:47:51.166Z Comment by Liso on Superintelligence Reading Group 2: Forecasting AI https://www.lesswrong.com/posts/56b8n8FT6fksnDZwY/superintelligence-reading-group-2-forecasting-ai#iCzCoW5cEdcruJb7q <p>Katja, please interconnect the discussion parts with links (or something like a <a href="http://en.wikipedia.org/wiki/Help:Section#Table_of_contents_.28TOC.29">TOC</a> )</p> liso iCzCoW5cEdcruJb7q 2014-09-25T03:45:07.125Z Comment by Liso on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities https://www.lesswrong.com/posts/mmZ2PaRo86pDXu8ii/superintelligence-reading-group-section-1-past-developments#sMKtyAiXGSCoc6frH <p>Have you played this type of game? </p> <p>[pollid:777]</p> <p>I think that if you play on a big map (freeciv supports really huge ones), then your goals (<strong>like in the real world</strong>) could be better fulfilled if you play <strong>WITH (not against) the AI</strong>. For example, managing 5 thousand engineers manually could take several hours per turn. </p> <p>You could also explore more concepts in this game (for example geometric growth, the metastasis method of spreading a civilisation, and certainly cooperation with some type of AI)... </p> liso sMKtyAiXGSCoc6frH 2014-09-23T04:52:19.105Z Comment by Liso on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities https://www.lesswrong.com/posts/mmZ2PaRo86pDXu8ii/superintelligence-reading-group-section-1-past-developments#5QbjEEY2PEYqP53Cs <p>This is a good point, which I would like to have analysed more precisely. (And I miss a deeper analysis in The Book :) )</p> <p>Could we count the will (motivation) of today's superpowers = megacorporations as human or not? (And at what level could they control the economy?)</p> <p>In other words: is Searle's <a href="https://en.wikipedia.org/wiki/Chinese_room">Chinese room</a> intelligent? (In the definition which The Book uses for (super)intelligence.)</p> <p>And if it is, is it a human or an alien mind? </p> <p>And could it be superintelligent? </p> <p>What arguments could we use to prove that none of today's corporations (or states or their secret services) is superintelligent? Think of collective intelligence with computer interfaces! Are they really slow at thinking?
How could we measure their IQ? </p> <p>And could we humans (who?) control them (how?) if they are superintelligent? Could we at least try to implement some moral thinking (or other human values) in their minds? How? </p> <p>Law? Is law enough to prevent a superintelligent superpower from doing wrong things? (For example, destroying the rain forest because it wants to make more paperclips?) </p> liso 5QbjEEY2PEYqP53Cs 2014-09-19T04:13:22.026Z Comment by Liso on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities https://www.lesswrong.com/posts/mmZ2PaRo86pDXu8ii/superintelligence-reading-group-section-1-past-developments#HPzT6c7whEokEfWp2 <p><strong>First of all, thanks for the work on this discussion! :)</strong></p> <p>My proposals:</p> <ul> <li>a wiki page for collaborative work </li> </ul> <p><em>There are some points in the book which could be analysed or described better, and probably some which are wrong. We could find them and help improve them; a wiki could help us do that.</em> </p> <ul> <li>a better time for Europe and the rest of the world?</li> </ul> <p><em>But this is probably not a problem. And if it is a problem, then it is probably not solvable. We will see :)</em></p> liso HPzT6c7whEokEfWp2 2014-09-16T06:13:34.296Z