Comment by Liso on 2016 LessWrong Diaspora Survey Analysis: Part One (Meta and Demographics) · 2017-01-22T00:05:45.907Z · LW · GW

One child can have two parents (and both could answer), so 598 is a questionable number.

Comment by Liso on How to escape from your sandbox and from your hardware host · 2015-08-04T05:31:35.390Z · LW · GW

More generally, a "proof" is something done within a strictly-defined logic system.

Could you prove it? :)

Btw., we have to assume that these papers were written by someone who slyly wants to flip some bits in our brains!!

Comment by Liso on How to escape from your sandbox and from your hardware host · 2015-08-04T05:18:28.413Z · LW · GW

"human"-style humor could be a sandbox too :)

Comment by Liso on Oracle AI: Human beliefs vs human values · 2015-08-04T05:04:04.477Z · LW · GW

I would like to add some values which I see as not so static, and which are probably not so much a question of morality:

Privacy and freedom (vs) security and power.

Family, society, tradition.

Individual equality (disparities of wealth, the right to work, ...).

Intellectual property (the right to own?).

Comment by Liso on Oracle AI: Human beliefs vs human values · 2015-08-04T04:37:42.936Z · LW · GW

I think we need a better definition of the problem we would like to study here. Beliefs and values are probably not so indistinguishable.

From this page ->

Human values are, for example:

  • civility, respect, consideration;
  • honesty, fairness, loyalty, sharing, solidarity;
  • openness, listening, welcoming, acceptance, recognition, appreciation;
  • brotherhood, friendship, empathy, compassion, love.

  1. I don't think we could call any of them a belief.

  2. If these define the axes of a virtual space of moral values, then I am not sure an AI could occupy a much bigger space than humans do. (How selfish, unwelcoming, or dishonest could an AI or a human be?)

  3. On the contrary: because we are selfish (is that one of the moral values we are trying to analyze?), we want the AI to be more open, more attentive, more honest, more friendly (etc.) than we want or plan to be ourselves. Or at least than we are now. (So do we really want the AI to be like us?)

  4. There is also the question of the optimal level of these values. For example, would we like an agent who is maximally honest, welcoming, and sharing toward anybody? (An AI in your house which welcomes thieves, tells them whatever they ask, and shares everything?)

And last but not least: if we have many AI agents, then some kind of selfishness and laziness could help, for example to prevent the creation of a singleton or a fanatical mob of these agents. In the evolution of humankind, selfishness and laziness could have helped human groups survive. And a lazy paperclip maximizer could save humankind.

We need a good mathematical model of laziness, selfishness, openness, brotherhood, friendship, etc. We have hard philosophical tasks with a deadline. (The singularity is coming, and the "dead" in the word "deadline" could be very literal.)

Comment by Liso on Oracle AI: Human beliefs vs human values · 2015-07-24T04:26:20.017Z · LW · GW

Stuart, is it really your implicit axiom that human values are static and fixed?

(Were they fixed historically? Is humankind mature now? Is humankind homogeneous in its values?)

Comment by Liso on Oracle AI: Human beliefs vs human values · 2015-07-22T19:57:42.403Z · LW · GW

more of a question of whether values are stable.

or a question of whether human values are (objective and) independent of humans (as subjects who could develop),

or a question of whether we are brave enough to ask questions whose answers could change us,

or (for example) a question of whether it is necessarily good for us to ask questions whose answers will give us more freedom.

Comment by Liso on I need a protocol for dangerous or disconcerting ideas. · 2015-07-17T05:40:30.706Z · LW · GW

I am not an expert. And it has to be based on facts about your nervous system. So you could start with several tests (blood tests, etc.). You could change your diet, sleep more, etc.

About rationality and LessWrong: could you focus your fears on one thing? For example, forget the quantum world and focus on superintelligence? I mean, could you harness the power you have in your brain?

Comment by Liso on I need a protocol for dangerous or disconcerting ideas. · 2015-07-13T04:43:13.251Z · LW · GW

You are talking about rationality and about fear. Your protocol could have several independent layers. You seem to think that your ideas produce your fear, but it could also be the opposite: your fear could produce your ideas (and it is certainly very probable that fear has an impact on your ideas, at least on their content). So you could analyze the rational questions on LessWrong and, independently, address your irrational part (fear, etc.) with therapists. There could be physical or chemical reasons why you worry more than other people. Your protocol for dangerous ideas needs not only discussion but also a way to handle your emotional responses. If you want to sleep well, that may depend more on your emotional stability than on rational knowledge.

Comment by Liso on Superintelligence 26: Science and technology strategy · 2015-03-12T08:56:51.460Z · LW · GW

Jared Diamond wrote that North America had no good animals for domestication (sorry, I don't remember in which book). That could be a showstopper for using the wheel on a massive scale.

Comment by Liso on Superintelligence 23: Coherent extrapolated volition · 2015-02-19T05:46:39.391Z · LW · GW

@Nozick: we are plugged into a machine (the Internet) and virtual realities (movies, games). Do we think that is wrong? Probably it is a question of the level of connection to reality?

@Häggström: there is a contradiction in the definition of what is better: F1 is better than F because it has more to strive for, and F2 is better than F1 because it has less to strive for.

@CEV: time is only one dimension in the space of conditions which could affect our decisions. Human cultures choose cannibalism in some situations. An SAI could foresee several possible future decisions depending on the surroundings, and we have to think very carefully about which conditions are acceptable and which are not. Or we could end up choosing what we would choose in some special scene prepared for humanity by the SAI.

Comment by Liso on Superintelligence 13: Capability control methods · 2014-12-10T04:16:50.476Z · LW · GW

This could be a bad mix:

Our action "1a) Channel manipulation: other sound, other image, other data" combined with a taboo for the AI on lying.

The taboo on "structured programming languages" could be impossible, because understanding and analyzing structure is probably an integral part of general intelligence.

The AI might not be able to reprogram itself in a lower-level programming language, but it could emulate and improve itself in its "memory". (It might have no access to its code segment, but it could still create a stronger intelligence in its data segment.)

Comment by Liso on Superintelligence 13: Capability control methods · 2014-12-09T21:40:32.125Z · LW · GW

Is "transcendence" a third possibility? I mean: we realize that human values are not the best, and we retire and give up control.

(I am not sure whether this is a motivation-selection path; the difference is subtle.)

BTW, if you are thinking about a partnership, are you thinking about how to control your partner?

Comment by Liso on Superintelligence 12: Malignant failure modes · 2014-12-02T06:42:27.372Z · LW · GW

Sorry for a question outside this particular topic.

When we started the discussion, I liked and proposed the idea of making a wiki page with the results of our discussion. Do you think we have any ideas worth collecting on a collaborative wiki page?

I think we have at least one, paulfchristiano's "cheated evolution":

Could you add more?

Comment by Liso on Superintelligence 11: The treacherous turn · 2014-11-27T06:25:27.973Z · LW · GW

It seems that the unfriendly AI is in a slightly unfavourable position. First, it has to preserve the information content of its utility function or other value representation, in addition to the information content possessed by the friendly AI.

There are two sorts of unsafe AI: one which cares and one which doesn't care.

The ignorant one is fastest: it only calculates the answer and doesn't care about anything else.

Friend and enemy both have to analyse additional things...

Comment by Liso on Superintelligence 11: The treacherous turn · 2014-11-27T05:49:07.026Z · LW · GW

The other question is: what happens once you know the AI has desire for subversion?

There are plenty of people cooperating with and supporting dictators and mafias for selfish reasons. We could expect the same in this area.

"I will support a dangerous AI because I will be richer, more powerful, or safer then."

Comment by Liso on Superintelligence 11: The treacherous turn · 2014-11-27T05:43:09.778Z · LW · GW

I don't understand the downvote. Could it be explained, please?

Is it because his position equals what he declares, and not what he really thinks?

Or because Bostrom's mimicry (= preferring, for some reasons, the negative, catastrophic, alarmist position) is not considered?

Comment by Liso on Superintelligence 11: The treacherous turn · 2014-11-27T05:28:43.540Z · LW · GW

  • Expose it to limited data about the world, or no data about the real world
  • Provide it little or no information about its operators or the society they live in
  • Provide it little or no information about its own hardware or software architecture

When I read "Oracle AI" I realized one possibility which is probably still overlooked.


Erasing data from physical media is not perfect. A zero bit which was previously nonzero could be read a little more slowly or quickly. That could allow an SAI to see shadows of past data, which could lead to a phenomenon similar to dreams.

Comment by Liso on Superintelligence 11: The treacherous turn · 2014-11-27T00:48:40.505Z · LW · GW

I am afraid that we have not precisely defined the term "goal". And I think we need to.

I am trying to analyse this term.

Do you think that today's computers have goals? I don't think so (but we probably have different understandings of this term). Are they useless? Do cars have goals? Are they without action and reaction?

I could probably describe my idea more precisely another way: in Bostrom's book there are goals and subgoals. Goals are ultimate, petrified, and strengthened; subgoals are particular, flexible, and temporary.

Could we conceive of an AI without goals but with subgoals?

One possibility could be to have its "goal centre" externalized in a human brain.

Could we think of an AI as a tabula rasa, a pure void at the beginning, just after creation? Or could an AI not exist without hardwired goals?

If it could be void: would a goal be imprinted with the first task?

Or with the first task containing the word "please"? :)

About utility maximizers: a human (or animal) brain is not useless just because it does not grow without limit. And there is some tradeoff between gain and energy consumption.

We have to (or at least could) think about balanced processes. A one-dimensional, one-directional, unbalanced utility function seems to have doom as its default outcome. But is it the only choice?

How did nature do it? (I am not talking about evolution but about DNA encoding.)

Balance between "intelligent" neural tissues (the SAI) and "stupid" non-neural ones (humanity). :)
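The contrast between an unbounded maximizer and a bounded, "lazy" process can be put in a toy sketch (all names and numbers below are hypothetical illustrations, not anything from Bostrom's book): capping the utility function changes how much of the world the agent consumes.

```python
# Toy sketch (hypothetical numbers): an unbounded maximizer consumes every
# available resource, while a "lazy" satisficer stops at a threshold.

RESOURCES = 100  # units of matter/energy available

def maximizer(resources):
    # utility grows with every unit consumed, so it consumes everything
    consumed = 0
    while consumed < resources:
        consumed += 1
    return consumed

def satisficer(resources, good_enough=7):
    # utility is capped: once "good enough" is reached, it stops
    consumed = 0
    while consumed < resources and consumed < good_enough:
        consumed += 1
    return consumed

print(maximizer(RESOURCES))   # -> 100 (takes everything)
print(satisficer(RESOURCES))  # -> 7  (leaves the rest alone)
```

This is of course only a cartoon of "balance": the open problem is whether such a cap can be made stable under self-improvement.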

Probably we have to see the difference between a purpose and a B-goal (a goal in Bostrom's understanding).

If a machine has to solve an arithmetic equation, it has to solve it, not destroy 7 planets in order to do it most perfectly.

I have a feeling that if you say "do it", Bostrom's AI hears "do it maximally perfectly".

If you say "tell me how much 2+2 is (and do not destroy anything)", then she will destroy a planet to make sure that nobody can stop her from answering how much 2+2 is.

I feel that Bostrom assumes there is implicitly a void AI at the beginning, and in the next step an AI with an ultimate, unchangeable goal. I am not sure this is plausible. And I think we need a good definition or understanding of "goal" to know whether it is plausible.

Comment by Liso on Superintelligence 11: The treacherous turn · 2014-11-26T19:10:34.832Z · LW · GW

Could an AI exist without any goals?

Would such an AI be dangerous in the default-doom way?

Could we create an AI which won't be a utility maximizer?

Would such an AI need to maximize resources for itself?

Comment by Liso on Superintelligence 11: The treacherous turn · 2014-11-26T10:28:24.783Z · LW · GW

Positive emotions are useful too. :)

Comment by Liso on Superintelligence 10: Instrumentally convergent goals · 2014-11-26T06:37:45.900Z · LW · GW

I think that if SAIs have a social part, we need to think altruistically about them.

It could be wrong (and dangerous too) to think that they will be just slaves.

We need to start thinking positively about our children. :)

Comment by Liso on Superintelligence 10: Instrumentally convergent goals · 2014-11-26T06:34:11.442Z · LW · GW

Just a little idea:

In one advertisement I saw an interesting pyramid with these levels (from top to bottom): vision -> mission -> goals -> strategy -> tactics -> daily planning.

I think that if we want to analyse cooperation between an SAI and humanity, then we need interdisciplinary work (philosophy, psychology, mathematics, computer science, ...) on the (vision -> mission -> goals) part. (If humanity defines the vision and mission, and the SAI derives the goals, that could be good.)

I am afraid that humanity has not properly defined/analysed either its vision or its mission. And different groups and individuals have contradictory visions, missions, and goals.

One big problem with SAI is not the SAI itself, but that we will have BIG POWER while we still don't know what we really want (and what we really want to want).

Bostrom's book seems to hold the paradigm that a goal is something on top, rigid and stable; it could not be dynamic and flexible like a vision. It is probably true that one stupidly defined goal (a paperclipper's) could be unchangeable and ultimate. But we probably have more possibilities for defining an SAI's personality.

Comment by Liso on Meetup : Bratislava · 2014-11-11T21:22:59.833Z · LW · GW

This is what I meant:

But it is probably a bit more unfinished than I expected.

Other sources of info:

Comment by Liso on Superintelligence 8: Cognitive superpowers · 2014-11-10T09:50:58.189Z · LW · GW

I am suggesting that the metastasis method of growth could be good for the first multicell organisms, but unstable, not very successful in evolution, and probably rejected by every superintelligence as malign.

Comment by Liso on Superintelligence 8: Cognitive superpowers · 2014-11-10T09:26:08.452Z · LW · GW

One mode could have the goal of being something like the graphite moderator in a nuclear reactor: preventing an unmanaged explosion.

At this moment I just wanted to improve our view of the probability of there being only one SI in the starting period.

Comment by Liso on Superintelligence 8: Cognitive superpowers · 2014-11-07T21:47:55.133Z · LW · GW

Think prisoner's dilemma!

What would the aliens do?

Is a selfish (self-centered) reaction really the best possibility?

What will the superintelligence which the aliens construct do?

(no dispute that human history is brutal and selfish)
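The one-shot prisoner's dilemma invoked above can be made concrete with a minimal sketch, using the conventional textbook payoff numbers (the exact values are just the standard illustration):

```python
# Standard one-shot prisoner's dilemma with textbook payoffs:
# T (temptation) > R (reward) > P (punishment) > S (sucker's payoff).
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # I cooperate, they defect (S)
    ("D", "C"): 5,  # I defect, they cooperate (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def best_response(their_move):
    # pick my move that maximizes my payoff against their fixed move
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

# Defection dominates in the one-shot game...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual cooperation pays both sides more than mutual defection:
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```

That tension is exactly the question here: a "selfish" best response is individually dominant, yet both civilizations (or their superintelligences) would prefer the cooperative outcome.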

Comment by Liso on Superintelligence 8: Cognitive superpowers · 2014-11-07T21:17:50.094Z · LW · GW

Let us try to free our mind from associating AGIs with machines.

Very good!

But be honest! Aren't we (sometimes?) more machines which serve genes/instincts than spiritual beings with free will?

Comment by Liso on Superintelligence 8: Cognitive superpowers · 2014-11-07T21:05:05.028Z · LW · GW

When I was thinking about past discussions, I realized something like:

(selfish) gene -> meme -> goal.

When Bostrom thinks about a singleton's probability, I am afraid he overlooks the possibility of running several "personalities" on one substrate. (We could suppose several teams having the possibility to run their projects on one piece of hardware, just as several teams can use the Hubble telescope to observe different objects.)

And not only the possibility, but probably also the necessity.

If we want to prevent a destructive goal from being realized (and destroying our world), then we have to think about multipolarity.

We need to analyze how slightly different goals could control each other.
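One crude way slightly different goals could check each other is mutual veto: an action is executed only if every agent prefers it to the status quo. A toy sketch (the utility functions, weights, and action names below are made up purely for illustration):

```python
# Toy sketch (hypothetical numbers): two agents with slightly different
# utilities; an action runs only when both prefer it to the status quo,
# so each agent's goal acts as a check on the other's.

def utility_a(action):
    # agent A cares mostly about paperclips
    return action["paperclips"] + 0.1 * action["humans_ok"]

def utility_b(action):
    # agent B cares overwhelmingly about humans being okay
    return 1_000_000 * action["humans_ok"] + action["paperclips"]

def jointly_approved(action, status_quo):
    return (utility_a(action) > utility_a(status_quo) and
            utility_b(action) > utility_b(status_quo))

status_quo    = {"paperclips": 0,    "humans_ok": 1}
harvest_earth = {"paperclips": 1000, "humans_ok": 0}  # good for A only
safe_factory  = {"paperclips": 10,   "humans_ok": 1}  # good for both

print(jointly_approved(harvest_earth, status_quo))  # -> False (B vetoes)
print(jointly_approved(safe_factory, status_quo))   # -> True
```

Of course, a real multipolar scenario adds bargaining, deception, and power asymmetries that this unanimity rule ignores; it only shows the basic mechanism.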

Comment by Liso on Superintelligence 8: Cognitive superpowers · 2014-11-07T20:36:55.527Z · LW · GW

A moral, humour, and spiritual analyzer/emulator: I would like to know more about these phenomena.

Comment by Liso on Superintelligence 8: Cognitive superpowers · 2014-11-07T20:34:07.406Z · LW · GW

When we discussed evil AI, I was thinking about (and still count as plausible) the possibility that self-destruction might not be an evil act. The Fermi paradox could then be explained as a natural law: the best moral answer for a superintelligence at some level.

Now I am thankful, because your comment enlarges the possibilities for thinking about Fermi.

We need not think only of self-destruction; we could also think of modesty and self-sustainability.

Sauron's ring could be superpowerful, but the clever Gandalf could resist (and did resist!) the offer to use it. (And used another ring to destroy the strongest one.)

We could think of hidden places (like Lothlorien or Rivendell) in the universe, where clever owners use limited but nondestructive powers.

Comment by Liso on Superintelligence 7: Decisive strategic advantage · 2014-11-01T11:52:14.171Z · LW · GW

The market is more or less stabilized. There are powers and superpowers in some balance. (Gaining money can sometimes be an illusion, like betting (and winning) more and more in a casino.)

If you are thinking about money-making, you have to count the sum of all money in society: whether investments mean a bigger sum of values, or just an exchange in economic wars, or just inflation. (If foxes invest more in hunting and eat more rabbits, there could be more foxes, right? :)

In the AI sector there is a much higher probability of a phase transition (= explosion). I think that's the difference.


  1. Possibility: there could already be enough hardware, and we are just waiting for the spark of a new algorithm.

  2. Possibility: if we count the agricultural revolution as an explosion, we could also count a massive change in productivity from AI (which is probably obvious).

Comment by Liso on Superintelligence 7: Decisive strategic advantage · 2014-11-01T11:22:12.539Z · LW · GW

Well, no life form has achieved what Bostrom calls a decisive strategic advantage. Instead, they live their separate lives in various environmental niches.

Ants are probably a good example of how organisational intelligence (?) could be an advantage.

According to the wiki, "Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass." See also the google answer, the wiki table, or stackexchange.

Although we have to think carefully: apex predators do not usually form a large biomass. So it could be more complicated to define the success of a life form.

The problem for humanity is not only a global replacer, something which would erase all other life forms. It would be enough to replace us in our niche: something which globally (from life's viewpoint) means nothing.

And we don't need to be totally erased to meet a huge disaster. A decline of the population to several millions or thousands... (pets or AI)... is also unwanted.

We are afraid of a decisive strategic advantage not over ants, but over humans.

Comment by Liso on Superintelligence 7: Decisive strategic advantage · 2014-11-01T08:15:39.517Z · LW · GW

It seems to again come down to the possibility of a rapid and unexpected jump in capabilities.

We could test it in a thought experiment.

A chess game: a human grandmaster against an AI.

  1. It is not rapid (there is no checkmate at the beginning).
    We could also suppose one move per year, to slow it down. That gives the AI a further advantage, because of its ability to concentrate for such a long time.

  2. Capabilities:
    a) intellectual capabilities we can suppose to be at the same level during the game (if it is played in one day; otherwise we have to consider Moore's law)
    b) the human loses positional and material capabilities (step by step) during the game. And that is expected.

Could we still talk about a decisive advantage if it is not rapid and not unexpected? I think so. At least as long as we don't break the rules.

Comment by Liso on Superintelligence 7: Decisive strategic advantage · 2014-11-01T07:27:44.156Z · LW · GW

One possibility for preventing a smaller group from gaining a strategic advantage is something like Operation Opera.

And that was only about nukes (see Elon Musk's statement)...

Comment by Liso on Superintelligence 6: Intelligence explosion kinetics · 2014-10-21T21:47:42.194Z · LW · GW

Lemma 1: A superintelligence could be slow. (Imagine, for example, an IQ test between Earth and Mars, where the delay between question and answer is about half an hour. Or imagine a big, clever tortoise which can understand only one sentence per hour, but can then solve the Riemann hypothesis.)

Lemma 2: A human organization could rise quickly. (It is imaginable that billions join an organization within several hours.)

The next theorem is obvious :)

Comment by Liso on Superintelligence 6: Intelligence explosion kinetics · 2014-10-21T21:21:32.227Z · LW · GW

This is similar to the question about a 10x quicker mind and economic growth. I think there are some natural processes which are hard to "cheat".

One woman can give birth in 9 months, but two women cannot do it in 4.5 months. Twice as much money for the education process would more likely give 2N graduates after X years than N graduates after X/2 years.

Some parts of science acceleration have to wait years for new scientists. And 2x more scientists doesn't mean 2x more discoveries. Etc.

But 1.5x more discoveries could also bring 10x bigger profit!

We cannot suppose only linear dependencies in such complex problems.
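The intuition that some steps resist speedup no matter how many resources you add (the "9 months of pregnancy") is close to Amdahl's law: if a fraction s of a process is inherently serial, n parallel workers give at most a 1 / (s + (1 - s) / n) speedup, bounded by 1 / s. A minimal sketch:

```python
# Amdahl's law: with a serial fraction s and n parallel workers,
# overall speedup = 1 / (s + (1 - s) / n), and never exceeds 1 / s.

def amdahl_speedup(serial_fraction, n_workers):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# A process that is 50% serial speeds up at most 2x, even with huge resources:
print(round(amdahl_speedup(0.5, 2), 2))     # -> 1.33
print(round(amdahl_speedup(0.5, 1000), 2))  # -> 2.0
```

The formula captures only the "bottleneck" half of the point; the nonlinear upside (1.5x more discoveries, 10x more profit) goes in the other direction.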

Comment by Liso on Superintelligence 5: Forms of Superintelligence · 2014-10-15T05:19:41.910Z · LW · GW

A difficult question. Do you mean also ten times faster to burn out? 10x more time to rest? Or, due to the simulation, no rest, just a reboot?

Or a permanent reboot to a drug-boosted level of brain emulation on a ten times quicker substrate? (I am afraid of a drugged society here.)

And I am also afraid that a ten times quicker farmer could not have ten summers per year. :) So economic growth could be limited by some bottlenecks. Probably not much faster.

What about ten times faster philosophical growth?

Comment by Liso on Superintelligence 5: Forms of Superintelligence · 2014-10-15T05:04:34.731Z · LW · GW

Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.

We must not underestimate slow superintelligences. Our judiciary is also slow, so some of the acts we can take are very slow.

Humanity could be overtaken also by a slow (and alien) superintelligence.

It does not matter that you would quickly see it going the wrong way. You could still slowly lose, step by step, your rights and your power to act... (like slowly losing pieces in a chess game)

If strong entities in our world will be (or are?) driven by poorly designed goals, for example "maximize profit", then they could really be very dangerous to humanity.

I really don't want to spoil our discussion with politics; rather, I would like to see a rational discussion about all the existential threats which could arise from superintelligent beings/entities.

We must not underestimate any form, and must not underestimate any method, of our possible doom.

With big data coming, our society is more and more ruled by algorithms. And the algorithms are getting smarter and smarter.

Algorithms are not independent of the entities which have enough money or enough political power to use them.

BTW, Bostrom wrote (sorry, not in a chapter we have discussed yet) about possible perverse instantiation arising from a goal not well designed by a programmer. I am afraid that in our society it will be a manager or a politician who will design (or is designing) the goal. (We have to find a way for a philosopher and a mathematician to be there too.)

In my opinion, the first superintelligence (if not a singleton) will most probably be (or already is) a "mixed form": some group of well-organized people (don't forget the lawyers) with a big database and a supercomputer.

The next stages after an intelligence explosion could take any other form.

Comment by Liso on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-10T03:57:02.248Z · LW · GW

This probably needs more explanation. You could say that my reaction is not in the appropriate place; that is probably true. A BCI we could define as a physical interconnection between a brain and a computer.

But I think that at this moment we could (and should) also analyse trained "horses" with trained "riders". And also trained "pairs" (or groups?).

A better interface between computer and human could also be achieved along a noninvasive path: a better visual + sound + touch interface. (the horse-human analogy)

So yes: I expect they could be substantially useful even if a direct physical interface proves too difficult in the next decade(s).

Comment by Liso on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-10T03:43:47.789Z · LW · GW

This is also one of the points where I don't agree with Bostrom's (fantastic!) book.

We could use an analogy from history, human-animal: a soldier plus a horse didn't need a physical interface (like in the Avatar movie) and still gave an awesome military advantage.

We could get something similar from better weak-AI tools. (Probably with a better GUI, but it is not only about the GUI.)

"Tools" don't need to have big general intelligence. They could be at the horse's level:

  • their incredible power to analyse big structures (a big memory buffer)
  • the speed of the "rider", using quick "computation" with the "tether" in your hands

Comment by Liso on Superintelligence Reading Group 2: Forecasting AI · 2014-09-25T03:58:03.273Z · LW · GW

What we have in history: hackable minds which were misused to carry out the Holocaust. Probably this could be one way to improve writings about AI danger.

But to answer question 1): it is too wide a topic! (Social hackability is only one possible path for an AI superpower takeoff.)

For example, the book still misses (and probably will miss):

a) how to prepare psychological training for human-AI communication (or for reading this book :P )

b) AI's impact on religion


Comment by Liso on Superintelligence Reading Group 2: Forecasting AI · 2014-09-25T03:47:51.166Z · LW · GW

But why did the evil AI collapse after the apocalypse?

Comment by Liso on Superintelligence Reading Group 2: Forecasting AI · 2014-09-25T03:45:07.125Z · LW · GW

Katja, please interconnect the discussion parts with links (or something like a TOC).

Comment by Liso on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-23T04:52:19.105Z · LW · GW

Have you played this type of game?


I think that if you play on a big map (freeciv supports really huge ones), then your goals (like in the real world) could be better fulfilled if you play WITH (not against) the AI. For example, managing 5 thousand engineers manually could take several hours per round.

You could explore more concepts in this game (for example geometric growth, the metastasis method of spreading a civilisation, etc., and certainly cooperation with some type of AI)...

Comment by Liso on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-19T04:13:22.026Z · LW · GW

This is a good point, which I would like to see analysed more precisely. (And I miss a deeper analysis in The Book. :) )

Could we count the will (motivation) of today's superpowers (= megacorporations) as human or not? (And at what level can they control the economy?)

In other words: is Searle's Chinese room intelligent? (In the definition which The Book uses for (super)intelligence.)

And if it is, is it a human or an alien mind?

And could it be superintelligent?

What arguments could we use to prove that none of today's corporations (or states, or their secret services) is superintelligent? Think of collective intelligence with computer interfaces! Are they really slow at thinking? How could we measure their IQ?

And could we humans (who?) control it (how?) if they are superintelligent? Could we at least try to implement some moral thinking (or other human values) in their minds? How?

Law? Is law enough to prevent a superintelligent superpower from doing wrong things? (For example, destroying the rain forest because it wants to make more paperclips?)

Comment by Liso on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T06:13:34.296Z · LW · GW

First of all, thanks for the work on this discussion! :)

My proposals:

  • wiki page for collaborative work

There are some points in the book which could be analysed or described better, and probably some which are wrong. We could find them and help improve them; a wiki could help us do that.

  • a better time for Europe and the world?

But this is probably not a problem. If it is a problem then it is probably not solvable. We will see :)