Comments

Comment by yates9 on Superintelligence 14: Motivation selection methods · 2015-01-01T15:31:20.189Z · LW · GW

I think it might drive toward killing those who have expensive wants and do not occupy a special role in the network somehow. Maybe a powerful individual who is extremely wasteful, and who is actively causing ecosystem collapse by breaking the network, should be killed to ensure the civilisation as a whole can survive.

I think the basic desire of a superintelligence would be identity and maintaining that identity. In this sense, "Postpone the Heat Death of the Universe" (or even reversing it) would definitely be its ultimate goal. Perhaps it would even want to become the universe.

(Sorry for the long delay in replying; I don't get notifications.)

Comment by yates9 on Superintelligence 14: Motivation selection methods · 2014-12-16T20:15:16.579Z · LW · GW

I would tend to agree. I think the relationship between humanity and other species mirrors this: we have at least a desire to maintain as much diversity as we can. The risks to other species emerge from the side effects of our actions and from our ultimate stupidity, which should not be the case for a superintelligence.

I guess NB is scanning a broader and meaner list of superintelligence scenarios.

Comment by yates9 on Superintelligence 14: Motivation selection methods · 2014-12-16T18:38:00.570Z · LW · GW

A selection method could be created based on physical measurement of a new intelligence's net energy demands, and therefore its sustainability as part of the broader ecosystem of intelligences. New intelligences should not be able to draw an energy-density-to-intelligence-density ratio larger than that of their biological counterparts. New intelligences should enter the ecosystem while maintaining the stability of the existing network. The attractive feature of this approach is that maintaining, or even broadening, the ecosystem network is presumably consistent with what has been evolutionarily tested over several million years, so it should be relatively robust. Let's call it SuperSustainableIntelligence?
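
A minimal sketch of the ratio constraint, purely for illustration; the idea of measuring "intelligence density" and the numbers used are hypothetical assumptions, not a real metric:

```python
# Sketch of the energy-to-intelligence ratio check described above.
# The quantities and units are hypothetical assumptions for illustration.

def within_biological_budget(energy_density: float,
                             intelligence_density: float,
                             biological_ratio: float) -> bool:
    """Return True if the new intelligence's energy-to-intelligence ratio
    does not exceed that of its biological counterparts."""
    return (energy_density / intelligence_density) <= biological_ratio

# Example: a candidate drawing 500 W per (hypothetical) unit of intelligence
# fails against an assumed biological benchmark of 20 W per unit.
print(within_biological_budget(energy_density=500.0,
                               intelligence_density=1.0,
                               biological_ratio=20.0))  # False
```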

Comment by yates9 on Superintelligence 13: Capability control methods · 2014-12-09T11:27:54.343Z · LW · GW

The biggest issue with control is that if we assume superintelligence a priori, then it would be able to make the best decisions to evade detection, to avoid being caught, even to appear stupid enough that humans would not be very worried. I think it would be impossible to guarantee any kind of control, given that we don't really know what intelligence even is. It is not impossible to imagine that it already exists as a substrate of the communication/financial/bureaucratic network we have created.

I find it most interesting that we ignore the fact that even the dumbest of superintelligences would start out with a very clear understanding of all the content in this section.