Posts

Utility and Agoric systems - Looking at an expected utility of control rather than specific action 2017-04-17T14:33:01.000Z

Comments

Comment by IAFF-User-228 (Imported-IAFF-User-228) on Autopoietic systems and difficulty of AGI alignment · 2017-08-20T09:43:55.000Z

I'm thinking about semi-evolutionary systems that fit in the almost-fully-automated category, so this discussion is relevant to my interests.

Firstly, it is worth noting that most computational evolutionary systems have produced little of interest compared to real evolution. They tend to settle into stable configurations because simple stable strategies can emerge and dominate: unless the fitness landscape is changing, there is no pressure for strategies to change, so complexity has to be forced. See the evolution-of-complexity section of this paper: http://people.reed.edu/~mab/publications/papers/BedauTICS03.pdf . I will try to find a better link.
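To make the stagnation point concrete, here is a minimal toy sketch of my own (not from the paper; all parameters are arbitrary): a bit-string population under truncation selection converges on a static target and then sits still, while periodically moving the target keeps forcing adaptation.

```python
import random

def match(ind, target):
    """Fitness: number of bits matching the current target."""
    return sum(a == b for a, b in zip(ind, target))

def mutate(ind, rate=0.05):
    return [1 - b if random.random() < rate else b for b in ind]

def evolve(genome_len=20, pop_size=50, gens=200, shift=None):
    """Truncation-selection hill climb; returns best fitness per generation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    target = [1] * genome_len
    history = []
    for g in range(gens):
        if shift and g % shift == 0:
            # A changing landscape: move the target so yesterday's
            # dominant strategy stops being optimal.
            target = [random.randint(0, 1) for _ in range(genome_len)]
        pop.sort(key=lambda ind: match(ind, target), reverse=True)
        history.append(match(pop[0], target))
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return history

static = evolve()           # converges to the optimum, then nothing changes
shifted = evolve(shift=25)  # repeatedly knocked off the optimum, keeps adapting
print(static[-5:], shifted[-5:])
```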

Earth's history, by contrast, kept forcing organisms to become more intelligent and adaptive over time.

So in my system, the human's purpose is to control the evolutionary landscape and force the programs to become more complex. This by itself would not make the system aligned, but I think culture, i.e. the verbal transmission of information, is a lot more important than most people in the AI community assume. So I see it not as transmitting information, but as transmitting parts of our own programming.
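As a hypothetical sketch of what "controlling the landscape" might look like in code (Program and human_complexity_bonus are illustrative stand-ins I've invented, not an existing system): the human contribution is a fitness term that keeps rewarding behaviour the population has not shown before, so a simple stable strategy stops paying off once it is familiar.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Program:
    behaviours: frozenset  # abstract set of behaviours the program exhibits

def human_complexity_bonus(program: Program, seen: frozenset) -> int:
    """Stand-in for human judgement: reward behaviour not seen before."""
    return len(program.behaviours - seen)

def score(program: Program, base_fitness, seen: frozenset, weight: float = 2.0) -> float:
    # The human pressure is layered on top of the ordinary landscape,
    # so familiar strategies lose their edge over novel ones.
    return base_fitness(program) + weight * human_complexity_bonus(program, seen)

# Usage: a program that only repeats known behaviours earns no bonus.
seen = frozenset({"walk", "eat"})
novel = Program(frozenset({"walk", "signal"}))
boring = Program(frozenset({"walk", "eat"}))
base = lambda p: len(p.behaviours)
print(score(novel, base, seen), score(boring, base, seen))
```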

So the analogy would be one world (the human world) interacting with an alien world (the computer) by sending individual members of the human world to it, and by shaping the evolutionary landscape of the alien world to favour those who are sent.

Eventually it would be great if the two worlds were combined into one, so that there could be a free flow of people in both directions (assuming everything was seeded from a human).