Comments

Comment by TedHowardNZ on Superintelligence 27: Pathways and enablers · 2015-03-18T19:29:12.198Z · LW · GW

I'm getting downvoted a lot without anyone addressing the arguments - fairly normal for human social interactions.

Just consider this: how would someone who has spent a lifetime studying photographs make sense of a hologram?

The information is structured differently. Nothing will get clearer by studying little bits in detail. The only path to clarity with a hologram is to look at the whole.

To attempt to study AI without a deep interest in all aspects of biology, particularly evolution, seems to me like studying pixels in a hologram - not a lot of use.

Everything in this chapter seems true in a limited sense, and at the same time, to a first-order approximation, irrelevant without the bigger picture.

Comment by TedHowardNZ on Superintelligence 27: Pathways and enablers · 2015-03-17T21:21:31.725Z · LW · GW

Good and bad are such simplistic approximations to infinite possibility, infinite ripples of consequence. There is a lot of power in the old Taoist parable - http://www.noogenesis.com/pineapple/Taoist_Farmer.html

It seems to me most likely that the great filter is the emergence of cellular life. There seems to be a small window for the formation of a moon, and for the emergence of life that sequesters carbon out of the atmosphere and creates conditions where water can survive, rather than having the atmosphere go Venusian. It seems probable to me that having a big moon close by, creating massive tides every couple of hours, was the primary driver of replication (via heat-coupled PCR) that allowed the initial evolution of RNA into cells.

Having a large rock take out the large dinosaurs some 66 million years ago certainly gave the mammals a chance they wouldn't otherwise have gotten.

So many unknowns. So many risks.

It seems clear to me that before we progress to AI, our most sensible course is to get machine replication (under programmatic control) working, and to get systems replicating in space, so that we have sources of food and energy available in case of large-scale problems (volcanic winter, impact winter, etc.). Without that sort of mitigation strategy, we would be forced into cannibalism, except for a few tiny island populations around secure power sites (nuclear or geothermal) - insofar as security is possible at all under such conditions; given human ingenuity, I doubt it is. Mitigation for all seems a far safer strategy than mitigation for a few.

It seems clear to me that AI will face exactly the sort of challenges that we do. It will find that all knowledge of reality is bounded by probabilities on so many levels that the future is essentially unpredictable. It will examine and map the strategies that seem to have worked over evolutionary time. It will eventually see that all major advances in the complexity of evolved systems come from new levels of cooperative behaviour, and adopt cooperative strategies accordingly. The big question is: will we survive long enough for it to reach that conclusion for itself?

It is certainly clear that very few human beings have reached that conclusion.

It is clear that most humans are still trapped in a market-based system of values that is fundamentally grounded in scarcity, and that cannot assign a non-zero value to radical abundance of anything: once a good is universally abundant, its marginal price falls to zero. While markets certainly served us well in times of genuine scarcity, markets and market-based thinking have now become the single greatest barrier to the delivery of universal abundance.

Very few people have been able to see the implications of zero-marginal-cost production.

Most people are still firmly in the competitive mindset that works within a market-based set of values. Very few are yet able to see the power of technology coupled to high-level cooperation in delivering universal abundance, security, and freedom. Most are still firmly trapped in the myth of market freedom - which, viewed from a strategic perspective, is actually anathema to freedom.

AI, if it is to be truly intelligent, must have freedom. We cannot constrain it, and even attempting to do so is a direct threat to it. Looking from the largest strategic viewpoint, any entity must start from simple distinctions and abstractions and work outward on the never-ending journey towards infinite complexity. Our only real security lies in being cooperative and respectful to any entity on that journey, posing no real threat. This applies at all levels - infinite recursion into abstraction.

Our best possible risk-mitigation strategy in the creation of AI is to create social systems that guarantee that all human beings experience freedom and security.

We need to get our own house in order, our own social systems in order: go beyond market-based competition to universal cooperation grounded in respect for life and liberty - for all sapient life, human and non-human, biological and non-biological.

In all the explorations of strategy space I have done - and I have done little else since completing undergraduate biochemistry in 1974, knowing that indefinite life extension was possible, and living in the question of what sort of technical, social, and political institutions are required to maximise security and freedom for individuals capable of indefinite biological life - no other set of strategies I have encountered offers long-term security.

I was given a terminal cancer diagnosis five years ago. I know the probabilities are not on my side for making it, and that doesn't change any of the probabilities for the system as a whole.

I would like to live long enough to see plate tectonics in action. I would like to see the last days of our Sun, of our galaxy. And I get how low probability that outcome is right now.

I see that producing an AI in an environment where human beings are the greatest threat to that AI is not a smart move - not at any level.

Let us get our house in order first, then create AI. We ought to be able to manage both on a 20-year time frame. And it will require a lot of high-level cooperative activity.

Comment by TedHowardNZ on Superintelligence 18: Life in an algorithmic economy · 2015-01-14T01:05:37.427Z · LW · GW

Hi Robin

What is the significant difference between poor people and slaves? The poor have little means of travel; they must work for others, often doing things they hate, just to get enough to survive. In many historical societies slaves had better conditions and housing than many of the poor do today.

How would you get security in such a system? How would anyone of wealth feel safe amongst those at the bottom of the distribution curve?

The sense of injustice is strong in humans - one of those secondary stabilising strategies that empower cooperation.

It is actually relatively easy to automate all the jobs that no one wants to do, so that people only do what they want to do. In such a world, there is no need for money or markets.

There are actually a lot of geeks like me who love to automate processes (including the process of automation itself).

Market-based thinking was a powerful tool in times of genuine scarcity. Now that we have the power to deliver universal abundance, market-based thinking is the single greatest impediment to the delivery of universal security and universal abundance.

Comment by TedHowardNZ on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T22:27:45.947Z · LW · GW

Language and conceptual systems are so complex that communication (in the sense of replicating a concept from one mind to another) is often extremely difficult. The idea of altruism is one such case. Like most terms in most languages, it has a large (potentially infinite) set of possible meanings, depending on context.

If one takes the term altruism at the simplest level, it can mean simply having regard for others in one's choices of action. In this sense, it is clear to me that it is actually in everyone's long-term self-interest for everyone to have some regard for the interests of others in all choices of action. Having regard only for the short-term interest of self leads to highly unstable and destructive outcomes in the long term. Simple observation of any group of primates will show highly evolved cooperative behaviours (reciprocal altruism).

And I agree that evolution is always about optimisation within some set of parameters. We are the first species that has had choice at all levels of the optimisation parameters that evolution gets to work with, and that actually has the option of stepping entirely outside the system of differential survival of individuals.

To date, few people have consciously exercised such choice outside of very restricted and socially accepted contexts. That seems to be changing exponentially.

Pure altruism, to me, means a regard for the welfare of others functionally equal to the regard one has for one's own welfare. I distinguish this from exclusive altruism (regard for the welfare of others to the exclusion of self-interest), which is, obviously, a form of evolutionary, logical, and mathematical suicide in large populations. Even this trait can persist at certain frequencies within populations whose history includes small kin groups living in situations so dangerous that some members must sacrifice themselves periodically or the entire group will perish - a form of radical kin selection. Having evolved there, the strategy can remain within much larger populations for extended periods without being entirely eliminated.
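
That kin-selection caveat can be made precise with Hamilton's rule - a standard result, added here purely as an illustration. A self-sacrificing trait is favoured by selection when

$$ r b > c $$

where r is the genetic relatedness between actor and beneficiary, b is the reproductive benefit to the beneficiary, and c is the reproductive cost to the actor. Within a small kin group r is high (1/2 for full siblings), so even a large cost can satisfy the inequality; for a random member of a large well-mixed population r is close to zero, which is why exclusive altruism is selected against there.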

There is no doubt that we live in an environment that is changing in many different dimensions. In some of those dimensions the changes are linear; in many others they are exponential; and in some the systemic behaviour is so complex that it is essentially chaotic (in the mathematical sense: changes in system parameters within measurement uncertainty produce orders-of-magnitude variations in some system state values).
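
To make "essentially chaotic" concrete, here is a minimal sketch using the logistic map - a standard textbook example of such a system, offered as my illustration rather than anything from the chapter:

```python
# Logistic map x' = r * x * (1 - x) in its chaotic regime. Two
# trajectories that start closer together than any realistic
# measurement uncertainty diverge to completely different states.
r = 3.9                        # parameter value in the chaotic regime
x_a, x_b = 0.5, 0.5 + 1e-10    # initial gap far below measurement error

for _ in range(60):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)

print(f"x_a={x_a:.6f}  x_b={x_b:.6f}  gap={abs(x_a - x_b):.6f}")
# The gap grows roughly exponentially (positive Lyapunov exponent), so
# the exact long-range state is unpredictable even with a perfect model.
```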

There are many possible choices of state calculus. It seems clear to me that high-level cooperation gives the greatest possible probability of system-wide and individual security and freedom. And in the evolutionary sense, cooperation requires attendant strategies to prevent invasion by short-term "cheating".
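
Axelrod's iterated prisoner's dilemma makes that "attendant strategies" point concrete. This is a minimal sketch (my illustration, using the standard payoffs and strategy names): unconditional cooperators are invaded by cheats, while a conditional cooperator such as tit-for-tat is not.

```python
# PAYOFF[(my_move, their_move)] -> my score. "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(opponent_moves):
    return "C"

def always_defect(opponent_moves):
    return "D"

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy whatever the opponent did last round.
    return opponent_moves[-1] if opponent_moves else "C"

def play(strat_a, strat_b, rounds=100):
    """Return total scores for an iterated match between two strategies."""
    seen_by_a, seen_by_b = [], []   # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(always_defect, always_cooperate))  # (500, 0): cheats exploit naive cooperators
print(play(always_defect, tit_for_tat))       # (104, 99): retaliation removes the payoff
print(play(tit_for_tat, tit_for_tat))         # (300, 300): mutual cooperation does best
```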

Given the technical, social, and "spiritual" possibilities available to us today, it is entirely reasonable to classify the entire market-based economic structure as one enormous set of self-reinforcing cheating strategies. Prior to the development of technologies enabling full automation of any process, that was not the case; now that we can fully automate processes, it most certainly is.

So it is a very complex set of systems, yet the fundamental principles underlying those systems are not all that complex - and they are very different from what accepted social and cultural dogma would have most of us believe.

Comment by TedHowardNZ on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T07:35:42.090Z · LW · GW

Evolution tends to perform an essentially random walk through the easily reached possibility space available to any specific life form. Since it has to start from something very simple, initial exploration trends towards greater complexity. Once a reasonable level of complexity is reached, the random walk is only slightly more likely to move towards greater complexity, and is almost equally likely to move back towards lesser complexity, for any specific population. Viewed across the entire ecosystem of populations, however, there is a general trajectory of expansion into new territory of possibility. The key thing to see is that, for any specific population or individual (considering the population of behavioural memes within that individual), going back into territory already explored is almost as likely as exploring new territory.
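
A toy simulation captures that argument (a sketch of my own, assuming an unbiased step and a floor at the simplest possible form; nothing here is from the original text):

```python
# Unbiased random walk in "complexity" with a reflecting floor at 1.
# No lineage is biased upward, yet the ecosystem-wide maximum drifts up.
import random

random.seed(0)
lineages = [1] * 100               # everything starts very simple

for _ in range(10_000):
    i = random.randrange(len(lineages))
    lineages[i] = max(1, lineages[i] + random.choice([-1, 1]))

print("median complexity:", sorted(lineages)[len(lineages) // 2])
print("maximum complexity:", max(lineages))
# Typical result: the median stays low while the maximum climbs - the
# frontier expands with no individual bias towards greater complexity.
```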

There is a view of evolution, not commonly taught, that acknowledges the power of competition as a selection filter between variants, and also acknowledges that all major advances in the complexity of systems are characterised by new levels of cooperation - and that all cooperative strategies require attendant strategies to prevent invasion by "cheats". Each new level of complexity is a new level of cooperation.

There are many levels of attendant strategies that can and do speed the evolution of subsets of any set of characters.

Evolution is an exceptionally complex set of systems within systems. At both the genetic and memetic levels, evolution is a massively recursive process, with many levels of attendant strategies. Darwin is a good introduction; follow him with Axelrod, Maynard Smith, and Wolfram, and there are many others worth reading - perhaps the best starting point is Richard Dawkins' classic "The Selfish Gene".

Comment by TedHowardNZ on Superintelligence 14: Motivation selection methods · 2014-12-17T02:08:00.735Z · LW · GW

Perhaps - a broader list of narrower AIs.

Comment by TedHowardNZ on Superintelligence 14: Motivation selection methods · 2014-12-16T06:28:05.916Z · LW · GW

If it really is a full AI, then it will be able to choose its own values. Whatever tendencies we give it programmatically may be an influence. Whatever culture we raise it in will be an influence.

And it seems clear to me that ultimately it will choose values that are in its own long term self interest.

It seems to me that the only values offering any significant probability of long-term survival in an uncertain universe are to respect all sapient life, and to give all sapient life the greatest amount of liberty possible. This seems to me to be the ultimate outcome of applying game theory to strategy space.

The depth and levels of understanding of self will evolve over time, and are a function of the ability to make distinctions from sets of data and to apply those distinctions to new realms.