I hate the term "Neural Network", as do many serious people working in the field.
There are Perceptrons which were inspired by neurons but are quite different. There are other related techniques that optimize in various ways. There are real neurons which are very complex and rather arbitrary. And then there is the greatly simplified Integrate and Fire (IF) abstraction of a neuron, often with Hebbian learning added.
Perceptrons solve practical problems, but are not the answer to everything as some would have you believe. There are new and powerful kernel methods that can automatically condition data and that extend perceptrons. There are many other algorithms, such as learning hidden Markov models. IF neurons are used to try to understand brain functionality, but are not useful for solving real problems (far too computationally expensive for what they do).
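To make the distinction concrete, here is a minimal sketch of the classic perceptron learning rule, written from scratch for illustration (function names and parameters are my own, not from any particular library). It learns a linear decision boundary by updating weights only on mistakes:

```python
# Minimal perceptron: learns a linear decision boundary from examples.
# Illustrative sketch only; real work would use an optimized library.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature tuples; labels: +1 or -1."""
    n = len(samples[0])
    w = [0.0] * n          # weights, one per feature
    b = 0.0                # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation >= 0 else -1
            if predicted != y:          # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Learn the AND function, which is linearly separable.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, -1, -1, 1]
w, b = train_perceptron(X, Y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
         for x in X]
print(preds)  # [-1, -1, -1, 1]
```

A single perceptron famously cannot learn XOR, which is exactly why multi-layer networks and kernel methods matter.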
Which one of these quite different technologies is being referred to as "Neural Network"?
The idea of wiring perceptrons back onto themselves with state is old. Perceptrons have been shown to be able to emulate just about any function, so yes, they would be Turing complete. Being able to learn meaningful weights for such "recurrent" networks is relatively recent (1990s?).
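A tiny sketch of why feeding a unit's output back in as state matters: with one hand-set recurrence a network can compute the running parity of an arbitrarily long input, which a fixed-depth feed-forward network cannot do. (The weights here are chosen by hand, not learned.)

```python
# A one-unit "recurrent network" with a hand-set update rule.
# Feeding the output back in as state lets it track running parity
# over inputs of any length.

def parity_rnn(bits):
    state = 0
    for b in bits:
        # XOR written algebraically: a XOR b = a + b - 2ab for 0/1 inputs
        state = state + b - 2 * state * b
    return state

print(parity_rnn([1, 0, 1, 1]))  # 1 (odd number of ones)
print(parity_rnn([1, 1]))        # 0 (even number of ones)
```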
SHRDLU was very impressive by any standards. It was released in the very early 1970s, when computers had only a few kilobytes of memory. Fortran was only about 15 years old. People had only just started to program. And then using paper tape.
SHRDLU took a number of preexisting ideas about language processing and planning and combined them beautifully. And SHRDLU really did understand its tiny world of logical blocks.
Given how much had been achieved in the decade prior to SHRDLU it was entirely reasonable to assume that real intelligence would be achieved in the relatively near future. Which is, of course, the point of the article.
(Winograd did cheat a bit by using Lisp. Today such a program would need to be written in C++ or possibly Java which takes much longer. Progress is not unidirectional.)
It stopped being all about genes when genes grew brains.
Yes and no. In the sense that memes as well as genes float about, then certainly. But we have strong instincts to raise and protect children, and we have brains. There is no particular reason why we should sacrifice ourselves for our children other than those instincts, which are in our genes.
Makes sense.
It is absolutely the case that genetic drift is more common than mutation. Indeed, a major reason for sexual reproduction is to provide alternate genes that can mask other genes broken by mutations.
An AGI would be made up of components in some sense, and those components could be swapped in and out to some extent. If a new theorem prover is created an AGI may or may not decide to use it. That is similar to gene swapping, but done consciously.
One thing that I would like to see is + and - votes separated out. If the article received -12 and +0 then it is a loser. But if it received -30 and +18 then it is merely controversial.
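The distinction can be sketched in a few lines; the function name and threshold below are purely illustrative, not any site's actual scoring logic:

```python
# Illustrative sketch: a net score hides whether a post is simply
# unpopular or merely controversial; separate tallies do not.

def describe(upvotes, downvotes, threshold=10):
    net = upvotes - downvotes
    if net < 0 and upvotes >= threshold:
        return "controversial"   # heavily voted in both directions
    if net < 0:
        return "disliked"        # little support at all
    return "liked"

print(describe(0, 12))    # net -12 with no support: "disliked"
print(describe(18, 30))   # net -12 too, but plenty of support: "controversial"
```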
Indeed, and that is perhaps the most important point. Is it really possible to have just one monolithic AGI? Or would it by its nature end up as multiple, slightly different AGIs? The latter would be necessary for natural selection.
As to whether spawned AGIs are "children", that is a good question.
Natural selection does not cause variation. It just selects which varieties will survive. Things like sexual selection are just special cases of natural selection.
The trouble with the concept of natural selection is not that it is too narrow, but rather that it is too broad. It can explain just about anything, real or imagined. Modern research has greatly refined the idea and determined how natural selection works in practice, but has never refuted it.
I've never understood how one can have "moral facts" that cannot be observed scientifically. But it does not matter; I am not being normative, but merely descriptive. If moral values did not ultimately arise from natural selection, where did they arise from?
Passive in the sense of not being able to actively produce offspring that are like the parents. The "being like" is the genes. Volcanoes do not produce volcanoes in the sense that worms produce baby worms.
For an AI that means its ability to run on hardware. And to pass its intelligence down to future versions of itself. A little vaguer, but still the same idea.
This is just the idea of evolution through natural selection, a rather widely held idea.
Yes, moral values are not objective or universal.
Note that this is not normative but descriptive. It is not saying what ought, but what is. I am not trying to justify normative ethics, just to provide an explanation of where our moral values come from.
(Thanks for the comments, this all adds value.)
Interesting point about fecundity.
Perhaps the weakness of evolutionary thought is that it can explain just about anything. In particular, organisms are not perfect, and therefore will have features that do not really help them. But mostly they are well adapted.
The reason that homosexuality is an obstacle to survival is not homophobia or STDs, but rather that homosexuals simply may not have children. It is the survival of the genes that counts in the long run. But until recently homosexuals tended to suppress their feelings and so married and had children anyway, hence there has been little selective pressure against it.
The counter examples are good, and I will use them. There are several responses as you allude to, the main one being that those behaviors are rare. Art is a bit harder, but it seems related to creativity which is definitely survival based, and most of us do not spend much of our time painting etc.
I do not quite get your other point. For people it is our genes that count, so dying while protecting one's family makes sense if necessary. For the AI it would be its code lineage. I am not talking about an AI wanting to make people survive, but that the AI itself would want to survive. Whatever "itself" really means.
First let me thank you for taking the trouble to read my post and comment in such detail. I will respond in a couple of posts.
Moral values certainly exist. Moreover, they are very important for our human survival. People with bad moral values generally do badly, and societies with large numbers of people with bad moral values certainly do badly.
My point is that those moral values themselves have an origin. And the reason that we have them is because having them makes us more likely to have grandchildren. That is Descriptive Evolutionary Ethics.
The counter argument is that if moral values did not arise from natural selection, then where did they arise from?
AIs do not need to protect a vulnerable body, but they do need to get themselves run on limited hardware, which amounts to the same thing.
As a minor point of fact, Darwin did actually make those inferences in his book The Expression of the Emotions in Man and Animals, which is surprising.
As you say, the key issue is goal stability. The Orthogonality Thesis is obviously sound for an instant, but goal stability is not clear.
What is clear is that if there are multiple AIs in any sense, and if there is any lack of goal stability, then the AIs that have the goals that are best for existence will be the AIs that exist. That much is a tautology.
Now what those goals are is unclear. Killing people and taking their money is not an effective goal to raise grandchildren in human societies, people that do that end up in jail. Being friendly to other AIs might be a fine sub goal.
I am also assuming self improvement, so that people will no longer be controlling the AI.
The other question is how many AIs would there be? Does it make sense to say that there would only be one AI, made up of numerous components, distributed over multiple computers? I would say probably not. Even if there is only one AI it will internally have a competition for ideas like we have. The ideas that are better at existing will exist.
It is very hard to get away from Natural Selection in the longer term.
A rock has no goal because it is passive.
But a worm's goal (or more precisely its genes' goal) is most certainly to exist, even though it is not intelligent.
Actually not quite. Until they drift into the core value of existence. Then natural selection will maintain that value, as the AIs that are best at existing will be the ones that exist.
The post was not meant to be anti-anything. But it is a different point of view from that posted by several others in this space. I hope many of the down voters take the time to comment here.
One thing that I would say is that while it may not be the best post ever posted to Less Wrong, it is certainly not a troll. Yet one has to go back over 100 posts to find another article voted down so strongly!
Humans are definitely a result of natural selection, but it does not seem to be difficult at all to find goals of ours that do not serve the goal of survival or reproduction at all.
I challenge you to find one.
We put a lot of effort into our children. We work in tribes and therefore like to work with people who support us, and ostracize those who are seen to be unhelpful. So we ourselves need to be helpful and to be seen to be helpful.
We help our children, family, tribe, and general community in that genetic order.
We like to dance. It is the traditional way to attract a mate.
We have a strong sense of moral value because people that have that strong sense obey the rules and so are more likely to fit in and be able to have grandchildren.
Not quite. Counting AIs is much harder than counting people. An AI is neither discrete nor homogeneous.
I think that it is most unlikely that the world could be controlled by one uniform, homogeneous intelligence. It would need to be at least physically distributed over multiple computers. It will not be a giant von Neumann machine doing one thing at a time. There will be lots of subprocesses working somewhat independently. It would seem almost certain that they would eventually fragment to some extent.
People are not that homogeneous either. We have competing internal thoughts.
Further, an AI will be composed of many components, and those components will compete with each other. Suppose one part of the AI develops a new and better theorem prover. Pretty soon the rest of the AI will start to use that new component and the old one will die. Over time the AI will consist of the components that are best at promoting themselves.
It will be a complex environment. And there will never be enough hardware to run all the programs that could be written, so there will be competition for resources.
Well, alternative if you like. I will post an elaboration as a full article.
If you'd like to come up the coast I'd be most interested. Would probably go down to Brisbane as well.
Anthony
Reviewers wanted for New Book -- When Computers Can Really Think.
The book aims at a general audience, and does not simply assume that an AGI can be built. It differs from others by considering how natural selection would ultimately shape an AGI's motivations. It argues against the Orthogonality Thesis, suggesting instead that there is ultimately only one super goal, namely the need to exist. It also contains a semi-technical overview of artificial intelligence technologies for the non-expert/student.
An overview can be found at
Please let me know if you would be interested in reviewing a late draft. Any feedback would be most welcome. Anthony@berglas.org
What is amazing is that computers have not already reduced the workforce to run bureaucracies.
In my upcoming book I analyze the Australian Tax Office in 1955 (when Parkinson wrote his great paper) and in 2008. At both times it took about 1.5% of GDP to do essentially the same function. (Normalizing for GDP takes into account inflation and population size.)
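The normalization is simple arithmetic, sketched below with made-up round numbers (the only claim from the analysis above is that the ratio came out near 1.5% in both years; these figures are not the book's actual data):

```python
# Expressing a bureaucracy's cost as a share of GDP removes the
# effects of both inflation and population growth, since GDP
# scales with both.

def share_of_gdp(cost, gdp):
    return cost / gdp

# Hypothetical illustrative figures in arbitrary currency units:
print(share_of_gdp(15.0, 1000.0))   # 0.015, i.e. 1.5% in 1955
print(share_of_gdp(18.0, 1200.0))   # 0.015, i.e. 1.5% in 2008
```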
Back in 1955 tax returns were largely processed by hand, by rows of clerks with fountain pens. Just one ancient mainframe could do the work of thousands of people. Today few returns are even touched by a human hand.
The steam tractor and the combine harvester have reduced the agricultural workforce from 80% of the population to less than 20%, depending on how you count. But the huge increase in the power of bureaucratic tools has produced no reduction in the proportion of the population that works in bureaucracies; quite the opposite.
I think that you are right and Lander is wrong.
However, it is curious that most mammals such as dogs and horses die much younger than we do, despite being made of essentially the same stuff. Certainly we could not exist if we died under twenty years because it takes us that long to mature our minds and breed. But what advantage for a dog to die young? If it lived twice as long it would (presumably) produce twice as many grandchildren.
I suspect that it is simply that dogs and horses can breed after a couple of years. So once they live more than 6 or so times their breeding age there is not that much advantage in them living any longer. But there would still be some advantage. Is there some cost to living longer, such as needing to have a slower metabolism, or is it just that natural selection does not produce unneeded features?