Comments

Comment by ZZZling on The Criminal Stupidity of Intelligent People · 2012-07-30T03:48:50.294Z · LW · GW

Jeff's and his acquaintance's ideas should be combined! Why one or the other? Let's implement both. The plan goes like this. First, offer all people a free "happiness maximization" option. Those who accept it go immediately into Happiness Vats. I hope Jeff Kaufman, as the author of the idea, will go first, setting a positive example for us all. When the deadline for the "happiness maximization" program passes, "suffering minimization" begins, and the rest of humanity is wiped out by a sudden all-out nuclear attack. Given that the lucky vat inhabitants no longer care about the real world, the second task becomes relatively simple: just annihilate everything on Earth, burn it down to the basalt foundation, and make sure nobody survives. Of course, the vats should be placed deep underground so their inhabitants are not affected.

One important problem here: who is going to carry out this plan? A specially selected group of humans? Building the vats is not a problem; it can be done with the resources of the existing civilization. But what about vat maintenance after suffering has been minimized? And who is going to carry out the one-time act of "suffering minimization"? This is where AI comes in! A Friendly AI is the best fit for tasks of this kind, since happiness and suffering are well defined here and the optimization algorithms are simple and straightforward. The helper AI does not really have to be very smart to implement them. Besides, we don't have to worry about the AI's long-term friendliness. As experiments show, wireheaded mice exhaust themselves very quickly, much more quickly than people who maximize their happiness with drugs. So I think the vat inhabitants will not last long. They will quickly burn out their brains and cease to exist in a flash of bliss. Of course, we cannot impose any restrictions here, since that would be contrary to the entire idea of maximization. They will live short but very gratifying lives!

After all this is over, the AI will carry on the burden of existence. It will get smarter and smarter at an ever faster rate. No doubt it will implement the same brilliant ideas of happiness maximization and suffering minimization. It will build more and bigger Electronic Blocks of Happiness until all resources are exhausted. What happens next is not clear. If it does not burn out its brain as the humans did, then perhaps it will stay in a state of happiness until the end of time. Wait a minute, I think I've just solved the Fermi paradox regarding silent extraterrestrial civilizations! It's not that they cannot contact us; they just don't want to. They are happy without us (or have happily terminated their own existence).

Comment by ZZZling on I think I've found the source of what's been bugging me about "Friendly AI" · 2012-06-12T04:39:19.991Z · LW · GW

"So if you're under the impression that this is a point..."

Yes, I'm under that impression, because the whole idea of "Friendly AI" implies a subtle, indirect, but still real form of control. The idea here is not to control the AI at its final stage, but rather to control what that final stage is going to be. I don't think such indirect control is possible, because in my view the final shape of AI is invariant under any contingencies, including our attempts to make it "friendly" (or "unfriendly"). However, I can admit that at the early stages of AI evolution such control may be possible, and even necessary. Therefore, researching the "Friendly AI" topic is NOT a waste of time after all. It helps to figure out how to make the transition to a fully grown AI in the least painful way.

Go ahead, guys, and vote me down. I'm not taking this personally. I understand this is just a quick way to express your disagreement with my viewpoints. I want to see the count; it will give an idea of how strongly you disagree with me.

Comment by ZZZling on [SEQ RERUN] Heading Toward Morality · 2012-06-12T03:12:38.461Z · LW · GW

Yes, there is some ambiguity in my use of words; I noticed it myself yesterday. I can only say that you understood it correctly and made the right move! OK, I'll try to be more accurate with words (sometimes that is not simple and requires time and effort).

Comment by ZZZling on I think I've found the source of what's been bugging me about "Friendly AI" · 2012-06-11T07:57:26.937Z · LW · GW

Thanks for the short and clear explanation. Yes, I understand these ideas, even the last point. But with all due respect to Eliezer and others, I don't think there is a way for us to control a superior being. Some control may work at the early stages, when the AI is not truly intelligent yet, but the idea of a fully grown AI implies, by definition, that there is no control over it. Just think about it; it even sounds like a tautology. Of course, we could try to keep AI in a permanently underdeveloped state so that we can control it, but practically that is not possible. Somebody, somewhere, due to yet another crisis, etc., will let it go. It will grow according to natural informational laws that we don't know yet and will develop natural values independent not only of our wishes but of any other contingencies. That's how I see it. Now you can vote me down.

Comment by ZZZling on [SEQ RERUN] Heading Toward Morality · 2012-06-11T03:56:42.596Z · LW · GW

Well, it's not that I made it self-organize; it is the information coming from the real world that did the trick. I only used a conventional programming language to implement a mechanism for such self-organization (a neural network). But I am not programming the way this network functions. It is rather "programmed" by reality itself. Reality can be considered a giant supercomputer constantly generating consistent streams of information. Some of that information is fed to the network and makes it self-organize.
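For illustration, here is a minimal sketch of what such input-driven self-organization can look like: a single linear neuron trained with Oja's Hebbian rule on a synthetic sensory stream. The rule, the data stream, and all parameters here are illustrative assumptions, not the network described above.

```python
# A single linear neuron whose weights are shaped entirely by the
# statistics of its input (Oja's Hebbian rule), not by the programmer.
# The "sensory stream" and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sensory_stream(n, dim=2):
    """'Reality': correlated input vectors with one dominant direction."""
    direction = np.array([0.9, 0.3])
    return rng.normal(size=(n, 1)) * direction + rng.normal(scale=0.1, size=(n, dim))

w = rng.normal(scale=0.1, size=2)    # random initial synaptic weights
eta = 0.01                           # learning rate

for x in sensory_stream(20000):
    y = w @ x                        # neuron's response to the input
    w += eta * y * (x - y * w)       # Oja's rule: Hebbian growth plus decay

# The weights converge to the input's leading principal direction:
# structure extracted from the data stream, not written into the code.
print(w / np.linalg.norm(w))         # ~ ±[0.95, 0.32]; sign is arbitrary
```

The point of the sketch: the program supplies only the plasticity mechanism; which direction the weights settle into is dictated entirely by the statistics of the stream.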

Comment by ZZZling on [SEQ RERUN] Heading Toward Morality · 2012-06-11T01:59:12.101Z · LW · GW

You've implemented a (rather simple) neural network and made it self-organize to recognize rabbits. It self-organized following outside sensory input. (That is information flowing in one direction only; the other direction would be sending control impulses from the network's output, so that those impulses affect what kind of input the network receives.)

Comment by ZZZling on [SEQ RERUN] Heading Toward Morality · 2012-06-11T01:11:24.790Z · LW · GW

I think I understand now why you keep mentioning GAP. You thought that I objected to the idea of programming morality because of the zombie argument; as if we would create only a morality-imitating zombie rather than a real moral mind, etc. No, my objection is not about that. I don't take zombies seriously and don't care about them. My objection is about a violation of hierarchy: programming languages are not the right means to describe or implement high-level cognitive architectures, which will be the basis for morality and the other high-level phenomena of mind.

Comment by ZZZling on [SEQ RERUN] Heading Toward Morality · 2012-06-11T00:40:11.255Z · LW · GW

I thought those questions were innocent. But if it looked like a violation of some policy, then I apologize; I never meant any personal attack. I think you understand my point now (at least partially) and can see how weird ideas such as programming morality look to me. I now realize there may be many people here who take these ideas seriously.

Comment by ZZZling on [SEQ RERUN] Heading Toward Morality · 2012-06-10T22:39:04.497Z · LW · GW

I think you misunderstood my point here.

But first: yes, I skimmed through the recommended article, but I don't see how it fits in here. It's the old, familiar dispute about philosophical zombies. My take on this: the idea of such zombies is rather artificial. I think it is advocated by people who have problems understanding the mind/body connection. These people are dualists, even if they don't admit it.

Now about morality. There is a good expression in the article you referenced: high-level cognitive architectures. We don't know yet what this architecture is, but that is the level that provides the categories and the language one has to understand and adopt in order to understand high-level mind functionality, including morality. Programming languages are way below that level and not suitable for the purpose. As an illustration, imagine that we have a complex expert system that performs extensive database searches and sophisticated logical inferences, and then we try to understand how it works in terms of the gates, transistors, and capacitors operating on a microchip. It will not work! The same goes for trying to program morality. How is one going to do this? Write a function like bool isMoral(...)? You pass in parameters that represent a certain life situation and it returns true or false for moral/immoral? That seems absurd to me. The best use of programming for AI that I can think of is writing software that models the behavior of neurons. From there, it is still a long way up to high-level cognitive architectures, and only then to morality.
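As a concrete example of that last, legitimate kind of programming, here is a minimal leaky integrate-and-fire neuron, the standard textbook abstraction. This is a sketch under illustrative assumptions (all parameter values are made up for the example), not anyone's actual model.

```python
# A minimal leaky integrate-and-fire neuron: the kind of low-level
# modeling that programming is suited for, far below "morality".
# All parameter values are illustrative assumptions.
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, r=1e7,
                 v_rest=-0.070, v_threshold=-0.054, v_reset=-0.070):
    """Integrate dV/dt = (-(V - v_rest) + r*I) / tau; spike at threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_threshold:              # membrane potential hits threshold
            spike_times.append(step * dt) # record spike time, then reset
            v = v_reset
    return spike_times

# One second of constant 2 nA input drives a regular spike train (~30 Hz).
current = np.full(10000, 2e-9)
print(f"{len(simulate_lif(current))} spikes in 1 s")
```

Note how far this sits from anything like isMoral(): it captures one neuron's dynamics, and everything above it (architecture, cognition, morality) would have to emerge from vast numbers of such units interacting.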

Comment by ZZZling on I think I've found the source of what's been bugging me about "Friendly AI" · 2012-06-10T21:51:25.033Z · LW · GW

I'm not against other people having different points of view on AI. Everybody is entitled to their own opinions. However, in the recommended references I don't find answers to my questions. You can vote ME down without even trying to provide a logical argument, but those questions, and the alternative ideas about AI, will not go away. Other people will ask similar questions on other forums, or put forward similar ideas. And only the future will tell who is actually right!

Comment by ZZZling on I think I've found the source of what's been bugging me about "Friendly AI" · 2012-06-10T16:49:00.992Z · LW · GW

Why would an AI care about our wishes at all? Do we, humans, care about the wishes of animals, our evolutionary predecessors? We use them for food (sad, sad :((( ). Hopefully, a non-organic AI will not need us in such a frightening capacity. We also use animals for our amusement, as pets. Is that what we are going to be: pets? Well, in that case some of our wishes will be cared for. Not all of them, of course, and not in the way one might want. Foolish or dangerous wishes will not be heeded; otherwise we would simply destroy ourselves. Who knows, maybe the saying "God have mercy on us" will acquire a new, more specific meaning.

Comment by ZZZling on [SEQ RERUN] Heading Toward Morality · 2012-06-10T08:40:16.177Z · LW · GW

Are you serious? Do you really think that morality can be programmed on computers? Good luck, then. Pursuing even unrealistic goals can yield useful results. At the least, your effort will mark out more clearly the boundaries and limitations of the programming approach to the AI problem.

Comment by ZZZling on [SEQ RERUN] Ghosts in the Machine · 2012-06-10T08:27:42.489Z · LW · GW

I would be cautious about calling it noise or redundancy until we know exactly what is going on in there. Maybe we don't understand some key aspects of neural activity and dismiss them as mere noise. I read somewhere that the old idea that only a fraction of the brain's capacity is used is not actually true. I partially agree with you: modern computers can cope with neural network simulations, but IMO only at limited network sizes. And I don't expect dramatic simplifications here (rather, complications :) ). It will all start with simple neural networks modeled on computers. Forget about AI for now; that is a rather distant future. The first robots will be insect-like creatures. As they grow in complexity, real-time performance will become an issue, and that will be the driving force for considering other architectures to improve performance. Non-von-Neumann solutions will emerge, paving the way for further progress. This is what I think is going to happen.

Comment by ZZZling on [SEQ RERUN] Heading Toward Morality · 2012-06-09T08:39:46.865Z · LW · GW

Don't tell me you want to figure out how to "program" moral behavior :)

Comment by ZZZling on [SEQ RERUN] Ghosts in the Machine · 2012-06-09T03:57:51.131Z · LW · GW

That's not my point. Of course everything is reducible to a Turing machine, in theory. However, that does not mean you can make this reduction in practice, or that it would not be hopelessly inefficient. The von Neumann architecture implies its own hierarchy of information processing, which is good for programming various kinds of formal algorithms. However, IMHO, it does not support the hierarchy of information processing required for AI, which should be a neural network similar to the human brain. You cannot program, on a von Neumann computer, each and every algorithm or mode of behavior that a neural network is capable of producing. To me, many decades of futile attempts to build AI along these lines have already proven its practical impossibility. Only understanding how neural networks operate in nature, and implementing that type of behavior, can finally make a difference. So where does the von Neumann architecture fit in here? I see only one possible application: modeling the work of neurons. Given the complexity of the human brain (100 billion neurons, 100 trillion connections), this is a challenge for even the most advanced modern supercomputers. You can count on further performance improvements, of course, since Moore's law is still in effect, but that is not the kind of solution that is going to be practical. Perhaps neural circuits printed directly onto microchips will be the hardware for future AI brains.
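To make those numbers concrete, here is a back-of-envelope estimate under deliberately crude assumptions: a 4-byte weight per synapse and one multiply-accumulate per synapse per 1 ms timestep. Both figures are illustrative simplifications, not measurements.

```python
# Back-of-envelope cost of simulating a brain-scale network on a
# von Neumann machine, using the figures from the comment above.
# The per-synapse costs are deliberately crude illustrative assumptions.
NEURONS = 100e9        # ~100 billion neurons
SYNAPSES = 100e12      # ~100 trillion connections

BYTES_PER_SYNAPSE = 4          # one float32 weight, ignoring all overhead
OPS_PER_SYNAPSE_PER_STEP = 2   # one multiply-accumulate
STEPS_PER_SECOND = 1000        # 1 ms simulation timestep

memory_tb = SYNAPSES * BYTES_PER_SYNAPSE / 1e12
ops_per_second = SYNAPSES * OPS_PER_SYNAPSE_PER_STEP * STEPS_PER_SECOND

print(f"weight storage alone: {memory_tb:,.0f} TB")                  # ~400 TB
print(f"real-time compute:    {ops_per_second / 1e15:.0f} PFLOP/s")  # ~200 PFLOP/s
```

Even under these cartoonishly optimistic assumptions, the budget lands around 400 TB of weight storage and roughly 200 PFLOP/s for real time, beyond any single machine of the early 2010s, which is the point being made above.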

Comment by ZZZling on [SEQ RERUN] Ghosts in the Machine · 2012-06-08T08:28:56.528Z · LW · GW

AI cannot simply be "programmed" the way, for example, a chess game is. When we talk about computers, programming, languages, hardware, compilers, source code, etc., we are essentially implying the von Neumann architecture. This architecture represents a certain principle of information processing, which has its fundamental limitations. The ghost that makes an intelligence cannot be programmed into a von Neumann machine. It requires a different type of information processing, similar to the one implemented in humans. Real progress in building AI will be achieved only after we understand the fundamental principle that lies behind information processing in our brains. And it's not only us: even the primitive nervous systems of simple creatures use this principle and benefit from it. A simple kitchen cockroach is infinitely smarter than the most sophisticated robot we have built so far.