There are so many unexamined assumptions in this argument. Why do you assume that a super intelligent AI would find humanity wanting? You admit it would be different than us. So, why would it find us inferior? We will have qualities it doesn't have. There is nothing to say it wouldn't find itself wanting. Moreover, even if it did, why is it assumed that it would then decide humanity must be destroyed? Where does that logic come from? That makes no sense. I suppose it is possible but I see no reason to think that is certain or some sort of necessary conclusion. I find dogs wanting but I don't desire to murder them all. The whole argument assumes that any super intelligent being of any sort would look at humanity and necessarily and immediately decide it must be destroyed.
That is just people projecting their own issues and desires onto AI. They find humanity wanting for whatever reason, and if they were in a position above it where they could destroy it, they would conclude it must be destroyed. Therefore, any AI would do the same. To that I say: stop worrying about AI, get a shrink, and start worrying about your view of humanity.
If number 1 is true, then AI isn't a threat. It will never go crazy and cause harm. It will just do a few harmless and quirky things. Maybe that will be the case. If it is, Yudkowsky is still wrong. Beyond that, AI isn't going to solve these problems. To think that it will is moonshine. It assumes that solving complex and difficult problems is just a question of time and calculation. Sadly, the world isn't that simple. Most of the "big problems" are big because they are moral dilemmas with no answer that doesn't require value judgements and comparisons, and those simply cannot be made by sheer force of intellect.
As far as point two goes, you say, "It can, however, make evaluations/comparisons of its human wannabe-overlords and find them very much inferior, infinitely slower and generally rather of dubious reliability." You are just describing it being human and having human emotions. It is making value and moral judgements on its own. That is the definition of being human and having moral agency.
Then you go on to say "If the future holds something of a Rationality-rating akin to a Credit rating, we'd be lucky to score above Junk status; the vast majority of our needs, wants, drives and desires are all based on wanting to be loved by mommy and dreading death. Not much logic to be found there. One can be sure it will treat us as a joke, at least in terms of intellectual prowess and utility."
That is the sort of laughable nonsense that only intellectuals believe. There is no such thing as something being "objectively reasonable" in any ultimate sense. Reason is just the process by which you think. That process can produce any result you want provided you feed it the right assumptions. What seems irrational to you can be totally rational to me if I start with different assumptions or different perceptions of the world than you do. You can reason yourself into any conclusion. They are called rationalizations. The idea that there is an objective thing called "reason" which gives a single path to the truth is 8th grade philosophy and why Ayn Rand is a half-wit. The world just doesn't work that way. A super AI is no more or less "reasonable" than anyone else, and its conclusions are no more or less reasonable or true than any other conclusions. Pretending otherwise is just faith-based worship of reason and computation as some sort of ultimate truth. It isn't.
"The chances that a genuinely rule- and law-based society is more fair, efficient and generally superior to current human societies is 1"
A society with rules tempered by values and human judgement is fair and just to the extent human societies can be. A society that is entirely rule-based, tempered by no judgement of values, is monstrous. Every rule has a limit, a point where applying it becomes unjust and wrong. If it were just a question of having rules and applying them to everything, ethical debate would have ended thousands of years ago. It isn't that simple. Ethics lies in the middle: rules are needed right up to the point they are not. Sadly, the categorical imperative didn't settle the issue.
No it doesn't. It is just more of the same nonsense. "AI could defeat all of humanity," but it never explains how that happens. I think what is going on here is that very intelligent people are thinking about these things. Being intelligent, their blind spot is to grossly overestimate the importance of raw intelligence. So, they see AI as being more intelligent than all of humanity and then immediately assume that means it will defeat and enslave humanity, as if intelligence were the only thing that mattered. It isn't the only thing that matters. The physical world and brute force matter too. Smart people have a bad habit of forgetting that.
Oh really? Will it have the ability to run an entire lab robotically to do that? If not, then it won't be the AI doing anything. It will be the people doing it. Its power to do anything in the physical world only exists to the extent humans are willing to grant it.
If the US doesn't develop it, you can be assured that China and Russia will. US scientists would likely develop it more quickly, but assuming it is possible, Chinese and Russian scientists, given enough time and resources, will develop it eventually. If it is possible, there is no stopping it from happening. Someone will do it. It is pointless to pretend otherwise.
I live in the physical world. For a computer program to kill me, it has to have power over the physical world and some physical mechanism to do that. So, anyone claiming that AI is going to destroy humanity needs to explain the physical mechanism by which that will happen. This article, like every other one I have seen making that argument, fails to do that.
Assuming for the sake of argument that uncontrollable strong AI can be created, I disagree with Mr Yudkowsky's claim that it is a threat to humanity. In fact, I don't think it is going to be useful at all. First, there still is such a thing as the physical world. Okay, there is strong AI; it can't be controlled, and it decides to murder humanity. How is it going to do that? You can't murder me in the cyber realm. You can aggravate me or harm me, but you can't kill me. If you want to claim that AI is going to wipe out humanity, then you need to explain the physical mechanism or mechanisms by which that can happen. As far as I can tell, neither Mr Yudkowsky nor anyone on his side of the debate ever does that. They make the very persuasive argument that strong AI can't be controlled and just assume that means the end of humanity. Without a physical mechanism for the AI to accomplish that, it doesn't mean the end of anything, in the physical world at least. I don't think that is a step that can just be skipped over the way everyone in this field seems to think it can.
The bigger issue, however, is that the uncontrollable nature of strong AI, or even really good weak AI, makes it useless. AI is a machine. It is created to do something. Why are machines created? Man creates machines for two reasons: for the machine to do something faster or in a more powerful way than he can, and to do that something in a consistent way. Take a calculator for example. The calculator's value lies not just in its ability to do simple math problems faster than a human but also in its ability to do them in a completely consistent way. We all know how to do long division. Yet, if we were all given the task of doing 10,000 long division problems, we would almost certainly get some of them wrong. We would forget to carry a number, or transpose a number when writing it, or maybe just get bored, decide the task was a waste of time, and get some wrong intentionally. A calculator, however, could do 10 million or 100 billion long division problems and never get a single one wrong. It can't get them wrong. It is a machine. That is the whole point of having it.
Now imagine the calculator is a strong AI program. Then it becomes just like a human being. Maybe it will give me the right answer, but maybe it won't. It might give a hundred right answers and then slip in a wrong one for reasons I will never understand. When Yudkowsky and others correctly argue that AI is uncontrollable, what they are also saying is that it is not reliable. If it is not reliable, it is worthless and will not be adopted in anything like the degree its supporters think it will be. Companies and individuals who try to use these programs will quickly find out that, since they can't control the program, they can't trust its answers to problems. No one wants a machine or a computer program or an employee they can't trust. If strong AI ever becomes a reality, I imagine a few big institutions will adopt it and quickly realize their mistake. So, I can't see how strong AI ever gets adopted widely enough to have the power to destroy humanity.
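To make the contrast concrete, here is a minimal sketch in Python. Everything in it is invented purely for illustration: the function names and the tiny error rate are my own toy stand-ins, not drawn from any real AI system. The point is only that a deterministic routine never deviates, while a process with even a very small, opaque error rate slips wrong answers in at places no one can predict.

```python
import random

# Deterministic "calculator": integer division. Same inputs always give the same output.
def calculator_divide(a: int, b: int) -> int:
    return a // b

# Toy stand-in for an uncontrollable answerer: usually right, occasionally wrong,
# for reasons the caller never sees. (Purely illustrative; not any real AI system.)
def unreliable_divide(a: int, b: int, error_rate: float = 0.0001) -> int:
    answer = a // b
    if random.random() < error_rate:
        answer += random.choice([-1, 1])  # silent, unexplained error
    return answer

if __name__ == "__main__":
    trials = 1_000_000
    calc_errors = sum(calculator_divide(n, 7) != n // 7 for n in range(1, trials + 1))
    toy_errors = sum(unreliable_divide(n, 7) != n // 7 for n in range(1, trials + 1))
    print(f"calculator wrong answers: {calc_errors}")  # always 0
    print(f"toy answerer wrong answers: {toy_errors}")  # roughly 100, scattered unpredictably
```

Run it and the calculator column is zero every time, while the toy answerer is wrong roughly a hundred times out of a million, and you cannot tell in advance which hundred. A machine whose whole value is consistency loses that value the moment its errors become unpredictable.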
I don't think the danger of AI is that it is going to blow up the world. I think the danger is that it will be substituted for human judgement in practical and moral decisions. If something is not done, we are going to wake up one day and find that every decision that affects our lives, from whether we get a job or can rent an apartment to whether we can get a loan, have a bank account or own a car, will be made by AI systems running algorithms even their creators don't fully understand, without any transparency, standards or accountability.
One of the reasons why bureaucracies of any kind love rules so much is that the existence of specific and detailed rules enables bureaucrats to make decisions without any moral accountability. There is no greater moral cop-out than doing something because "the rules require it". AI takes this sort of inhuman, rules-based decision making to an entirely different level. With AI, the bureaucrats don't even have to make the rules. They can let an AI program both make the rules and make the decisions, allowing them to exercise power without any transparency or accountability. "I don't know why I can't give you this job, but the machine says I can't and I have to follow what it says." That is the danger of AI. The concerns about it destroying the world are just a distraction.