A rant against robots
post by Lê Nguyên Hoang (le-nguyen-hoang-1) · 2020-01-14T22:03:19.668Z · LW · GW · 7 comments
What comes to your mind when you hear the words "artificial intelligence" (or "artificial general intelligence")? And if you want to prepare for the future, what should come to your mind?
It seems that when most people hear "AI", they think of robots. Oddly, this holds for laypeople and top academics alike. Stuart Russell's book (which I greatly enjoyed) is one example: it often presents a robot as its go-to example of an AI.
But this seems problematic to me. I believe that we should dissociate AIs from robots much more sharply. In fact, given that most people will nevertheless think of robots when we discuss AIs, we might even want to use the term algorithms rather than AIs. And perhaps algorithms with superhuman-level world models and planning capabilities instead of AGIs...
To defend this claim, I shall argue that the most critical aspects of today's and tomorrow's world-scale ethical problems (including x-risks) have to do, and will have to do, with algorithms, not robots. Moreover, and most importantly, the example of robots suggests concerns and solutions that are in fact largely irrelevant to algorithms. Finally, I'll conclude by arguing that the example of today's large-scale algorithms is actually useful, because it motivates AI alignment.
It's about algorithms, not robots!
AIs that matter are algorithms
Today's AI research is mostly driven by non-robotic applications, from natural language processing to image analysis, from protein folding to query answering, from autocompletion to video recommendation. This is where the money is. Google is investing (hundreds of?) millions of dollars in improving its search engine and YouTube's recommendation system. Not in building robots.
Today's ranking, content moderation and automated labeling algorithms are already far more influential than robots. YouTube's algorithms have arguably become the biggest opinion-maker worldwide. They present risks and opportunities on a massive scale.
And there seems to be a significant probability that tomorrow's most influential algorithms will be somewhat similar, even if they achieve artificial general intelligence. Such algorithms will likely be disembodied, running in the cloud, with numerous copies of themselves stored in data centres and terminals throughout the world.
And they will be extremely powerful. Not because they wield great physical force, but because they control the flow of information.
The power of information
At the heart of the distinction between algorithms and robots is the distinction between information and matter. Physics has long turned our attention towards matter and energy. Biology studied animals, plants and key molecules. Historians focused on monuments, artefacts and industrial revolutions. But as these fields grew, they all seem to have paid more and more attention to information. Physics studied entropy. Biology analyzed gene expression. History celebrated the invention of language, writing, printing, and now computing.
Arguably, this is becoming the case for all of society. Information has become critical to every government, every industry and every charity. Essentially all of today's jobs are really information processing jobs: they are about collecting, storing, processing and emitting information. This very blog post was written by collecting information, which was then processed and is now being emitted.
By collecting and analyzing information, you can have a much better idea of what is wrong and what to do. And crucially, by emitting the right information to the right entities, you can start a movement, manipulate individuals and start a revolution. Information is what changes the world.
Better algorithms are information game changers
We humans used to be the leading information processing units on earth. Our human brains were able to collect, store, process and emit information in a way that nothing else on earth could.
But now, there are algorithms. They can collect, store, process and emit far more information than any group of humans ever could. They can now figure out what is wrong and what to do, sometimes far better than we humans can, by learning from information that we humans could not collect, store, process and emit. Algorithms can start movements, manipulate individuals and start revolutions on a global scale. They have become the most powerful entities on earth.
In fact, because such powerful algorithms are deployed by the world's most powerful companies, which also have huge incentives to make their algorithms more capable, it seems much more likely to me that the first algorithm with superhuman-level world models and planning capabilities will resemble YouTube's recommendation algorithm rather than a robot. Recall that such an algorithm has access to a truly massive amount of data from all over the world. And that data is clearly critical to algorithmic capabilities.
As another example, an algorithm able to send messages through the internet to get a 3D printer to print killer drones seems a lot more dangerous than any of the killer drones it creates...
This is why I believe that the biggest challenges of AI safety and ethics likely have little to do with robots. These challenges rather concern information and information processing. They are about algorithms.
Not robots. Algorithms.
The case of robots is very misleading
It would be fine to still focus on robots if they were similar enough to algorithms. In the end, I don't really care why you would want to solve AI or AGI safety; it just matters (to me) that you do want to solve AI or AGI safety.
Unfortunately, having a robot in mind as an example of an AI or AGI also seems extremely misleading. In fact, many AGI safety debates could probably be cut short by focusing on algorithms rather than robots.
Distributed algorithms are really hard to interrupt
Let's take the case of safe interruptibility. Many AI safety critics would say that this isn't a problem, because you can just unplug the AI. Well, admittedly, if a robot is not skillful enough to prevent you from unplugging it, and if you have access to its plug, yes, sure, you could probably unplug it.
But now try to unplug an algorithm. Especially a distributed algorithm like the YouTube recommendation system or the Google search engine! Even if you were the CEO of Google, I'm skeptical you would be able to interrupt these algorithms.
It gets worse. Try to unplug Bitcoin. Essentially, you would have to unplug all of the Internet... Good luck with that! This is because Bitcoin was designed to be uninterruptible by any small group of users. That is the whole point of such distributed protocols: they are designed to be so-called Byzantine fault tolerant, that is, to keep running even when some of their nodes fail or turn malicious.
It seems more than reasonable to assume that any algorithm with superhuman-level world knowledge and planning capabilities will make sure that it, too, is Byzantine fault tolerant.
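To make this concrete, here is a minimal, hypothetical Python sketch of why replication makes a service so hard to interrupt. It only models crash faults (nodes being switched off), not full Byzantine faults (nodes actively misbehaving), and all names are made up for illustration; the point is simply that as long as a majority of replicas survives, the service keeps answering.

```python
import random

class Node:
    """One replica of the service, holding a full copy of its state."""
    def __init__(self, node_id, state):
        self.node_id = node_id
        self.state = state
        self.alive = True

class ReplicatedService:
    """A toy service replicated across several nodes."""
    def __init__(self, num_nodes=7, state="serve recommendations"):
        self.nodes = [Node(i, state) for i in range(num_nodes)]

    def unplug(self, node_ids):
        """An operator (or attacker) switches off a subset of the nodes."""
        for node in self.nodes:
            if node.node_id in node_ids:
                node.alive = False

    def handle_request(self):
        """The service answers as long as a majority quorum is still alive."""
        alive = [n for n in self.nodes if n.alive]
        if len(alive) > len(self.nodes) // 2:
            return random.choice(alive).state
        return None  # only now is the service actually interrupted

service = ReplicatedService()
service.unplug({0, 1, 2})        # unplug three of the seven replicas...
print(service.handle_request())  # ...and the service still answers
```

Unplugging Bitcoin or YouTube's recommendation system is the same story, except with thousands of machines instead of seven, spread across owners and jurisdictions.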
Algorithms work on very different space and time scales
Another key feature of robots that is misleading is that we usually expect them to interact with us at our space and time scales. Intuitively, whatever a robot says can be analyzed. And if what it says is suspicious, we would have time to correct it before it causes harm.
The case of large-scale algorithms like the YouTube recommendation system is very different. YouTube "speaks" at the rate of millions of recommendations per minute. It "reads" at the rate of 500 hours of video per minute, plus millions of new human behaviours per minute. And YouTube does so on a global scale.
In particular, this means that no human could ever check even a small fraction of what this algorithm does. The mere oversight of large-scale algorithms is way beyond human capability. We need algorithms for algorithmic surveillance.
Today's algorithms already need alignment!
Finally, and perhaps most importantly, robots just aren't here. Even self-driving cars have yet to be commercialized. In this context, it's hard to get people to care about AGI risks, or about alignment. Robots are not familiar to them; they are associated with science fiction and other dubious futuristic stories.
Conversely, large-scale, hugely influential and sophisticated algorithms are already here. And they're already changing the world, with massive, unpredictable and uncontrollable side effects. In fact, it is such side effects of deployed algorithms that pose existential risks, especially if algorithms gain superhuman-level world models and planning capabilities.
Interestingly, today's algorithms also already pose huge ethical problems that absolutely need to be solved. Whenever a user searches "vaccine", "Trump" or "AGI risks" on YouTube, there's an ethical dilemma over which video should be recommended first. Sure, it's not a life-or-death situation (though "vaccine" could be). But this occurs billions of times per day! And it might make a young scholar mock AGI risks rather than be concerned about them.
Perhaps most interestingly to me, alignment (that is, making sure the algorithm's goal is aligned with ours) already seems critical to make today's algorithms robustly beneficial. This means that by focusing on the example of today's algorithms, it may be possible to convince AI safety skeptics to do research that is nevertheless useful to AGI safety. As an added bonus, we wouldn't need to sacrifice any respectability.
This is definitely something I'd sign up for!
Conclusion
In this post, I briefly shared my frustration at seeing people discuss AIs and robots in the same sentence, without clearly distinguishing the two. I think this attitude is highly counter-productive for the advocacy of AI risk and for research in AI safety. I believe that we should insist a lot more on the importance of information, and of information processing through algorithms. This seems to me to be a more effective way to promote quality discussion and research on algorithmic alignment.
7 comments
comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2020-01-16T06:50:43.691Z · LW(p) · GW(p)
This is one of those things that seems obvious but it did cause some things to click for me that I hadn't thought of before. Previously my idea of AGI becoming uncontrollable was basically that somebody would make a superintelligent AGI in a box, and we would be able to unplug it anytime we wanted, and the real danger would be the AGI tricking us into not unplugging it and letting it out of the box instead. What changed this view was this line: "Try to unplug Bitcoin." Once you think of it that way it does seem pretty obvious that the most powerful algorithms, the ones that would likely first become superintelligent, would be distributed and fault-tolerant, as you say, and therefore would not be in a box of any kind to begin with.
↑ comment by Donald Hobson (donald-hobson) · 2020-01-16T23:39:50.278Z · LW(p) · GW(p)
that the most powerful algorithms, the ones that would likely first become superintelligent, would be distributed and fault-tolerant, as you say, and therefore would not be in a box of any kind to begin with.
Algorithms don't have a single "power" setting. It is easier to program a single computer than to build a distributed, fault-tolerant system. Algorithms like AlphaGo are run on a particular computer with an off switch, not spread around. Of course, a smart AI might soon load its code all over the internet, if it has access. But it would start in a box.
↑ comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2020-01-18T00:11:50.240Z · LW(p) · GW(p)
Funny you mention AlphaGo, since the first time AlphaGo (or indeed any computer) beat a professional go player (Fan Hui), it was distributed across multiple computers. Only later did it become strong enough to beat top players running on a single computer.
comment by [deleted] · 2020-01-16T13:28:45.412Z · LW(p) · GW(p)
So, people who work on AI are not ignorant of the arguments that you make. The common association of AI with robotics, while not an exclusive arrangement, is one that is carefully thought out and not just a manifestation of anthropic bias. Of course there's the point that robotic control requires a fair amount of AI--it is neurons that control our muscles--and so the fields naturally go together. But a more fundamental aspect you might not be considering is googable with the phrase "embodied intelligence":
It is likely the case that our own higher intelligence is shaped by the fact that we are a minimally biased general intelligence (our brain's neocortex) layered on top of a perceptual input and robotic control system (our senses and body). So the theory goes that much of our learning heuristics and even moral instincts are developed, as babies, by learning how to control our own bodies, then generalizing that knowledge. If this is true, there is a direct connection between embodiment and both general intelligence and moral agency, at least in people. This is part of why, for example, Ben Goertzel's project is to create an "artificial toddler" using Hanson Robotics and OpenCog, or why AGI researchers at MIT have been so fascinated with making robots play with blocks.
I don't think the connection between robotics and AI is as tenuous as you make out. An intelligence need not be embodied, but that raises the bar significantly, as now a lot of the priors have to be built in rather than learned. It's easier to just give it effectors and sense organs and let it learn that on its own.
↑ comment by Lê Nguyên Hoang (le-nguyen-hoang-1) · 2020-01-16T20:27:51.621Z · LW(p) · GW(p)
This is probably more contentious. But I believe that the concept of "intelligence" is unhelpful and causes confusion. Typically, Legg-Hutter intelligence does not seem to require any "embodied intelligence".
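For reference, here is the Legg-Hutter definition as I recall it (Legg & Hutter, 2007). Note that nothing in the formula refers to a body, only to abstract environments and policies:

```latex
% Universal intelligence of a policy \pi (Legg & Hutter, 2007), as I recall it:
% a simplicity-weighted sum of expected rewards over all computable environments.
\[
  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]
% E: the set of computable, reward-bounded environments
% K(\mu): the Kolmogorov complexity of environment \mu
% V^{\pi}_{\mu}: the expected cumulative reward of policy \pi in environment \mu
```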
I would rather stress two key properties of an algorithm: the quality of the algorithm's world model and its (long-term) planning capabilities. It seems to me (but maybe I'm wrong) that "embodied intelligence" is not very relevant to world model inference and planning capabilities.
↑ comment by [deleted] · 2020-01-17T00:08:43.296Z · LW(p) · GW(p)
Typically, Legg-Hutter intelligence does not seem to require any "embodied intelligence".
Don't make the mistake of basing your notions of AI on uncomputable formalisms. That mistake has destroyed more minds on LW than probably anything else.
comment by Lê Nguyên Hoang (le-nguyen-hoang-1) · 2020-01-16T07:38:52.487Z · LW(p) · GW(p)
By the way, I've just realized that the Wikipedia page on AI ethics begins with robots. 😤