Max Autonomy
post by blogospheroid · 2012-07-25T07:22:17.541Z
I would like to raise a discussion topic in the spirit of trying to quantify the risk from uncontrolled or unsupervised software.
What is the maximum autonomy that has been granted to an algorithm according to your best estimates? What is the likely trend in the future?
The estimates could be in terms of money, human lives, processes, etc.
Another estimate could be the time it takes for a human to step into the process and say, "This isn't right."
A high-speed trading algorithm has a lot of money on the line, while a drone might have lives on the line.
A lot of business processes might be affected by data coming in via an API from a system built on slightly different assumptions, resulting in catastrophic events, e.g. the 2010 Flash Crash: http://en.wikipedia.org/wiki/2010_Flash_Crash
The reason this topic might be worth researching is that it is a relatively easy-to-communicate risk of AGI. Many people may hold the implicit assumption that whatever software is deployed in the real world, there are humans to counterbalance it. For them, empirical evidence that they are mistaken about the autonomy given to present-day software may shift beliefs.
EDIT: formatting
8 comments
comment by [deleted] · 2012-07-25T13:54:03.051Z · LW(p) · GW(p)
I've thought about this, and as far as I can tell, the big step which will greatly increase algorithmic autonomy is delivery.
For instance, we currently have automated factories which can even run with the lights off, though the article notes that "Typically, workers are necessary to set up tombstones holding parts to be manufactured, and to remove the completed parts." It seems fair to call this delivery, because it amounts to the same thing (drop off this, pick up that, move it to another spot), so if you were to automate it, you could further cut the number of humans needed in the process. http://en.wikipedia.org/wiki/Lights_out_%28manufacturing%29
We are going to have automated vehicles soon, if Google's research and the legislation being passed by states are any indication. This is another important step in delivering things. http://en.wikipedia.org/wiki/Autonomous_car
So, if we can automate delivery, we could have:
Raw materials are mined, and then automatically delivered to an automated factory which automatically assembles them into a finished product which is automatically delivered to you when you place an order through an automated online system. That's a pretty massive amount of autonomy.
Thinking about it, it seems likely that automating delivery would cause a huge surge in algorithmic autonomy in the future. At the moment, there are a bunch of separate automatic systems, and all sorts of humans deliver things from one to the other.
You might have things like:
A hay trucker drives to a farm and delivers hay to the farm's hay storage.
The farmer delivers hay to the cows.
Cow milk is delivered to a cheese manufacturer, who makes cheese.
The cheese is delivered to the warehouse of a grocery store by another trucker,
and is then delivered to the front of the store and to restaurant owners by store stockers and cashiers.
Restaurant owners put the cheese into their automatic pizza machine to have it make pizza,
which is then delivered to customers, who eat the pizza.
The reason that pizza required so much human effort seems to be primarily that humans are delivering things from one automated system (say, an automated milker) to the next automated system (a cheese-making machine), to the next automated system (an automatic pizza-making machine), and finally to the customers. If you could automate that type of step in general, then you'd have a huge surge in algorithmic autonomy.
I don't want to imply that delivering things is easy or hard to figure out. But its automation would cause the greatest upward shift in algorithmic autonomy that I can think of. And without its automation, algorithms wouldn't be nearly as autonomous at all, because humans would have to keep picking things up at the end of one automatic system and moving them to the beginning of the next.
Is this the kind of focus you were thinking of?
Replies from: magfrump, blogospheroid
↑ comment by magfrump · 2012-07-29T14:10:18.835Z · LW(p) · GW(p)
Are pizzas usually made automatically these days?
I mean, I could imagine some frozen pizzas, but I've never heard of or experienced an "automatic pizza machine."
(I don't mean this as a substantial objection; I'm just wondering about your specific example.)
Replies from: None
↑ comment by [deleted] · 2012-07-30T13:34:35.900Z · LW(p) · GW(p)
I had recently heard about them, and I've put a link below to an article that shows a demo of one. They do exist, although I can't really give you specifics on how common they are, and they probably aren't currently used for most deliveries. I'd imagine a local pizza place probably has less automation, but the extent will definitely vary with how fast they need to produce pizzas.
http://www.huffingtonpost.com/2012/06/13/pizza-vending-machine-lets-pizza_n_1593115.html
↑ comment by blogospheroid · 2012-07-26T08:08:40.279Z · LW(p) · GW(p)
A very interesting way to look at the problem. Thanks for your thoughts.
comment by JoshuaFox · 2012-07-25T14:05:46.378Z · LW(p) · GW(p)
The question here is about goal-oriented systems that have done things that their creators did not expect and perhaps did not want, but which nonetheless help achieve the goal as originally defined.
Replies from: blogospheroid
↑ comment by blogospheroid · 2012-07-26T08:11:00.988Z · LW(p) · GW(p)
There is a lot of literature on misinterpretation of goals, Goodhart's law, etc. I wasn't looking at that specifically in this question, though it is part of the overall concern.
comment by OrphanWilde · 2012-07-25T13:59:34.760Z · LW(p) · GW(p)
I'm not sure autonomy is the right word in reference to an algorithm, as the laws for the algorithm, so to speak, have been set down by another agent, the programmer. That means the real question from a policy perspective may be what level of autonomy you should grant to the programmers.
The idea that there is a human to balance software has a weird opposite; I frequently see programs given "autonomous" access to data that the programmer isn't permitted to touch. If you wouldn't want programmers making life-and-death decisions, you don't want them codifying those decision-making processes, either.
I suspect the fact that programmers are making life-and-death decisions might be more alarming to many people than the idea that programs are making life-and-death decisions. For those who aren't alarmed by that possibility yet, a simple phrase may instill alarm: "These are the same kinds of people who programmed Outlook."
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-07-26T13:00:38.875Z · LW(p) · GW(p)
... I frequently see programs given "autonomous" access to data that the programmer isn't permitted to touch. If you wouldn't want programmers making life-and-death decisions, you don't want them codifying those decision-making processes, either.
In the context of the first quoted sentence, the second is false. An independent code review can make the program far more secure than any one of the reviewers would be.
If you mean something like "we don't have a good decision procedure for this case", then that's not a programming problem; it's a domain-knowledge problem.