Everybody's talking about machine ethics
post by sbenthall · 2014-09-17T17:20:57.516Z · LW · GW · Legacy · 16 comments
There is a lot of mainstream interest in machine ethics now. Here are links to some popular articles on the topic.
By Zeynep Tufekci, a professor at the I School at UNC, on Facebook's algorithmic newsfeed curation and why Twitter should not implement the same.
By danah boyd, claiming that 'tech folks' are designing systems that implement an idea of fairness that comes from neoliberal ideology.
danah boyd (who spells her name with no capitalization) runs Data & Society, a "think/do tank" that aims to study this stuff. They've recently gotten MacArthur Foundation funding to study the ethical and political impact of intelligent systems.
A few observations:
First, there is no mention of superintelligence or recursively self-modifying anything. These scholars are interested in how machines that are already comparatively powerful will, in the near future, have moral and political impact on the world.
Second, these groups are quite bad at thinking about ethics in a formal or mechanically implementable way. They mainly seem to recapitulate the same tired tropes that have been resonating through academia for literally decades. By contrast, mathematical formulation of ethical positions appears to be ya'll's specialty.
Third, however indeterminate or presently unknowable the one true morality may be, progress towards implementable descriptions of various plausible moral positions could at least be an incremental step towards understanding how to achieve something better. In a possible slow-takeoff future, iterative testing and design of ethical machines with high computational power seems like low-hanging fruit that could only better inform longer-term futurist thought.
Personally, I try to do work in this area and find the lack of serious formal work deeply disappointing. This post is a combination heads-up and request to step up your game. It's go time.
Sebastian Benthall
PhD Candidate
UC Berkeley School of Information
Comments
comment by fubarobfusco · 2014-09-18T01:30:28.852Z · LW(p) · GW(p)
One of boyd's examples is a pretty straightforward feedback loop, recognizable to anyone with even the slightest systems engineering background:
Consider, for example, what’s happening with policing practices, especially as computational systems allow precincts to distribute their officers “fairly.” In many jurisdictions, more officers are placed into areas that are deemed “high risk.” This is deemed to be appropriate at a societal level. And yet, people don’t think about the incentive structures of policing, especially in communities where the law is expected to clear so many warrants and do so many arrests per month. When they’re stationed in algorithmically determined “high risk” communities, they arrest in those communities, thereby reinforcing the algorithms’ assumptions.
This system — putting more crime-detecting police officers (who have a nontrivial false-positive rate) in areas that are currently considered "high crime", and shifting them out of areas currently considered "low crime" — diverges under many sets of initial conditions and incentive structures. You don't even have to posit racism or classism to get these effects (although those may contribute to failing to recognize them as a problem); under the right (wrong) conditions, as t → ∞, the noise (that is, the error in the original believed distribution of crime) dominates the signal.
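Here's a minimal toy simulation of that dynamic (my own sketch with made-up numbers, not anything from boyd's piece): two districts with identical true crime rates, patrols sent wherever recorded crime is highest, and arrests recorded only where patrols go.

```python
# Toy model: the allocation algorithm is re-fit every year on arrest data
# that its own previous allocation generated.

true_rate = [0.30, 0.30]    # two districts with *identical* true crime rates
recorded = [10.0, 11.0]     # a tiny error in the initially believed distribution
officers = 100

for year in range(20):
    # send the patrols wherever recorded crime is currently highest
    hot = 0 if recorded[0] > recorded[1] else 1
    # arrests (true detections plus some false positives) only accrue
    # where the officers actually are
    recorded[hot] += officers * (true_rate[hot] + 0.05)

print(recorded)   # [10.0, 711.0]
```

Despite identical true rates, the one-unit error in the initial belief, not the underlying crime, determines which district the data ends up labelling "high crime": the noise dominates the signal.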
The ninth of Robert Peel's principles of ethical policing is surprisingly relevant: "To recognise always that the test of police efficiency is the absence of crime and disorder, and not the visible evidence of police action in dealing with them." [1]
comment by lukeprog · 2014-09-17T19:12:57.810Z · LW(p) · GW(p)
Seb, what kind of work do you "try to do" in this area? Do you have some blog posts somewhere or anything?
↑ comment by sbenthall · 2014-09-18T02:39:39.128Z · LW(p) · GW(p)
So there are some big problems with picking the right audience here. I've tried to make some headway with the community complaining about newsfeed algorithm curation (which interests me a lot, but may be more "political" than would interest you) here:
which is currently under review. It's a lot softer than would be ideal, but since I'm trying to convince these people to go from "algorithms, how complicated! Must be evil" to "oh, they could be designed to be constructive", it's a first step. More or less it's just opening up the idea that Twitter is an interesting testbed for ethically motivated algorithmic curation.
I've been concerned more generally with the problem of computational asymmetry in economic situations. I've written up something that's an attempt at a modeling framework here. It's been accepted only as a poster, because its results are very slim. It was like a quarter of a semester's work. I'd be interested in following through on it.
http://arxiv.org/abs/1206.2878
The main problem I ran into was not knowing a good way to model relative computational capacity; the best tools I had were big-O and other basic computational theory stuff. I did a little remote apprenticeship of sorts with David Wolpert at Los Alamos; he's got some really interesting stuff on level-K reasoning and what he calls predictive game theory.
http://arxiv.org/abs/nlin/0512015
(That's not his most recent version.) It's really great work, but hard math to tackle on one's own. In general my problem is that there isn't much of a community around this at Berkeley, as far as I can tell. Tell me if you know differently. There's some demand from some of the policy people--the lawyers are quite open-minded and rigorous about this sort of thing. And there's currently a ton of formal work on privacy, which is important but not quite as interesting to me personally.
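(For anyone who hasn't seen level-K reasoning before, here's a rough toy sketch -- mine, not Wolpert's actual formalism -- using the standard "guess 2/3 of the average" game, where the depth k of iterated best response is one crude stand-in for an agent's relative computational capacity.)

```python
def level_k_guess(k, anchor=50.0, factor=2 / 3):
    """A level-0 player guesses the naive anchor (the midpoint, 50);
    a level-k player best-responds to a population of level-(k-1) players."""
    guess = anchor
    for _ in range(k):
        guess *= factor   # best response if everyone else guesses `guess`
    return guess

for k in range(6):
    print(f"level-{k} agent guesses {level_k_guess(k):.2f}")
# Deeper reasoners converge toward the Nash equilibrium of 0; the gap
# between two agents' k values is one blunt way to talk about asymmetric
# computational capacity in a game.
```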
My blog is a mess and doesn't get into formal stuff at all, at least not recently.
comment by JoshuaMyer · 2014-09-19T19:20:26.114Z · LW(p) · GW(p)
They mainly seem to recapitulate the same tired tropes that have been resonating through academia for literally decades.
I'm fairly new here and would appreciate a brief informal survey of these tropes. Our brilliance aside, predicting which ideas will be new to you from context clues seems silly when you might be able to provide guidance.
Interestingly, a friend who attempted to write a program capable of verifying mathematical proofs (all of them -- a tad ambitious) said he ran into the exact same problem of
not knowing a good way to model relative computational capacity.
comment by leplen · 2014-09-23T04:14:16.144Z · LW(p) · GW(p)
I'm pleased to see this posted, as I think the idea of machine ethics represents an enormous opportunity for proponents of FAI, etc. Machine ethics allows us to start talking about the extent to which human values are encoded in algorithms and machines in the present day, and allows us to explain why tackling that problem here and now is worth doing. This is true even if the more apocalyptic claims in the Futurist/Superintelligence/Singularity meme-cluster never come to pass.
I would agree with your assessment that many of the people thinking about this have a humanities background, and that as a whole they don't seem particularly well-versed in some of the technical/mathematical issues underlying these problems.
I'm particularly interested in this issue because machines seem to operate better than humans at multi-dimensional optimization problems, which seems at least superficially similar to the complexity of value thesis.
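As a trivial sketch of what I mean (my own made-up candidates and scores, purely illustrative): a machine can mechanically scan options scored on several value dimensions at once and keep only the undominated ones, a filtering step humans do badly past two or three dimensions.

```python
# Hypothetical candidate policies scored on three value dimensions.
options = {
    "A": (0.9, 0.2, 0.5),
    "B": (0.6, 0.7, 0.6),
    "C": (0.3, 0.9, 0.8),
    "D": (0.5, 0.5, 0.4),
}

def dominated(x, others):
    """x is dominated if some distinct option is at least as good on every
    dimension (and therefore strictly better on at least one)."""
    return any(other != x and all(o >= xi for o, xi in zip(other, x))
               for other in others)

pareto = {name for name, score in options.items()
          if not dominated(score, list(options.values()))}
print(pareto)   # {'A', 'B', 'C'} -- D is dominated by B on every dimension
```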
comment by cameroncowan · 2014-09-20T18:19:26.907Z · LW(p) · GW(p)
I think it's important to remember what ethics is and why humanity created it in the first place. Ethics is essentially a code of behavior that uses morality as a guide to structure one's behavior to be beneficial to oneself while not harming the greater society.
As early humans we decided that if we were going to work in a group we had to do two things: 1) not kill each other, and 2) behave in a manner that is beneficial.
This is essentially the Golden Rule. We had to decide that in order to use numbers and perseverance and ultimately reason to get to the top of the food chain.
Where is the machine's motivation to continue on this path? If its only desire is to propagate itself, lots of things could get in the way of that, especially humanity. I think that for machines to get progressively more intelligent, and for that to be beneficial, we have to consider these problems and encode accepted social norms into machines at a very deep level.
comment by slutbunwaller · 2014-09-18T12:53:36.576Z · LW(p) · GW(p)
What's a LessWrong?
Mildly harmful pseudo scientific movement with delusions of cultish grandeur.
↑ comment by Nornagest · 2014-09-22T21:39:23.840Z · LW(p) · GW(p)
Presumably the reason there is no mention of superintelligence or recursively self-modifying anything is that those concepts don't exist with today's technology.
Superintelligence doesn't, but recursive self-modification has been a feature of AI research since the Seventies. As MIRI predicts, value stability proved to be a problem.
(Eurisko's authors solved it -- allegedly, there isn't much open data -- by walling off the agent's utility function from modification. This would be much harder to do to a superintelligent agent.)
↑ comment by Lumifer · 2014-09-18T15:28:50.536Z · LW(p) · GW(p)
Just a nitpick, but it should be y'all's not ya'll's.
Technically speaking, shouldn't it be all y'all's since it's plural? X-D
↑ comment by fubarobfusco · 2014-09-18T21:57:22.026Z · LW(p) · GW(p)
↑ comment by cameroncowan · 2014-09-20T18:21:23.693Z · LW(p) · GW(p)
It's better than "yous guys," which would be totally acceptable in Brooklyn and Queens and most of Northern New Jersey. But it is y'all because it's short for the 2nd person plural "you all."