Ownership and Artificial Intelligence
post by jferguson · 2010-10-31T15:44:38.802Z
(This is a subject that appears incredibly important to me, but it's received no discussion on LW from what I can see with a brief search. Please do link to articles about this if I missed them.)
Edit: This is all assuming that the first powerful AIs developed aren't exponentially self-improving; if there's no significant period during which powerful AIs exist but aren't yet so powerful that ownership relations between them and their creators cease to matter, these questions are obviously not important.
What are some proposed ownership situations between artificial intelligence and its creators? Suppose a group of people creates some powerful artificial intelligence that appears to be conscious in most/every way--who owns it? Should the AI legally have self-ownership, and all the responsibility for its actions and ownership of the results of its labor that implies? Or, should strong AI be protected by IP, the way non-strong AI code already can be, treated as a tool rather than a conscious agent? It seems wise to implore people to not create AIs that want to have total free agency and generally act like humans, but that's hardly a guarantee that nobody will, and then you have the ethical issue of not being able to just kill them once they're created (if they "want" to exist and appear genuinely conscious). Are there any proposed tests to determine whether a synthetic agent should be able to own itself or become the property of its creators?
I imagine there aren't yet good answers to all these questions, but surely, there's some discussion of the issue somewhere, whether in rationalist/futurist circles or just sci-fi. Also, please correct me on any poor word choice you notice that needlessly limits the topic; it's broad, and I'm not yet completely familiar with the lingo of this subject.
15 comments
comment by JGWeissman · 2010-10-31T17:35:01.003Z
Are there any proposed tests to determine whether a synthetic agent should be able to own itself or become the property of its creators?
It seems wise to implore people to not create AIs that want to have total free agency and generally act like humans
comment by NihilCredo · 2010-10-31T17:47:07.169Z
surely, there's some discussion of the issue somewhere, whether in rationalist/futurist circles or just sci-fi.
Understatement of the year?
comment by NancyLebovitz · 2010-10-31T15:56:00.695Z
For a science fictional handling, see The Life Cycle of Software Objects by Ted Chiang. It's about various implications of sentient software pets.
Charles Stross' Saturn's Children (robots are imprinted on humans, but the human race is gone) might also be of interest, though it's a less likely scenario, since the robots are based on slightly modified recordings of human minds/brains.
comment by Relsqui · 2010-11-02T06:44:41.424Z
I for one do not have a problem with discussing scenarios other than the one which is deemed to be most likely, important, or scary. (I don't particularly have anything to contribute to it either, just wanted there to be another viewpoint in the comments.)
comment by jferguson · 2010-10-31T20:22:26.048Z
I see this isn't very well-received. Could anyone do me the favor of explaining why? Is it just because I'm asking questions that people believe have already been addressed here? I'm new to posting on LW.
↑ comment by whpearson · 2010-10-31T22:37:03.592Z
Some people here have the opinion that AI will definitely or very likely be immensely powerful and uncontrollable from the start.
So any argument that doesn't share this premise won't be well received. If you want to talk about things like this, frame it as: assuming no FOOM, what would happen, or what should we do?
However, lesswrong is not really a good place for this sort of discussion. I don't really think there is a good place. Until we know how the technology for AI will work, discussion seems moot.
For what it is worth, I expect that for certain sorts of AI the courts will adapt the laws for animals.
↑ comment by JoshuaZ · 2010-11-02T04:28:44.917Z
If AI don't FOOM, or if friendliness or Friendliness turns out to be easy to establish, then the issues with AI become much more minor. It is only in those situations that your question becomes worth considering. The question then becomes interesting intellectually, but probably not one that requires massive resources. The concerns over AI are primarily due to FOOMing + potential unFriendliness.
comment by WrongBot · 2010-10-31T20:51:32.819Z
A software agent with enough optimizing power to make this question relevant will do whatever it wants (i.e., has been programmed to want). Worrying about ownership at that point seems misplaced.
↑ comment by jferguson · 2010-10-31T21:11:57.359Z
Suppose a powerful AI commits a serious crime, and the reason it wanted to commit that crime wasn't because it was explicitly programmed to commit it, but instead emerged as a result of completely benign-appearing learning rules it was given. Would the AI be held legally liable in court like a person, or just disabled and the creators held liable? Are the creators liable for the actions of an unfriendly AI, even if they honestly and knowledgeably attempted to make it friendly?
Or, say that same powerful AI designs something, by itself, that could be patented. Can the creators completely claim that patent, or would it be shared, or would the AI get total credit?
If this strong AI enters a contract with a human (or another strong AI) for whatever reason, would/should a court of law recognize that contract?
These are all questions that seem relevant to the broader concept of ownership.
↑ comment by WrongBot · 2010-11-01T04:25:41.640Z
The answers to your questions are, in order:
- Who does the AI want to be held accountable for the crime?
- Who does the AI want to get credit for the invention?
- Does the AI want the court to recognize their contract?
This is presuming, of course, that all humans have not been made into paperclips.
↑ comment by jferguson · 2010-11-01T04:38:17.916Z
Do you believe that there's truly no chance a powerful AI wouldn't immediately dominate human society? Or restated: will a strong AI, if created, necessarily be unfriendly and also able to take control of human society (likely meaning exponentially self-improving)?
↑ comment by WrongBot · 2010-11-01T20:07:53.842Z
Will a strong AI, if created, necessarily be unfriendly?
It's very likely, but not necessary.
Will it necessarily be able to take control of human society (likely meaning exponentially self-improving)?
If it's substantially smarter than humans, yes, whether or not massively recursive self-improvement plays a role. By "substantially smarter", I mean an intelligence such that the difference between Einstein and the average human looks like a rounding error in comparison.
↑ comment by jferguson · 2010-11-03T04:37:03.993Z
What do you think a meaningful probability, if one can be assigned, would be for the first strong AI exhibiting both of those traits? (Not trying to "grill" you; I can't even imagine a good order of magnitude to put on that probability.)
↑ comment by WrongBot · 2010-11-03T11:22:21.871Z
I don't think I can come up with numerical probabilities, but I consider "massively smarter than a human" and "unfriendly" to be the default values for those characteristics, and don't expect the first AGI to differ from the default unless there is a massive deliberate effort to make it otherwise.