[Linkpost] "Blueprint for an AI Bill of Rights" - Office of Science and Technology Policy, USA (2022)

post by Fer32dwt34r3dfsz (rodeo_flagellum) · 2022-10-05T16:42:37.471Z · LW · GW · 4 comments

This is a link post for https://www.whitehouse.gov/ostp/ai-bill-of-rights/

Contents

  Summary of the press release 
  Structure of the Report 
  Other
    Before 2025, will laws be in place requiring that AI systems that emulate humans must reveal to people that they are AI?

The PDF can be found here: https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

Some quick notes regarding this linkpost: 

My quick take: 

Summary of the press release 

Many technologies can pose, or already do pose, a threat to democracy. There are plenty of cases where these technological "tools" limit their users more than they help them. Examples include problems with the use of technology in patient care, algorithmic bias in credit and hiring decisions, and widespread breaches of user data privacy. Of course, automation is generally helpful (e.g., in agricultural production, severe weather prediction, and disease detection), but we need to ensure that "progress must not come at the price of civil rights...". 

"The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats"

The Office of Science and Technology Policy proposes five design principles for minimizing harm from automated systems:

1. Safe and Effective Systems
2. Algorithmic Discrimination Protections
3. Data Privacy
4. Notice and Explanation
5. Human Alternatives, Consideration, and Fallback

Structure of the Report 

(so you can get a sense of what the report contains)

Other

This linkpost exists in part because casens commented on the report's release under the following Metaculus question, to which the report is relevant. 

AI-Human Emulation Laws before 2025 

Before 2025, will laws be in place requiring that AI systems that emulate humans must reveal to people that they are AI?

4 comments


comment by Richard_Kennaway · 2022-10-05T22:01:34.390Z · LW(p) · GW(p)

Phew! From the title I first thought it would be about some under-employed bureaucrats drawing up rights for the AIs themselves.

Replies from: MSRayne
comment by MSRayne · 2022-10-05T22:27:59.717Z · LW(p) · GW(p)

That actually would also be worthwhile. We will have AGI soon enough, after all, and I think it's hard to argue that it wouldn't be sentient and thus deserving of rights.

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2022-10-06T00:06:04.138Z · LW(p) · GW(p)

AIXI contains sentient minds, but isn't itself sentient. I suspect there are designs of minds that are highly competent at many problems, yet have a mental architecture totally different from humans', such that if we had a clearer idea what we meant by "sentient", we would agree the AI wasn't sentient. 

Also, how long do we have sentient AI before the singularity? If the first sentient AI is a paperclipper that destroys the world, any bill of "sentient AI rights" is pragmatically useless. 

comment by Charlie Steiner · 2022-10-05T18:53:39.248Z · LW(p) · GW(p)

SAFE AND EFFECTIVE SYSTEMS
[...]

Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems.

It would be an interesting timeline if this language actually helped lobbyists shut down large AGI projects based on a lack of mitigation of foreseeable impacts.