The Office of Science and Technology Policy put out a request for information on A.I.

post by HiroSakuraba (hirosakuraba) · 2023-05-24T13:33:30.672Z

This is a link post for https://www.whitehouse.gov/wp-content/uploads/2023/05/OSTP-Request-for-Information-National-Priorities-for-Artificial-Intelligence.pdf

This request for information does cover some possible existential risks in its first section. I am going to submit a few responses of my own, and I hope others will do the same.

1. What specific measures – such as standards, regulations, investments, and improved trust and safety practices – are needed to ensure that AI systems are designed, developed, and deployed in a manner that protects people’s rights and safety? Which specific entities should develop and implement these measures?

2. How can the principles and practices for identifying and mitigating risks from AI, as outlined in the Blueprint for an AI Bill of Rights and the AI Risk Management Framework, be leveraged most effectively to tackle harms posed by the development and use of specific types of AI systems, such as large language models?

3. Are there forms of voluntary or mandatory oversight of AI systems that would help mitigate risk? Can inspiration be drawn from analogous or instructive models of risk management in other sectors, such as laws and policies that promote oversight through registration, incentives, certification, or licensing?

4. What are the national security benefits associated with AI? What can be done to maximize those benefits?

5. How can AI, including large language models, be used to generate and maintain more secure software and hardware, including software code incorporating best practices in design, coding, and post-deployment vulnerabilities?

6. How can AI rapidly identify cyber vulnerabilities in existing critical infrastructure systems and accelerate addressing them?

7. What are the national security risks associated with AI? What can be done to mitigate these risks?

8. How does AI affect the United States’ commitment to cut greenhouse gases by 50-52% by 2030, and the Administration’s objective of net-zero greenhouse gas emissions no later than 2050? How does it affect other aspects of environmental quality?

4 comments

comment by Akash (akash-wasil) · 2023-05-25T13:37:50.099Z

I've been working on a response to the NTIA request for comments on AI Accountability over the last few months. It's likely that I'll also submit something to the OSTP request.

I've learned a few useful things from talking to AI governance and policy folks. Some of it is fairly intuitive but still worth highlighting (e.g., avoid jargon; remember that the reader doesn't share many of the assumptions people in AI safety take for granted; remember that people have many different priorities). Some of it is less intuitive (e.g., what actually happens with the responses, how long your response should be, how important it is to say something novel, and what kinds of things policymakers are actually looking for).

If anyone is looking for advice, feel free to DM me. 

comment by Olli Järviniemi (jarviniemi) · 2023-05-25T09:30:31.449Z

For coordination purposes, I think it would be useful for those who plan on submitting a response to mark that they'll do so, and perhaps say a little about the contents of their response. It would also be useful for those who don't plan on responding to explain why not.

Replies from: hirosakuraba
comment by HiroSakuraba (hirosakuraba) · 2023-06-02T15:08:47.156Z

The majority of my response is about reducing our systems' exposure to vulnerabilities. As a believer in the power of strong cryptography no matter the intelligence involved, I am going to explain the value of removing or spinning down the NSA/CIA programs of backdoors, zero-day exploits, and intentional cryptographic weaknesses that have been introduced into our hardware and software infrastructure.

comment by Evan R. Murphy · 2023-05-30T18:06:06.526Z

Great idea; we need to make sure there are some submissions raising existential risks.

Deadline for the RFI: July 7, 2023 at 5:00pm ET