Posts

Aligning AI Safety Projects with a Republican Administration 2024-11-21T22:12:27.502Z
AI Model Registries: A Foundational Tool for AI Governance 2024-10-07T19:27:43.466Z
Soft Nationalization: how the USG will control AI labs 2024-08-27T15:11:14.601Z
2024 State of the AI Regulatory Landscape 2024-05-28T11:59:06.582Z
What you really mean when you claim to support “UBI for job automation”: Part 1 2024-05-13T08:52:08.683Z
AI and Chemical, Biological, Radiological, & Nuclear Hazards: A Regulatory Review 2024-05-10T08:41:51.051Z
Reviewing the Structure of Current AI Regulations 2024-05-07T12:34:17.820Z
Open-Source AI: A Regulatory Review 2024-04-29T10:10:55.779Z
Cybersecurity of Frontier AI Models: A Regulatory Review 2024-04-25T14:51:20.272Z
Report: Evaluating an AI Chip Registration Policy 2024-04-12T04:39:45.671Z
AI Discrimination Requirements: A Regulatory Review 2024-04-04T15:43:58.008Z
AI Disclosures: A Regulatory Review 2024-03-29T11:42:10.754Z
AI Model Registries: A Regulatory Review 2024-03-22T16:04:15.295Z
AI Safety Evaluations: A Regulatory Review 2024-03-19T15:05:23.769Z
AI Incident Reporting: A Regulatory Review 2024-03-11T21:03:02.036Z
Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research 2024-03-07T21:37:00.526Z

Comments

Comment by Deric Cheng (deric-cheng) on Soft Nationalization: how the USG will control AI labs · 2024-08-27T23:24:04.119Z · LW · GW

I'd definitely agree with the perspective you're sharing!

Even in a fast-takeoff / total nationalization scenario, I don't think politicians will be so blindsided that the political discussion will go from "regulation as usual" to "total nationalization" in a couple of months. It's possible, but unlikely. 

I think it's equally or more likely that the amount of government involvement will scale up over 2-5 years, and during that time many of these policy levers will be attempted. The success or failure of these levers at achieving US goals will probably determine whether involvement proceeds all the way to "total nationalization".

Comment by Deric Cheng (deric-cheng) on Soft Nationalization: how the USG will control AI labs · 2024-08-27T23:16:04.147Z · LW · GW

Thanks for the feedback, Akash!

Re: whether total nationalization will happen, I think one of our early takeaways here is that ownership of frontier AI is not the same as control of frontier AI, and also that the US government is likely interested in only certain types of control. 

That is, there seem to be a number of plausible scenarios where the US government has significant control over AI applications that involve national security (cybersecurity, weapons development, denial of technology to China), but little-to-no "ownership" of frontier AI labs and relatively less control over commercial / civilian applications of superintelligent AI, thereby achieving its goals with less involvement.

From this perspective, it's a bit more gray what the additional value-add of "ownership" would be for the US government, given the legal / political overhead. It's certainly still possible with significant motivation.

---

One additional thing I'd note is that there's a wide gap between the worldviews of policymakers (focused on current national security concerns, not prioritizing superintelligence scenarios) and AI safety / capabilities researchers (highly focused on superintelligence scenarios, and consequently on total nationalization).

Even if "total / hard nationalization" is the end-state, I think it's quite possible that this gap will take time to close! Political systems & regulation tend to move a lot slower than technological advances. If there's a 3-5 year period during which policymakers are "ramping up" to the same level of concern / awareness as AI safety researchers, I expect some of these policy levers will be pulled during that "ramp-up" period.

Comment by Deric Cheng (deric-cheng) on Soft Nationalization: how the USG will control AI labs · 2024-08-27T22:51:27.210Z · LW · GW

That's a very good point! Technically he's retired, but I wonder how much his appointment is related to preparing for potential futures where OpenAI needs to coordinate with the US government on cybersecurity issues...

Comment by Deric Cheng (deric-cheng) on What you really mean when you claim to support “UBI for job automation”: Part 1 · 2024-05-13T16:10:04.789Z · LW · GW

Totally agree that UBI is equivalent to a negative income tax in many ways! My main argument here is that UBI is an unrealistic policy when you actually try to implement it, whereas NIT produces the same general outcome but is significantly more realistic. If you use the phrase UBI as the "high-level vision" and actually mean "implement it as an NIT" in terms of policy, I can get behind that.
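To make the equivalence concrete, here's a toy sketch of the algebra (the $12,000 grant and 30% rate are hypothetical numbers chosen only to make the identity visible, not figures from the post). A UBI funded by a flat tax produces the same net income at every earnings level as an NIT with the same guarantee and a phase-out rate equal to that tax rate:

```python
# Toy illustration: UBI + flat tax vs. negative income tax (NIT).
# The $12,000 grant and 30% rate are hypothetical, not from the post.

def net_income_ubi(earnings, grant=12_000, tax_rate=0.30):
    """Everyone receives the grant; all earnings are taxed at a flat rate."""
    return earnings + grant - tax_rate * earnings

def net_income_nit(earnings, guarantee=12_000, phaseout=0.30):
    """The guarantee phases out with earnings; past the break-even point
    (guarantee / phaseout = $40,000 here) the 'transfer' turns into a tax."""
    transfer = guarantee - phaseout * earnings
    return earnings + transfer

# Identical net incomes at every earnings level:
for e in (0, 20_000, 40_000, 80_000):
    assert net_income_ubi(e) == net_income_nit(e)
```

The difference is administrative rather than economic: the UBI moves the full gross amount through the system and claws it back via taxes, while the NIT only pays out the net transfer, which is why their headline costs look so different.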

Re: the simplicity idea, repeating what I left in a comment above: 

Personally, I really don't get the "easy to maintain" argument for UBI, especially given my analysis above. You'd rather have a program that costs $4 trillion with zero maintenance costs than a similarly impactful program that costs ~$650 billion with maintenance costs? It's kind of a reductive argument that only makes sense when you don't look at the actual numbers behind implementing a policy idea.

Comment by Deric Cheng (deric-cheng) on What you really mean when you claim to support “UBI for job automation”: Part 1 · 2024-05-13T16:05:05.410Z · LW · GW

Re: "UBI in the context of automation", that's a great point and I can definitely see what you're getting at! The answer is that this is part 1 of a 2-part series - Part 1 is how to implement UBI realistically and Part 2 is how to pay for it. Paying for it is an equally or even more interesting problem. 

Re: penalizing productivity, it's pretty unclear from the research whether NIT actually reduces employment (the main expected side effect of penalizing productivity). Theoretically it should, of course, but the data isn't really conclusive in either direction; there are a bunch of links above.

A modified EITC wouldn't have pressure to dismantle the current welfare system because it's a LOT cheaper than 40% of the US budget.  Adding a pure UBI on top of the existing welfare systems would make redistribution like 70-80% of the US budget, which is a pretty dicey political stance.

Personally, I really don't get the "easy to maintain" argument for UBI, especially given my analysis above. You'd rather have a program that costs $4 trillion with zero maintenance costs than a similarly impactful program that costs ~$650 billion with maintenance costs? It's kind of a reductive argument that only makes sense when you don't look at the actual numbers behind implementing a policy idea.
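For context on where a figure like $4 trillion comes from, a back-of-the-envelope sketch (the population and grant size below are rough illustrative assumptions, not the post's exact inputs):

```python
# Rough gross-cost arithmetic for a universal grant.
# Population and grant size are approximate assumptions for illustration.

POPULATION = 333_000_000   # approximate 2023 US population
GRANT = 12_000             # hypothetical $1,000/month universal grant

ubi_gross_cost = POPULATION * GRANT
print(f"UBI gross cost: ${ubi_gross_cost / 1e12:.1f} trillion per year")
# → UBI gross cost: $4.0 trillion per year
```

An NIT or modified EITC of similar generosity only pays the net transfer to households below the break-even point, which is why its gross cost can land far lower even though the net redistribution is comparable.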