Practical AI risk I: Watching large compute

post by Gustavo Ramires (gustavo-ramires) · 2022-12-24T13:25:25.848Z · LW · GW · 0 comments


This post lays out a simple, practical strategy for reducing the risk of AI catastrophe.

I have voiced my uncertainty about AI risk (though certainly not dismissal of it) several times in the community. My reasons, briefly summarized:

(1) Technology likely has physical limitations (some theoretical limits are far off, but we don't know whether we'll make much more progress in process technology);

(2) Intelligence isn't magical: there are ways to constrain systems with many sigmas of reliability, even against worst-case adversaries. One example I've given is running a computer inside a Faraday cage with self-contained power.

(3) Thousands of beings more intelligent than any AI today are already improving AI, at a fast but not incomprehensible pace. A human-level AI (or even hundreds of them) could not self-improve much faster than AI currently improves.

(4) Taking over the world takes time. If an AI tries it, the attempt will be visible: perhaps it will be smart enough to divide us (e.g. by attaching itself to conspiratorial thinking, an ideology, or a political party), but if we're monitoring things at even a minimal level, it would be clear that some entity is accruing resources, forms of power, and computation.

(5) Ethics is in a sense rational, logical, and "natural" (in a formal and general sense). Systems learning from human bodies of knowledge should be able to pick up ethical reasoning, and designing truth-seeking and ethical objectives should not be impossible. (Sidenote: I propose we formalize ethics for this goal.)

Point (4) is the one we're going to exploit in this post (hopefully more on (5) in a later post).

We need one or several agencies or bodies watching over large compute resources (with internal access to their infrastructure), checking for abnormal activity, in particular large activity linked to a single network or entity. This is just regular network engineering: you can look at traffic statistics and server rollout statistics, and check who holds the most servers, keeping an eye out for third-party delegation (where someone buys compute on behalf of another). We need laws that allow such representatives/engineers to peer into the system in a privacy-preserving way (i.e. with help from the company's engineers) to check that nothing funny is going on. AI companies in particular should be monitored for autonomous agents, as should any large decision-making AI.
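To make this concrete, here is a minimal sketch (in Python) of the kind of check such a body might run. Everything in it is hypothetical: the record schema (`owner`, `gpu_hours`, `delegated_from`) and the thresholds are made up for illustration, and real monitoring would need much richer data plus privacy-preserving access negotiated with the operator's engineers. The idea is simply to attribute delegated purchases back to the true beneficiary and flag any entity whose share of watched compute, or rate of growth, looks abnormal.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AllocationRecord:
    """One observation of compute allocated to a customer (hypothetical schema)."""
    owner: str                          # who nominally bought the compute
    gpu_hours: float                    # compute used in the observation window
    delegated_from: str | None = None   # set if bought on behalf of another entity

def aggregate_by_ultimate_owner(records: list[AllocationRecord]) -> dict[str, float]:
    """Sum compute per entity, attributing delegated purchases to the true beneficiary."""
    totals: dict[str, float] = defaultdict(float)
    for r in records:
        beneficiary = r.delegated_from or r.owner
        totals[beneficiary] += r.gpu_hours
    return dict(totals)

def flag_anomalies(
    current: dict[str, float],
    previous: dict[str, float],
    share_threshold: float = 0.5,   # flag anyone holding >50% of watched compute
    growth_threshold: float = 3.0,  # flag anyone whose usage more than tripled
) -> list[str]:
    """Return entities whose compute share or growth rate looks abnormal."""
    flagged = []
    total = sum(current.values()) or 1.0
    for entity, hours in current.items():
        share = hours / total
        grew_fast = entity in previous and hours / previous[entity] > growth_threshold
        if share > share_threshold or grew_fast:
            flagged.append(entity)
    return flagged

# Made-up example: "shell-co" buys compute that is really delegated to "agent-x".
last_week = [AllocationRecord("lab-a", 1_000.0), AllocationRecord("agent-x", 500.0)]
this_week = [
    AllocationRecord("lab-a", 1_100.0),
    AllocationRecord("agent-x", 900.0),
    AllocationRecord("shell-co", 2_500.0, delegated_from="agent-x"),
]
print(flag_anomalies(aggregate_by_ultimate_owner(this_week),
                     aggregate_by_ultimate_owner(last_week)))
# -> ['agent-x']
```

Obviously an actual adversary would try to spread purchases across many shell entities, which is exactly why the delegation-tracing step, and legal authority to do it, matters more than the thresholds themselves.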

This looks like the bare minimum of oversight over what our hopeful new allies (AI feels to me like a new species arriving on Earth in 2022) are doing, and how they're going to relate to us and the other lifeforms on this planet.
