Navigating AI Risks (NAIR) #1: Slowing Down AI

post by simeon_c (WayZ) · 2023-04-14T14:35:40.395Z · LW · GW · 3 comments

This is a link post for https://navigatingairisks.substack.com/p/slowing-down-ai-rationales-proposals


Here’s the first edition (on slowdown) of Navigating AI Risks, a newsletter on AI governance that some colleagues and I are launching.

The newsletter is aimed mostly at policymakers, but it may interest some of you who want to keep up with the ideas circulating in AI governance.

Here's a bullet-point summary of it:

3 comments


comment by tchauvin (timot.cool) · 2023-04-14T15:07:57.731Z · LW(p) · GW(p)

The link in "this is a linkpost for" is not the correct one.

comment by bvbvbvbvbvbvbvbvbvbvbv · 2023-04-15T11:56:36.179Z · LW(p) · GW(p)

Question: what do you think of Chinese officials' stance on LLMs being easily accessible to Chinese citizens? As long as alignment is unsolved, I can imagine China being extremely leery of citizens somehow being exposed to ideas that go against official propaganda (human rights, genocide, etc.).

But my guess is that China can't accept being left out of this race either.

So in the end, China is incentivized to solve alignment, or at least to slow its progress.

Have you thought about any of this? I'm extremely curious about anyone's opinion on the matter.

Replies from: WayZ
comment by simeon_c (WayZ) · 2023-04-16T20:52:40.096Z · LW(p) · GW(p)

Yes, I definitely think that countries with strong deontologies will try harder to solve some narrow versions of alignment than those that tolerate failures.

I find that quite reassuring: it means it's reasonable to focus heavily on the US in our governance approaches.