post by [deleted]


Comments sorted by top scores.

comment by Donald Hobson (donald-hobson) · 2022-06-26T14:26:14.751Z · LW(p) · GW(p)

> Technical AI safety people tend to see AI policy people as the ones who buy infinite time to solve alignment, which would give them a fair shot. From my perspective, it’s more like the opposite; if alignment were to be solved tomorrow, that would give the AI policy people a fair shot at getting it implemented.

Just a reminder that these statements aren't contradictions of each other.

comment by Lone Pine (conor-sullivan) · 2022-06-26T08:22:48.597Z · LW(p) · GW(p)

Well that’s just adorable, seeing little ones work out policy disputes like that. Absolutely heartwarming.

Wait no, wrong post.

(/s)

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2022-06-26T23:32:44.191Z · LW(p) · GW(p)

They're cute, sure, until they aren't.

This post was mainly made with wheel-reinventers [LW · GW] in mind, but it's definitely true that some people run the risk of becoming outright cancerous if they're left in the cold for too long without affordable access to policy knowledge.

comment by Logan Riggs (elriggs) · 2022-06-26T14:13:57.374Z · LW(p) · GW(p)

> From my perspective, it’s more like the opposite; if alignment were to be solved tomorrow, that would give the AI policy people a fair shot at getting it implemented.

I’m unsure what the government can do that DeepMind or OpenAI (or someone else) couldn’t do on their own. Maybe you’re imagining a policy that forces all companies to build aligned AIs according to the solution, but this won’t be perfect, and an unaligned AGI could still kill everyone (or it could be built somewhere else).

The first thing you do with a solution to alignment is build an aligned AGI to prevent all x-risks. I don’t see how routing through the government helps that process(?)

comment by Thomas Kwa (thomas-kwa) · 2022-07-13T04:15:09.743Z · LW(p) · GW(p)

How do I use the flowchart? Are the diamonds statements that people with common misconceptions agree with, and the rectangles what to read to correct those misconceptions?

comment by Eris Discordia (oren-montano) · 2022-09-26T17:24:06.080Z · LW(p) · GW(p)

LOL Rule 8, humans will generally compromise your safety protocols 😜 https://www.lesswrong.com/posts/toK4qHaqjqQsGh4CS/here-be-agi-dragons [LW · GW]

comment by Seth Herd · 2022-09-26T16:18:00.404Z · LW(p) · GW(p)

It seems to me that public outcry could lead to a small but not large delay in developing AGI, and could potentially provide more time and funding for alignment work.

This is a clear and compelling argument for why AI development will not be regulated, and why such regulations would not be honored by the military. I think this argument also applies to AGI.

However, we're more concerned with AGI pursued by private organizations. It seems those could be regulated, and those regulations enforced. It seems to me that the limiting factor is the recognition that the Chinese government is unlikely to honor such an agreement, even if one is made, and unlikely to enforce it on Chinese companies. As far as I can tell, no other government has much likelihood of being first to AGI, even with a delay.

Therefore, I'd guess that Western citizens and governments might put in place regulations that would slow down our development of AGI by a little, but not ones that would slow it down by a lot.

comment by trevor (TrevorWiesinger) · 2022-06-26T03:14:45.128Z · LW(p) · GW(p)

If you have any other basic books to propose, or advanced books, please say so. I will probably read them.