What role should LW play in AI Safety?

post by Chris_Leong · 2021-10-04T02:21:53.190Z · LW · GW · 2 comments

This is a question post.

Many people on LW consider AI Safety either the most important issue, or one of the most important issues, that humanity has to deal with. Surprisingly, I've seen very little discussion about how the LW community slots in here. I'm sure that the Lightcone team has discussed this extensively, but very little of that discussion has made it onto the forum. I hope that they write up more of their thoughts at some point so that the community can engage with them, but since so little has been written on this, I'll focus mostly on how I see the topic.

I think a good place to begin would be to list the different ways that the Less Wrong community contributes, or has contributed, towards this project. By the LW community, I mean the broader rationalsphere, although I wouldn't include people who have just posted on LW once or twice without reading it or otherwise engaging with the community:

a) By being the community out of which MIRI arose
b) By persuading a significant number of people to pursue AI safety research either within academia or outside of it
c) By donating money to AI Safety organisations
d) By providing a significant number of recruits for EA
e) By providing an online space in which to explore self-development
f) By developing rationality tools and techniques useful for AI safety (incl. CFAR)
g) By improving communication norms and practices
h) By producing rationalist or rationalist-adjacent intellectuals who persuade people that AI Safety is important
i) By providing a location for discussing and sharing AI Safety research
j) By creating real-world communities that provide for the growth and development of participants
k) By providing people with a real-world community of others who also believe that AI safety is important
l) By providing a discussion space free from some of the political incentives [LW · GW] affecting EA
m) More generally, by approaching the problem of AI safety with a different lens than other concerned communities

Some of these purposes seem to have been better served by the EA community. For example, I expect that the EA community is currently ahead in terms of the following:

a) Building new institutions that focus on AI safety
b) Donating money to AI Safety organisations
c) Recruiting people for AI Safety research

The rationality community may very well be ahead of EA in terms of having produced intellectuals who persuaded people that AI Safety is important, but I would expect EA and the academic community to be more important going forward.

I think that LW should probably focus on the areas where it has a comparative advantage, taking into account our strengths and weaknesses.

I would list the strengths of the LW community compared to EA as the following:

And I would list our weaknesses as:

I would list the strengths of the rationality community compared to the academic AI Safety community as the following:

And I would list our weaknesses as:

Given this situation, how should LW slot into the AI safety landscape?

(I know that the ontology of the post is a bit weird, as there is overlap between LW/EA/Academia, but despite its limitations, I still feel that this frame is useful.)

Answers

2 comments

comment by steven0461 · 2021-10-04T19:34:28.688Z · LW(p) · GW(p)

"By being the community out of which MIRI arose"

I would say the LW community arose out of MIRI.

comment by Chris_Leong · 2021-10-08T17:03:46.418Z · LW(p) · GW(p)

Thanks for pointing this out.