Comments

Comment by Shiri Dori-Hacohen (shiri-dori-hacohen) on [Closed] Prize and fast track to alignment research at ALTER · 2022-09-21T19:06:26.670Z

This sounds very cool! To clarify, would contributions that are submitted to and/or accepted for publication elsewhere (i.e., in academic venues) be considered for this prize?

Comment by Shiri Dori-Hacohen (shiri-dori-hacohen) on We Choose To Align AI · 2022-01-30T17:11:57.365Z

P.S. I am not a prude, and I use curses quite liberally in my own speech. The problem for me was not the use of coarse language in and of itself, but the fact that it was directed at the reader for no reason whatsoever.

Comment by Shiri Dori-Hacohen (shiri-dori-hacohen) on We Choose To Align AI · 2022-01-30T17:10:13.258Z

Actually, I think you were spot on. The curse was completely uncalled for and not helpful in any way, as I mentioned in this Twitter thread. This was the first email broadcast from LessWrong I ever opened, and it will be the last. Unsubscribed.

Comment by Shiri Dori-Hacohen (shiri-dori-hacohen) on Action: Help expand funding for AI Safety by coordinating on NSF response · 2022-01-21T01:04:24.762Z

AI safety research is receiving very little federal funding at this time and is almost entirely privately funded, AFAIK. I agree with you that NSF funding leads to a field being perceived as more legitimate, which IMO is in fact one of the biggest benefits if we manage to get this through. If you ask me, the AI safety community tends to overplay the perverse incentives in academia and underplay the value of having many, many more (on average) very intelligent people thinking about what is arguably one of the bigger problems of our time. Color me skeptical, but I don't see any universe in which having AI safety research go mainstream is a bad thing.