post by [deleted] · 0 comments

This is a link post for


comment by GeneSmith · 2023-12-30T07:12:15.281Z

In Lex Fridman’s interview with Eliezer Yudkowsky, Eliezer presents no compelling path forward — and paints the future as almost non-existent.

It's worth pointing out that Eliezer's views on the relative hopelessness of the situation do not reflect those of the rest of the field. Nearly everyone else outside of MIRI is more optimistic than he is (though that is of course no guarantee he is wrong).

As an interested observer who has followed the field from a distance for about 6 years at this point, I don't think there has ever been a more interesting time, with more going on, than right now. When I talk to some of my friends who work in the field, many of their agendas sound kind of obvious to me, which is IMO an indication that there's a lot of low-hanging fruit. I don't think you have to be a supergenius to make progress (unless perhaps you're working on agent foundations).

• The probability of doom given the development of AGI, + the probability of solving aging given AGI, nearly equals 1.

I'm not sure I understand what this means. Do you mean "and" instead of "+"? Otherwise the statement is a little vague.

If you consider solving aging a high priority and are concerned that delaying AI might delay such a solution, here are a few things to consider:

  • Probably over a hundred billion people have died building the civilization we live in today. It would be pretty disrespectful to their legacy if we threw all that away at the last minute just because we couldn't wait 20 more years to build a machine god we could actually control. Not to mention all the people who will live in the future if we get this thing right. In the grand scheme of the cosmos, one or two generations is nothing.
  • If you care deeply about this, you might consider working on cryonics both to make it cheaper for everyone and to increase the odds of personality and memory recovery following the revival process.

I live in Scandinavia and see no major political movements addressing these issues (except maybe EA dk?). I'm eager to make an impact but feel unsure about how to do so effectively without dedicating my entire life to AI risk.

One potential answer here is "earning to give": if you have a chance to enter a lucrative career, you can use your earnings from that career to help fund work done by others.

If that's not an option, or doesn't sound like something you'd enjoy, perhaps you could move? There are programs like SERI MATS you could apply to if you're a newcomer to the field of AI safety but have a relevant background in math or computer science (or are willing to teach yourself before the program begins).