AI Alignment and Recognition

post by Chris_Leong · 2022-04-08T05:39:36.015Z · LW · GW · 2 comments

Let's suppose we succeed in aligning a super-intelligence. We should expect that the super-intelligence will be able to provide a pretty good estimate of how impactful various people's actions were. So maybe there are some people toiling away on AI Safety who feel sad that their efforts aren't being recognised. I guess what I'm saying is that if we succeed, you will be recognised. I'm hoping that at least some people will find this encouraging.

2 comments

Comments sorted by top scores.

comment by Flaglandbase · 2022-04-08T06:27:45.033Z · LW(p) · GW(p)

Is that like when Dr. Who said that in nine hundred years he's never met anyone who wasn't important?

Replies from: Chris_Leong
comment by Chris_Leong · 2022-04-08T06:34:23.206Z · LW(p) · GW(p)

I think most people can make a difference if they really want to and if they're willing to set aside their ego and self-interest[1]. Of course, this would probably require a lot of hard work and some painful admissions about their own zones of competence.

  1. ^

    I don't mean to imply that I am completely successful in this myself.