post by [deleted]

This is a link post for

Comments sorted by top scores.

comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2021-07-09T02:00:04.823Z

How do you intend to avoid creating very many conscious and suffering people?

(and have you read Crystal Nights?)

Replies from: rafaelCosman
comment by rafaelCosman · 2022-02-14T02:26:48.913Z

Hey Zac, I think that's a valid concern. There are various "god powers" we could potentially use to alleviate suffering, but again, that's not a complete solution. I would claim, though, that even given the suffering our universe contains, we should be glad it exists (as opposed to not existing at all).

I suppose this is also related to the debate between negative and classical utilitarians!

comment by Ofer (ofer) · 2021-07-09T07:05:26.959Z

If something along these lines is the fastest path to AGI, I think it needs to be in the right hands. My goal would be, some months or years from now, to get research results that make it clear we're on the right track to building AGI. I'd go to folks I trust, such as Eliezer Yudkowsky/MIRI/OpenAI, and basically say, "I think we're on track to build an AGI, can we do this together and make sure it's safe?" Of course, understanding that we may need to completely pause further capabilities research at some point if our safety team does not give us the OK to proceed.

If you "completely pause further capabilities research", what will stop other AI labs from pursuing that research direction further? (And possibly hiring your now frustrated researchers who by this point have a realistic hope for getting immense fame, a Turing Award, etc.).

Replies from: rafael-cosman-1
comment by Rafael Cosman (rafael-cosman-1) · 2021-07-10T06:33:16.794Z

Valid concern. I would say (1) keep our research results very secret, and (2) hire people who are fairly aligned? But I agree that's not a surefire solution at all.