post by [deleted]

This is a link post for


Comments sorted by top scores.

comment by ChristianKl · 2022-05-02T01:59:08.185Z · LW(p) · GW(p)

You make it sound like Elon Musk founded OpenAI without speaking to anyone in X-risk. That's not the case. EAs talked with him back then. As far as my memory goes, at least Nick Bostrom tried to explain everything to Musk.

Later, I remember someone speaking about talking to Musk during Neuralink's existence and being disappointed with Musk's inability to follow arguments about why Neuralink is not a good plan for avoiding AI risk.

> If we managed to create a good social signal associated with AI-risk, it would be much easier to persuade policy makers to do something.

The goal isn't to get people "to do something" but to get them to take effective action. If you push AI safety to be something that's about signaling, you are unlikely to get effective action related to it.

Replies from: Dmitry Savishchev
comment by Catnee (Dmitry Savishchev) · 2022-05-02T04:14:55.155Z · LW(p) · GW(p)

Thank you for the reply.

> You make it sound like Elon Musk founded OpenAI without speaking to anyone in X-risk

I didn't know about that; it was a good move from EA, so why not try it again? Again, I'm not saying that we definitely need to make a badge on Twitter. First of all, we can try to change Elon's models, and after that we can think about what to do next.

> Musk's inability to follow arguments about why Neuralink is not a good plan for avoiding AI risk.

Well, if it is conditional on "there are widespread concerns and regulations about AGI" and "Neuralink is working and can significantly enhance human intelligence", then I can clearly see how it would decrease AI risk. Imagine Yudkowsky with significantly enhanced capabilities working with several other AI safety researchers, communicating at the speed of thought. Of course, it would mean that no one else gets their hands on that for a while, and we would need to build it before AGI becomes a thing. But it is still possible, and I can clearly see how anybody in 2016, unable to predict current ML progress, would place their bets on something long-term, like Neuralink.

> If you push AI safety to be something that's about signaling, you are unlikely to get effective action related to it.

If you can't use the signal before you pass "a really good exam that shows your understanding of the topic", why would it be a bad signal? There are exams that haven't fallen that badly to Goodhart's law; for example, you can't pass a test on calculating integrals without actually having good practical skill. My idea for the badge was more like: "trick people into thinking it is easy and that they can get another social signal, then watch them realize the problem after investigating it".

And the whole idea of the post isn't about the "badge"; it's about "talking with powerful people to explain our models to them".

Replies from: abramdemski
comment by abramdemski · 2022-08-30T18:23:06.301Z · LW(p) · GW(p)

> I didn't know about that; it was a good move from EA, so why not try it again?

My low-evidence impression is that there was a fair amount of repeated contact at one time. If it's true that that contact hasn't happened recently, it's probably because it hit diminishing returns in comparison with other things. I doubt people were in touch with Elon and then just forgot about the idea. So I conclude that the remaining disagreements with Elon are probably not something that can be addressed within a short amount of time, and would require significantly longer discussions to make progress on.