Please help me sense-check my assumptions about the needs of the AI Safety community and related career plans
post by peterslattery · 2023-03-27T08:23:28.359Z · LW · GW · 4 comments
For background and context, see my related series of posts on an approach for AI Safety Movement Building [? · GW]. This is a quick and concise rewrite of the main points in the hope that it will attract better engagement and feedback.
Which of the following assumptions do you agree or disagree with? Follow the links to see some of the related content from my posts.
Assumptions about the needs of the AI Safety community
- A lack of people, inputs, and coordination is one of several issues holding back progress in AI Safety. [EA · GW] Only a small portion of potential contributors are focused on AI Safety, and current contributors face issues such as limited support, resources, and guidance.
- We need more (effective) movement builders to accelerate progress in AI Safety [EA · GW]. Utilising diverse professions and skills [EA · GW], effective movement builders can increase contributors, contributions, and coordination [EA · GW] within the AI Safety community, by starting, sustaining, and scaling useful [EA · GW] projects. They can do so while getting supervision and support from those doing direct work and/or doing direct work themselves [EA · GW].
- To increase the number of effective AI Safety movement builders we need to reduce movement building uncertainty. [EA · GW] Presently, it's unclear who should do what to help the AI Safety Community or how to prioritise between options for movement building. There is considerable disagreement between knowledgeable individuals in our diverse community [? · GW]. Most people are occupied with urgent object-level work, leaving no one responsible for understanding and communicating the community's needs.
- To reduce movement building uncertainty we need more shared understanding. [EA · GW] Potential and current movement builders need a sufficiently good grasp of key variables such as contexts, processes, outcomes, and priorities to be able to work confidently and effectively.
- To achieve more shared understanding we need shared language. [EA · GW] Inconsistencies in vocabulary and conceptualisations hinder our ability to survey and understand the AI Safety community's goals and priorities.
Assumption about the contribution of my series of posts
I couldn't find any foundation of shared language or understanding in AI Safety Movement building to work from, so I created this series of posts [? · GW] to share and sense-check mine as it developed and evolved. Based on this, I now assume:
- My post series offers a basic foundation for shared language and understanding in AI Safety Movement building, which most readers agree with [? · GW]. I haven't received much feedback, but what I have received has generally been supportive. I could be making a premature judgement here, so please share any disagreements you have.
Assumption about career paths to explore
If the above assumptions are valid, then I have i) a good understanding of the AI Safety Community and what it needs, and ii) a basic foundation for shared language and understanding in AI Safety Movement building that I can build on. Given my experience with entrepreneurship, community building, and research, I therefore assume:
- It seems reasonable for me to explore whether I can provide value by using the shared language and understanding to initiate/run/collaborate on projects that help to increase shared understanding and coordination within the AI Safety Community. For instance, this could involve evaluating progress in AI Safety Movement building [EA · GW] and/or surveying the community to determine priorities [EA · GW]. I will do this while doing Fractional Movement Building [EA · GW] (e.g., allocating some of my productive time to movement building and some to direct work/self-education).
Feedback/Sense-checking
Do you agree or disagree with any of the above assumptions? If you disagree then please explain why.
Your feedback will be greatly valued and will help with my career plans.
To encourage feedback, I am offering a bounty. I will pay up to 200 USD in Amazon vouchers, shared via email, to up to 10 people who give helpful feedback on this post or my previous posts in the series by 15/4/2023. I will also consider rewarding anonymous feedback left here (but you will need to give me an email address). I will likely share anonymous feedback if it seems constructive and I think other people will benefit from seeing it.
4 comments
comment by Hoagy · 2023-03-27T10:14:40.157Z · LW(p) · GW(p)
Your first link is broken :)
My feeling with the posts is that given the diversity of situations for people who are currently AI safety researchers, there's not likely to be a particular key set of understandings such that a person could walk into the community as a whole and know where they can be helpful. This would be great but being seriously helpful as a new person without much experience or context is just super hard. It's going to be more like here are the groups and organizations which are doing good work, what roles or other things do they need now, and what would help them scale up their ability to produce useful work.
Not sure this is really a disagreement though! I guess I don't really know what role 'the movement' is playing, outside of specific orgs, other than that it focusses on people who are fairly unattached, because I expect most useful things, especially at the meta level, to be done by groups of some size. I don't have time right now to engage with the post series more fully, so this is just a quick response, sorry!
there is uncertainty -> we need shared understanding -> we need shared language
vs
there is uncertainty -> what are organizations doing to bring individuals with potential together into productive groups making progress -> what are their bottlenecks to scaling up?
comment by peterslattery · 2023-03-28T06:36:02.064Z · LW(p) · GW(p)
Hey Hoagy, thanks for replying, I really appreciate it!
I fixed that link, thanks for pointing it out.
Here is a quick response to some of your points:
My feeling with the posts is that given the diversity of situations for people who are currently AI safety researchers, there's not likely to be a particular key set of understandings such that a person could walk into the community as a whole and know where they can be helpful.
I tend to feel that things could be much better with little effort. As an analogy, consider the difference between trying to pick an AI safety project to work on now, versus before we had curation and evaluation posts like this [AF · GW].
I'll note that those posts seem very useful but they are now almost a year out of date and were only ever based on a small set of opinions. It wouldn't be hard to have something much better.
Similarly, I think that there is room for a lot more of this "coordination work" here and lots of low-hanging fruit in general.
It's going to be more like here are the groups and organizations which are doing good work, what roles or other things do they need now, and what would help them scale up their ability to produce useful work.
This is exactly what I want to know! From my perspective effective movement builders can increase contributors, contributions, and coordination [EA · GW] within the AI Safety community, by starting, sustaining, and scaling useful [EA · GW] projects.
Relatedly, I think that we should ideally have some sort of community consensus-gathering process to figure out what is good and bad movement building (e.g., who are the good/bad groups, and what does the collective set of good groups need).
The shared language stuff and all of what I produced in my post is mainly a means to that end. I really just want to make sure that, before I survey the community to understand who wants what and why, there is some sort of standardised understanding and language about movement building, so that people don't just write it off as a particular type of recruitment done by non-experts without supervision.
comment by peterslattery · 2023-03-30T07:43:45.726Z · LW(p) · GW(p)
Anonymous submission: I have pretty strong epistemics against the current approach of “we’ve tried nothing and we’re all out of ideas”. It’s totally tedious seeing reasonable ideas get put forward, some contrarian position gets presented, and the community reverts to “do nothing”. That recent idea of a co-signed letter about slowing down research is a good example of the intellectual paralysis that annoys me. In some ways it feels built on perhaps a good analytical foundation, but a poor understanding of how humans and psychology and policy change actually work.
comment by peterslattery · 2023-03-28T06:13:31.753Z · LW(p) · GW(p)
Anonymous submission:
I only skimmed your post so I very likely missed a lot of critical info. That said, since you seem very interested in feedback, here are some claims that are pushing back against the value of doing AI Safety field building at all. I hope this is somehow helpful.
- Empirically, the net effects of spreading MIRI ideas seem to be squarely negative, both from the point of view of MIRI itself (increasing AI development, pointing people towards AGI), and from other points of view.
- The view of AI safety as expounded by MIRI, Nick Bostrom, etc. is essentially an unsolvable problem. To put it in words that they would object to, they believe that at some point humanity is going to invent a Godlike machine and this Godlike machine will then shape the future of the universe as it sees fit, perhaps according to some intensely myopic goal like maximizing paperclips. To prevent this from happening, we need to somehow make sure that AI does what we want it to do by formally specifying what we really want in mathematical terms.
The reason MIRI have given up on making progress on this and don't see any way forward is that it is an unsolvable situation.
Eliezer sometimes talks about how the textbook from the future would have simple alignment techniques that work easily, but he is simply imagining things. He has no idea what these techniques might be, and simply assumes there must be a solution to the problem as he sees it.
- There are many possibilities of how AI might develop that don't involve MIRI-like situations. The MIRI view essentially ignores economic and social considerations of how AI will be developed. They believe that the economic advantages of a super AI will lead to it eventually happening, but have never examined this belief critically, or even looked at the economic literature on this very big, very publicly important topic that many economists have worked on.
- A lot of abuse and bad behavior has been justified or swept under the rug in the name of 'We must prevent unaligned AGI from destroying the cosmic endowment'. This will probably keep happening for the foreseeable future.
- People going into this field don't develop great option value.