Lessons learned from talking to >100 academics about AI safety

post by Marius Hobbhahn (marius-hobbhahn) · 2022-10-10T13:16:38.036Z · LW · GW · 18 comments

Contents

  Executive summary
    Findings
    Takeaways
  Things we/I did - long version
  Findings - long version
    People are open to chatting about AI safety
    I learned something from the discussions
    Intentional vs unintentional harms
    It depends on the career stage
    Misunderstandings and vague concepts
    People dislike alarmism
    People are interested in the technical aspects
    People want to know how they can contribute
    People know that doing AI safety research is a risk to their academic career
    Explain don’t convince
    It has gotten much easier
  Conclusion

I’d like to thank MH, Jaime Sevilla and Tamay Besiroglu for their feedback.

During my Master's and Ph.D. (still ongoing), I have spoken with many academics about AI safety. These conversations include chats with individual PhDs, poster presentations and talks about AI safety. 

I think I have learned a lot from these conversations and expect many other people concerned about AI safety to find themselves in similar situations. Therefore, I want to detail some of my lessons and make my thoughts explicit so that others can scrutinize them.

TL;DR: People in academia seem increasingly open to arguments about risks from advanced AI, and I would genuinely recommend having lots of these chats. Furthermore, I underestimated how much work related to some aspects of AI safety already exists in academia and how often we reinvent the wheel. Messaging matters, e.g. technical discussions got more interest than alarmism, and explaining the problem rather than actively trying to convince someone received better feedback.

Update: here is a link with a rough description of the pitch I used.

Executive summary

I have talked to somewhere between 100 and 200 academics (depending on your definitions) ranging from bachelor students to senior faculty. I use a broad definition of “conversations”, i.e. they include small chats, long conversations, invited talks, group meetings, etc. 

Findings

Takeaways

Things we/I did - long version

In total, depending on how you count, I had between 100 and 200 conversations about AI safety with academics, most of whom were Ph.D. students. 

This is the poster we used. We put it together in ~60 minutes, so don’t expect it to be perfect; we mostly wanted to start a conversation. If you want access to the Overleaf to make adaptations, let me know. Feedback is appreciated.

Findings - long version

People are open to chatting about AI safety

If you present a half-decent pitch for AI safety, people tend to be curious. Even if they find it unintuitive or sci-fi at first, they will usually give you a chance to explain your argument and change their mind. Academics tend to be some of the brightest minds in society, and they are curious and willing to be persuaded when presented with plausible evidence. 

Obviously, that won’t happen every time; sometimes you’ll be dismissed right away or you’ll hear a bad response presented as a silver-bullet answer. But in the vast majority of cases, they’ll give you a chance to make your case and ask clarifying questions afterward. 

I learned something from the discussions

There are many academics working on problems related to AI safety, e.g. robustness, interpretability, control and much more. This literature is often not exactly what people in AI safety are looking for, but it is close enough to be really helpful. In some cases, these resources were also exactly what I was looking for. So just on a content level, I got lots of ideas, tips and resources from other academics. Informal knowledge such as “does this method work in practice?” or “which code base should I use for this method?” is also something you can quickly learn from practitioners. 

Even in the cases where I didn’t learn anything on a content level, it was still good to get some feedback, pushback and scrutiny on my pitch for why AI safety matters. Sometimes I skipped steps in my reasoning and that was pointed out; sometimes I got questions that I didn’t know how to answer, so I had to go back to the literature or think about them in more detail. I think this made both my own understanding of the problem and my explanation of it much better. 

Intentional vs unintentional harms

Most of the people I talked to thought that intentional harm was a much bigger problem than unintended side effects. Most of the arguments around incentives for misalignment, robustness failures, inner alignment, goal misspecification, etc. were new to them and sounded a bit out there. Scenarios like “country X will use AI to create a surveillance state” seemed much more plausible to most. After some back and forth, people usually agreed that the unintended side effects are not as crazy as they originally seemed. 

I don’t think this means that people who care about AI alignment should avoid talking about unintended side effects for instrumental reasons. It mostly implies that you should expect people to have never heard of alignment before, so simple but concrete arguments are most helpful. 

It depends on the career stage

People who were early in their careers, e.g. Bachelor’s and Master’s students, were often the most receptive to ideas about AI safety. However, they are also often far away from contributing to research, so their goals might drastically change over the years. Also, they sometimes lack some understanding of ML, Deep Learning or AI more generally, so it is harder to talk about the details.

PhDs are usually able to follow most of the arguments on a fairly technical level and are mostly interested, e.g. they want to understand more or find out how they could contribute. However, they have often already committed to a specific academic trajectory and thus don’t see a path to contributing to AI safety research without taking substantial risks. 

Post-docs and professors were the most dismissive of AI safety in my experience (with high variance). I think there are multiple possible explanations for this including 
a) most of their status depends on their current research field and thus they have a strong motivation to keep doing whatever they are doing now, 
b) there is a clear power/experience imbalance between them and me and 
c) they have worked with ML systems for many years and are generally more skeptical of everyone claiming highly capable AI. Their lived experience is just that hype cycles die and AI is usually much worse than promised. 

However, this comes from a handful of conversations, and I also talked to some professors who seemed genuinely intrigued by the ideas. So don’t take this as strong evidence.

Misunderstandings and vague concepts

There are a lot of misunderstandings around AI safety, and I think the AIS community has failed to properly explain the core ideas to academics until fairly recently. Therefore, I often encountered confusions such as the belief that AI safety is about fairness, self-driving cars or medical ML. And while these are components of a very wide definition of AI safety and are certainly important, they are not part of the alignment-focused, narrower definition of AI safety. 

Usually, it didn’t take long to clarify this confusion but it mostly shows that when people hear you talking about AI safety, they often assume you mean something very different from what you intended unless you are precise and concrete. 

People dislike alarmism

If you motivate AI safety with X-risk, people tend to think you’re Pascal’s mugging them or that you’re doing it for signaling reasons. I think this is understandable. If you haven’t thought about how AI could lead to X-risk, the default response is that it is probably implausible, and there are also wildly varying estimates of X-risk plausibility within the AI safety community. 

When people claim that humanity is going to go extinct because of nuclear power plants or because of ecosystem collapse from fertilizer overuse, I tend to be skeptical. This is mostly because I can’t think of a detailed mechanism by which either of those leads to actual extinction. If people are unaware of the possible mechanisms of advanced AI leading to extinction, they think you just want attention or don’t do serious research. 

In general, I found it easier just not to talk about X-risk unless people actively asked me to. There are enough other failure modes, ranging from unintended side effects to intentional abuse, that people are already familiar with and that you can use to motivate your research. 

People are interested in the technical aspects

There are many very technical pitches for AI safety that never talk about agency, AGI, consciousness, X-risk and so on. For example, one could argue that current LLMs are already hooked up to the internet, have access to millions of individual machines and are allowed to take actions on those machines.

Most of the time, a pitch like “think about how good GPT-3 is right now and how fast LLMs are getting better; think about where a similar system could be in 10 years; what could go wrong if we don’t understand this system or if it became uncontrollable?” is enough to get an “Oh shit, someone should work on this” reaction, even if it is very simplified. 

People want to know how they can contribute

Once you have conveyed the basic case for why AI safety matters, people tend to be naturally curious about how they can contribute. Most of the time, their current research is relatively far away from most AI safety research and people are aware of that. 

I usually tried to show a path between their research and research that I consider core AI safety research. For example, when people worked on RL, I suggested working on inverse RL or reward design; when people worked on NNs, I suggested working on interpretability. In many instances, this path is a bit longer, e.g. when someone works on a narrow topic in robotics. However, most of the time you can just present many different options, see how they respond, and then talk about the ones they are most excited about.

In general, AI safety comes with lots of hard problems and there are many ways in which people can contribute if they want to.

One pitfall of this strategy is that people sometimes want to get credit for “working on safety” without actually working on safety and start to rationalize how their research is somehow related to safety (I was guilty of this as well at some point). Therefore, I think it is important to point this out (in a nice way!) whenever you spot this pattern. Usually, people don’t actively want to fool themselves but we sometimes do that anyway as a consequence of our incentives and desires. 

People know that doing AI safety research is a risk to their academic career

If you want to get a Ph.D. you need to publish. If you want to get into a good post-doc position you need to publish even more. Optimally, you publish in high-status venues and collect lots of citations. Academics often don’t like this system but they know that this is “how it’s done”.

They are also aware that the AI safety community is fairly small in academia and is often seen as “not serious” or “too abstract”. Therefore, they are aware that working more on AI safety is a clear risk to their academic trajectory. 

Pointing out that the academic AI safety community has gotten much bigger, e.g. through the efforts of Dan Hendrycks, Jacob Steinhardt, David Krueger, Sam Bowman and others, makes it a bit easier, but the risk is still very present. Taking away this fear by showing avenues to combine AI safety with an academic career was often the thing people cared most about. 

Explain don’t convince

When I started talking to people about AI safety some years ago, I tried to convince them that AI safety matters a lot and that they should consider working on it. I obviously knew that this was an unrealistic goal, but the aim was still to “convince them as much as possible”. I think this is a bad framing for two reasons. First, most of your discussions feel like a failure, since people will rarely change their life substantially based on one conversation. Second, I was less willing to engage with questions or criticism because my framing assumed that my belief was correct rather than just my best working hypothesis. 
I think switching this mental model to “explain why some people believe AI safety matters” is a much better approach because it solves the problems outlined above and also feels much more collaborative. I found this framing to be very helpful, both in terms of getting people to care about the issue and in how I felt about the conversation afterward.

I think there is also a vibes-based explanation for this. When you’re confronted with a problem for the first time and the other person actively tries to convince you, it can feel like being bothered by Jehovah’s Witnesses or someone trying to sell you a fake Gucci bag. When the other person explains their arguments to you, you have more agency and control over the situation and “are allowed to” generate your own takeaways. This might seem like a small difference, but I think it matters much more than I originally anticipated. 

It has gotten much easier

I think my discussions today are much more fruitful than they were, e.g., 3 years ago. There are multiple plausible explanations for this: a) I might have gotten better at giving the pitch, b) I’m now a Ph.D. student and thus receive more trust by default, or c) I might just have lowered my standards. 

However, I think there are other factors at work that contribute to these better discussions. First, I think the AI alignment community has actually gotten better at explaining the risks in a more detailed fashion and in the language of the academic community, e.g. with more rigor and less hand-waving. Secondly, there are now some people in academia with academic standing who take these risks seriously and whose work you can refer to in discussions (see above for links). Thirdly, capabilities have gotten good enough that people can actually envision the danger. 

Conclusion

I have had lots of chats with other academics about AI safety. I think academics are sometimes seen as “a lost cause” or “focusing on publishable results” by some people in the AI safety community and I can understand where this sentiment is coming from. However, most of my conversations were pretty positive and I know that some of them made a difference both for me and the person I was talking to. I know of people who got into AI safety because of conversations with me and I know of people who have changed their minds about AI safety because of these conversations. I also have gotten more clarity about my own thoughts and some new ideas due to these conversations. 

Academia is, and will likely remain, the place where a lot of people do their research for the foreseeable future, and it is thus important that the AI safety community interacts with the academic world whenever it makes sense. Even if you personally don’t care about academia, the people who teach the next generation, who review your papers and who set many research agendas should have a basic understanding of why you think AI safety is a cause worth working on, even if they will not change their own research direction. Academia is a huge pool of smart and open-minded people, and it would be really foolish for the AI safety community to ignore that. 

18 comments


comment by Marius Hobbhahn (marius-hobbhahn) · 2023-12-16T20:45:26.135Z · LW(p) · GW(p)

I haven't talked to that many academics about AI safety over the last year but I talked to more and more lawmakers, journalists, and members of civil society. In general, it feels like people are much more receptive to the arguments about AI safety. Turns out "we're building an entity that is smarter than us but we don't know how to control it" is quite intuitively scary. As you would expect, most people still don't update their actions but more people than anticipated start spreading the message or actually meaningfully update their actions (probably still less than 1 in 10 but better than nothing).

comment by Karl von Wendt · 2022-10-11T10:36:23.274Z · LW(p) · GW(p)

Thank you very much for sharing this - it is very helpful to me! I agree that academics, in particular within the EU, but probably also everywhere else, are an underutilized and potentially very valuable resource, especially with respect to AI governance. Your post seems to support my own view that we should be talking about "uncontrollable AI" instead of "misaligned AGI/superintelligence", which I have explained here: https://www.lesswrong.com/posts/6JhjHJ2rdiXcSe7tp/let-s-talk-about-uncontrollable-ai

comment by trevor (TrevorWiesinger) · 2022-10-11T00:03:11.989Z · LW(p) · GW(p)

Was this funded? These sorts of findings are clearly more worthy of funding than 90% of the other stuff I've seen, and most of the remaining 10% is ambiguous.

Forget the "Alignment Textbook from 100 years in the future"; if we had this 6 years ago, things would have gone very differently.

Replies from: marius-hobbhahn
comment by Marius Hobbhahn (marius-hobbhahn) · 2022-10-11T07:46:31.632Z · LW(p) · GW(p)

I don't think these conversations had as much impact as you suggest and I think most of the stuff funded by EA funders has decent EV, i.e. I have more trust in the funding process than you seem to have.  

I think one nice side-effect of this is that I'm now widely known as "the AI safety guy" in parts of the European AIS community and some people have just randomly dropped me a message or started a conversation about it because they were curious.

I was working on different grants in the past but this particular work was not funded. 

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2022-10-11T23:31:37.837Z · LW(p) · GW(p)

I was thinking about this post, not the conversations at all. If this post had existed 6 years ago, your conversations probably would have had much more impact than they did. It's a really good idea for people to know how to succeed instead of fail at explaining AI safety to someone for the first time, even if this was just academics.

This post should be distilled so even more people can see it, and possibly even distributed as zines or a guidebook, e.g. at some of the AI safety events such as in DC.

comment by Collin (collin-burns) · 2022-10-10T21:39:55.092Z · LW(p) · GW(p)

Thanks for writing this up! I basically agree with most of your findings/takeaways. 

In general I think getting the academic community to be sympathetic to safety is quite a bit more tractable (and important) than most people here believe, and I think it's becoming much more tractable over time. Right now, perhaps the single biggest bottleneck for most academics is having long timelines. But most academics are also legitimately impressed by recent progress, which I think has made them much more open to considering AGI than they used to be at least, and I think this trend will likely accelerate over the next few years as we see much more impressive models.

comment by Nisan · 2022-10-11T02:32:59.862Z · LW(p) · GW(p)

Your poster talks about "catastrophic outcomes" from "more-powerful-than-human" AI. Does that not count as alarmism and x-risk? This isn't meant to be a gotcha, I just want to know what counts as too alarmist for you.

Replies from: marius-hobbhahn
comment by Marius Hobbhahn (marius-hobbhahn) · 2022-10-11T07:50:08.571Z · LW(p) · GW(p)

Maybe I should have stated this differently in the post. Many conversations end up talking about X-risks at some point but usually only after it went through the other stuff. I think my main learning was just that starting with X-risk as the motivation did not seem very convincing.

Also, there is a big difference in how you talk about X-risk. You could say something like "there are plausible arguments for why AI could lead to extinction, but even experts are highly uncertain about this" or "we're all gonna die", and the more moderate version seems clearly more persuasive. 

comment by Algon · 2022-10-10T17:34:29.323Z · LW(p) · GW(p)

"... Post-docs and professors were the most dismissive of AI safety in my experience (with high variance)."

What subset of senior academics took you seriously? Do you have data on them, on your talks, etc.? Figuring out the characteristics of these people seems like it could have high returns. 

Replies from: marius-hobbhahn
comment by Marius Hobbhahn (marius-hobbhahn) · 2022-10-10T19:35:52.744Z · LW(p) · GW(p)

I think taking safety seriously was strongly correlated with whether or not they believed AI will be transformative in the near-term (as opposed to being just another hype cycle). But not sure. My sample is too small to make any general inferences. 

comment by jacob_cannell · 2022-10-10T16:30:56.292Z · LW(p) · GW(p)

Current LLMs are already hooked up to the internet, have access to millions of individual machines and are allowed to take actions on those machines

That seems pretty unlikely to me as worded - source?

Replies from: marius-hobbhahn
comment by Marius Hobbhahn (marius-hobbhahn) · 2022-10-10T17:23:11.018Z · LW(p) · GW(p)

This seems to be what https://www.adept.ai/act is working on if I understood their website correctly. They probably don't have a million users yet though so I agree that it is not accurate at the moment. 

Also, https://openai.com/blog/webgpt/ is an LLM with access to the internet, right? 

But yeah, probably an overstatement. 

comment by Gurkenglas · 2022-10-10T14:55:11.073Z · LW(p) · GW(p)

People know that doing AI safety is a risk to their academic career

Would it help to point out that 2040 is unlikely to be a real year, so they should stop having 20-year plans?

Replies from: marius-hobbhahn
comment by Marius Hobbhahn (marius-hobbhahn) · 2022-10-10T15:01:44.337Z · LW(p) · GW(p)

Not really. You can point out that this is your reasoning but whenever you talk about short timelines you can bet that most people will think you're crazy. To be fair, even most people in the alignment community are more confident that 2040 will exist than not, so this is even a controversial statement within AI safety. 

Replies from: Gurkenglas
comment by Gurkenglas · 2022-10-10T15:13:59.461Z · LW(p) · GW(p)

Huh, I misremembered Cotra's update, and wrt that Metaculus question that got retitled due to the AI effect, I can see most people thinking it resolves long before history ends.

comment by harfe · 2022-10-10T15:44:42.911Z · LW(p) · GW(p)

Great post! I agree that academia is a resource that could be very useful for AI safety.

There are a lot of misunderstandings around AI safety, and I think the AIS community has failed to properly explain the core ideas to academics until fairly recently. Therefore, I often encountered confusions such as the belief that AI safety is about fairness, self-driving cars or medical ML.

I think these misunderstandings are understandable based on the term "AI safety". Maybe it would be better to call the field AGI safety or AGI alignment? This seems to me like a more honest description of the field.

You also write that you find it easier to not talk about xrisk. If we avoid talking about xrisk while presenting AI safety, then some misunderstandings about AI safety will likely persist in the future.

Replies from: marius-hobbhahn
comment by Marius Hobbhahn (marius-hobbhahn) · 2022-10-10T17:20:11.532Z · LW(p) · GW(p)

Firstly, I don't think the term matters that much. Whether you use AGI safety,  AI safety, ML safety, etc. doesn't seem to have as much of an effect compared to the actual arguments you make during the conversation (at least that was my impression). 

Secondly, I don't say you should never talk about x-risk. I mostly say you shouldn't start with it. Many of my conversations ended up in discussions of X-risk but only after 30 minutes of back and forth. 

comment by ZY (AliceZ) · 2024-11-09T16:59:39.182Z · LW(p) · GW(p)

I highly agree with almost all of these points, and they are very consistent with my observations. As I am still relatively new to LessWrong, one big observation (based on my experience) I still see today is disconnected concepts, definitions, and/or terminologies relative to the academic language. Sometimes terminology already exists in academia, and introducing new concepts with the same name may be confusing without using the channels academics are used to. There are some terms that I try to search on Google, for example, but the only relevant results are from LessWrong or blog posts (which I still then read personally). I think this is getting better - in one of the recent conference reviews, I saw a significant increase in submissions in AI safety working on X-risks. 

Another point, as you have mentioned, is the reverse ingestion of papers from academia; there are rich papers in interpretability, for example, and some concrete confusion I saw from professors or people already in that field is why there seems to be a lack of connection with these papers or concepts, even though they seem to be pretty related.

About actions - many people I see in my usual professional group who are concerned about AI safety risks are people who are concerned about or working on current intentional risks like misuse. Those are also real risks that have already materialized (CSAM, deepfake porn with real people's faces, privacy, potential bio/chem weapons) and need to be worked on as well. It is hard to stop working on them and transition directly to X-risks.

However, I do think it is beneficial to keep merging the academic and AI safety communities, which I see is already underway with examples like more papers, some PhD positions on AI safety, industry positions, etc. This will increase awareness of AI safety, and as you have mentioned, the interest in the technical parts is shared, as it could potentially be applied to many kinds of safety, and hopefully not so much to capabilities (though the two are sometimes not separable).