Why does LW not put much more focus on AI governance and outreach?

post by Severin T. Seehrich (sts), Benjamin Schmidt (sliqz) · 2025-04-12T14:24:54.197Z · LW · GW · 28 comments

Contents

    1. MIRI's shift in strategy
    2. Even if we solve technical alignment, Gradual Disempowerment seems to make catastrophe the default outcome
    3. We have evidence that the governance naysayers are badly calibrated
  Conclusion

Epistemic status: Noticing confusion

There is little discussion on LessWrong about AI governance and outreach. Meanwhile, these efforts could buy us time to figure out technical alignment. And even if we do figure out technical alignment, we still have to solve crucial governance challenges so that totalitarian lock-in or gradual disempowerment don't become the default outcome of deploying aligned AGI.

Here are three reasons why we think we might want to shift far more resources toward governance and outreach:

1. MIRI's shift in strategy

The Machine Intelligence Research Institute (MIRI), traditionally focused on technical alignment research, has pivoted to broader outreach. They write in their 2024 end-of-year update:

Although we continue to support some AI alignment research efforts, we now believe that absent an international government effort to suspend frontier AI research, an extinction-level catastrophe is extremely likely.

As a consequence, our new focus is on informing policymakers and the general public about the dire situation, and attempting to mobilize a response.

In the same vein, Eliezer Yudkowsky argued as early as 2023 that "Pausing AI Developments Isn't Enough. We Need to Shut it All Down [LW · GW]".

2. Even if we solve technical alignment, Gradual Disempowerment seems to make catastrophe the default outcome

The "Gradual Disempowerment" report warns that even steady, non-hostile advances in AI could gradually erode human influence over key societal systems like the economy, culture, and governance. As AI systems become more efficient and cost-effective, they may increasingly replace human roles, leading institutions to prioritize AI-driven processes over human participation. This transition could weaken both explicit control mechanisms, like democratic participation, and the implicit alignments that have historically ensured societal systems cater to human interests. The authors argue that this subtle shift, driven by local incentives and mutual reinforcement across sectors, could lead to an irreversible loss of human agency, constituting an existential risk. They advocate for proactive measures, including developing metrics to monitor human influence, implementing regulations to limit AI's autonomy in critical areas, and fostering international cooperation to ensure that AI integration doesn't compromise human agency.

This concern is not new; it expands on concerns Paul Christiano voiced as early as 2019 in "What failure looks like [LW · GW]".

3. We have evidence that the governance naysayers are badly calibrated

In a recent interview, Conjecture's Gabriel Alfour reports that Control AI simply cold-emailed British MPs and Lords, got 60 meetings, and had 20 of them sign a statement to take extinction risks from AI seriously. That's a 33% conversion rate - without an existing network in politics to draw on. This came as a great surprise to the AI safety people he told about it.

Conclusion

Given these considerations, we find it surprising that LessWrong still barely addresses governance and outreach. We wonder whether it makes sense to throw far more resources at governance and outreach than is currently happening.

Or is as much effort going into governance as into technical alignment, and we just haven't managed to find the platforms where the relevant conversations happen?

28 comments


comment by habryka (habryka4) · 2025-04-14T21:30:31.640Z · LW(p) · GW(p)

IDK, like 15% of submissions to LW seem currently AI governance and outreach focused. I don't really get the premise of the post. It's currently probably tied for the most popular post category on the site (my guess is a bit behind more technical AI Alignment work, but not by that much).

Here is a quick spreadsheet where I classify submissions in the "AI" tag by whether they are substantially about AI Governance: 

https://docs.google.com/spreadsheets/d/1M5HWeH8wx7lUdHDq_FGtNcA0blbWW4798Cs5s54IGww/edit?usp=sharing 

Overall results: 34% of submissions under the AI tag seem pretty clearly relevant to AI Governance or are about AI governance, and an additional 12% are "maybe". In other words, there really is a lot of AI Governance discussion on LW.

Replies from: sts
comment by Severin T. Seehrich (sts) · 2025-04-15T01:11:23.298Z · LW(p) · GW(p)

Huh, interesting. We seem to mean different things by the term "policy" there. Going by the titles alone, I'd only have counted lines 13, 18-20 as "yes" and 33 as "maybe". So 12-15%. 9-13% if I exclude this post, which was made after observing the ratio.

Still curious what percentage they make up of total submissions, not only those under the AI tag.

Replies from: habryka4
comment by habryka (habryka4) · 2025-04-15T01:27:37.856Z · LW(p) · GW(p)

Yeah, I don't think the classification is super obvious. I categorized all things that had AI policy relevance, even if not directly about AI policy.

23 is IMO also very unambiguously AI policy relevant (ignoring quality). Zvi's analysis almost always includes lots of AI policy discussion, so I think 33 is also a clear "Yes". The other ones seem like harder calls.

Sampling all posts also wouldn't be too hard. My guess is you get something in the 10-20%-ish range of posts by similar standards.

comment by Charbel-Raphaël (charbel-raphael-segerie) · 2025-04-13T07:23:50.708Z · LW(p) · GW(p)

Strong upvoted. I consider this topic really important.

My guess is that most of the reasons are historical ones that shouldn't hold today. In the past, politics was the mind-killer on this platform, and it might still be, but progress can be made, and I think this progress is almost necessary for us to be saved:

  • The AI Act and its code of practice
  • SB1047 was very close to being a major success
  • The Seoul Summit during which major labs committed to publishing their safety and security frameworks

What's the plan otherwise? Have a pivotal act from OpenAI, Anthropic, or Google? I don't want this approach; it seems completely undemocratic honestly, and I don't think it's technically feasible.

I think the good Schelling point is a treaty of non-development of superintelligence (as advocated at aitreaty.org or this one [LW · GW]). That's the only reasonable option.

I think the real argument is that there are very few technical people willing to reconsider their careers, or they don't know how to do it, or there isn't enough training available. Beyond entry-level courses like BlueDot or AI Safety Colab, good advanced training is limited. Only Horizon Institute, Talos Network, and MATS (which accepts approximately 10 people per cohort), plus ML4Good (which is soon transitioning from a technical bootcamp to a governance one), offer resources to become proficient in AI Governance.

Here's more detail on my position: https://x.com/CRSegerie/status/1907433122624622824

Happy to have a dialogue on the subject with someone who disagrees.

Replies from: katalina-hernandez
comment by Katalina Hernandez (katalina-hernandez) · 2025-04-14T12:12:27.048Z · LW(p) · GW(p)

@Charbel-Raphaël [LW · GW] - since you've mentioned the European AI Act:

Did you know that it actually mentions "alignment with human intent" as a key factor for regulation of systemic risks?

I do not know of any other law that frames alignment this way and makes it a key impact area. 

It also mentions alignment as part of the Technical documentation that AI developers must make publicly available.

I feel like this already merits acknowledgment by this community. It can enable research (and funding) if cited correctly by universities and non-profits in Europe.

comment by mako yass (MakoYass) · 2025-04-13T00:23:02.537Z · LW(p) · GW(p)

Rationalist discourse norms require a certain amount of tactlessness, saying what is true even when the social consequences of saying it are net negative. Politics (in the current arena) requires some degree of deception or at least complicity with bias (lies by omission, censorship/nonpropagation of inconvenient counterevidence).

Rationalist forum norms essentially forbid speaking in ways that're politically effective. Those engaging in political outreach would be best advised to read lesswrong but never comment under their real name. If they have good political instincts, they'd probably have no desire to.

It's conceivable that you could develop an effective political strategy in a public forum under rationalist discourse norms, but if it is true it's not obviously true, because it means putting the source code of a deceptive strategy out there in public, and that's scary.

Replies from: sliqz
comment by Benjamin Schmidt (sliqz) · 2025-04-13T02:49:34.567Z · LW(p) · GW(p)

I don't think that effective politics in this case requires deception, and deception often backfires in unexpected ways.

Gabriel and Connor suggest in their interview that radical honesty - genuinely trusting politicians, advisors, and average people to understand your argument, and recognizing that they also don't want to die from ASI - can be remarkably effective. The real problem may be that this approach is not attempted enough. I remember this as a slightly less positive but still positive data point: https://www.lesswrong.com/posts/2sLwt2cSAag74nsdN/speaking-to-congressional-staffers-about-ai-risk [LW · GW]

> If they have good political instincts, they'd probably have no desire to.

I can see that critique. I can also see something in the opposite direction where there is a giant "ugh field" around politics which we can dissolve for a lot of people who could be active in the space. We can both be honest and effective.

Luckily, we are in a world where most people already don't like AI: they don't like transhumanist ideas where they get killed to be replaced by AI, they don't like the prospect of getting killed in an ASI race, and they instinctively think intelligence smarter than them is dangerous.

Building the necessary coordination becomes significantly harder when deception is involved. MIRI's public strategy serves as a reference:

1. Many other organizations are attempting the coalition-building, horse-trading, pragmatic approach. In private, many of the people who work at those organizations agree with us, but in public, they say the watered-down version of the message. We think there is a void at the candid end of the communication spectrum that we are well positioned to fill.
2. We think audiences are numb to politics as usual. They know when they’re being manipulated. We have opted out of the political theater, the [kayfabe](https://en.wikipedia.org/wiki/Kayfabe), with all its posing and posturing. We are direct and blunt and honest, and we come across as exactly what we are.
3. Probably most importantly, we believe that “pragmatic” political speech won’t get the job done. The political measures we’re asking for are a big deal; nothing but the full unvarnished message will motivate the action that is required.
 

Replies from: MakoYass
comment by mako yass (MakoYass) · 2025-04-14T01:16:17.618Z · LW(p) · GW(p)

In watching interactions with external groups, I'm... very aware of the parts of our approach to the alignment problem that the public, ime, due to specialization being a real thing, actually cannot understand, so success requires some amount of uh, avoidance. I think it might not be incidental that the platform does focus (imo excessively) on more productive, accessible common enemy questions like control and moratorium, ahead of questions like "what is CEV and how do you make sure the lead players implement it". And I think to justify that we've been forced to distort some of our underlying beliefs about how relatively important the common enemy questions still are relative to the CEV questions.

I'm sure that many at MIRI disagree with me on the relative importance of those questions, but I'm increasingly suspecting it's not because they understand something about the trajectory of AI that I don't, and that it's really because they've been closer to the epicenter of an avoidant discourse.

In my root reply I implied that lesswrong is too open/contrarian/earnest to entertain that kind of politically expedient avoidance; on reflection, I don't think that could ever have been true[1]. I think some amount of avoidance may have been inside the house for a long time.

And this isn't a minor issue, because I'm noticing that most external audiences, when they see us avoiding those questions, freak out immediately, assume we're doing it for sinister reasons (which is not the case[2], at least so far!), and then start painting their own monsters into that void.

It's a problem you might not encounter much as long as you can control the terms of the conversation, but as you gain prominence, you lose more and more control over the kinds of conversations you have to engage in, the world will pick at your softest critical parts. And from our side of things it might seem malicious for them to pick at those things. I think in earlier cases it has been malicious. But at this point I'm seeing the earnest ones start to do it too.

  1. ^

    "Just Tell The Truth" wasn't ever really a principle anyone could implement. Bayesians don't have access to ultimate truths, ultimate truths are for logical omnisciences, when bayesians talk to each other, the best we can do is convey part of the truth. We make choices about which parts to convey and when. If we're smart, we limit ourselves to conveying truths that we believe the reader is ready to receive. That inherently has a lot of tact to it, and looking back, I think a worrying amount of tact has been exercised.

  2. ^

    The historical reasons were good: generalist optimizers seemed likelier as candidates for the first superintelligences, and the leading research orgs all seemed to be earnest utopian cosmopolitan humanists. I can argue that the first assumption is no longer overwhelmingly likely (shall I?), and the latter assumption is obviously pretty dubious at this point.

Replies from: sts
comment by Severin T. Seehrich (sts) · 2025-04-14T10:01:24.112Z · LW(p) · GW(p)

So you think the alignment problem is solvable within the time we appear to have left? I'm very sceptical about that, and that makes me increasingly prone to believe that CEV, at this point in history, genuinely is not a relevant question. Which appears to be a position a number of people in PauseAI hold.

Replies from: MakoYass, sts
comment by mako yass (MakoYass) · 2025-04-14T18:48:28.194Z · LW(p) · GW(p)

I'm saying they (at this point) may hold that position for (admirable, maybe justifiable) political rather than truthseeking reasons. It's very convenient. It lets you advocate for treaties against racing. It's a lovely story where it's simply rational for humanity to come together to fight a shared adversary and in the process somewhat inevitably forge a new infrastructure of peace (an international safety project, which I have always advocated for and still want) together. And the alternative is racing and potentially a drone war between major powers and all of its corrupting traumas, so why would any of us want to entertain doubt about that story in a public forum?

Or maybe the story is just true, who knows.

(no one knows, because the lens through which we see it has an agenda, as every loving thing does, and there don't seem to be any other lenses of comparable quality to cross-reference it against)


To answer: a rough outline of my argument for tractability: optimizers are likely to be built first as cooperatives of largely human imitation learners, and techniques to make them incapable of deception seem likely to work, which would basically solve the whole safety issue. This has been kinda obvious for like 3 years at this point, and many here haven't updated on it. It doesn't take P(Doom) to zero, but it does take it low enough that the people in government who make decisions about AI legislation, and a certain segment of the Democratic base[1], are starting to wonder if you're exaggerating your P(Doom), and why that might be. And a large part of the reasons you might be doing that are things they will never be able to understand (CEV), so they'll paint paranoia into that void instead (mostly they'll write you off with "these are just activist hippies"/"these are techbro hypemen" respectively, and eventually it could get much more toxic: "these are sinister globalists"/"these are omelasian torturers").

  1. ^

    All metrics indicate that it's probably small but for some reason I encounter this segment everywhere I go online and often in person. I think it's going to be a recurring pattern. There may be another democratic term shortly before the end.

Replies from: sts
comment by Severin T. Seehrich (sts) · 2025-04-14T19:24:07.795Z · LW(p) · GW(p)

Huh, that's a potentially significant update for me. Two questions:

1. Can you give me a source for the claim that making the models incapable of deception seems likely to work? I managed to miss that so far.

2. What do you make of Gradual Disempowerment? Seems to imply that even successful technical alignment might lead to doom.

comment by Severin T. Seehrich (sts) · 2025-04-14T11:27:01.352Z · LW(p) · GW(p)

On a more general note, it's certainly possible that I vastly overestimate how good the median LessWronger would be at presenting the case for halting AI progress to non-rationalists.

After all, I've kept up considerable involvement with my normie family and non-rationalist communities over the past years and put a bunch of skill points into bridging the worlds. To the point that by now, I find it easier to navigate leftist than rationalist spaces despite my more gray tribe politics - because I know the local norms from the olden days, and expect leftists to be more fluent at guess culture so I don't need to verbalize so many things. In addition, I'm unusually agnostic on the more controversial LW pet topics like transhumanism compared to others here.

At the same time, having constructive conversations with normies is a learnable skill. I suspect that many LWers have about as much learned helplessness around that as I had two or three years ago. I admit that it might make sense for super technical people to stay in their lane and just keep building on their existing skill trees. Still, I suspect that for more rationalists than are currently doing it, investing more skill points into being normie-compatible and helping with Control AI-style outreach might be a high-leverage thing to do.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2025-04-14T19:10:25.119Z · LW(p) · GW(p)

I'm also hanging out a lot more with normies these days and I feel this.

But I also feel like maybe I just have a very strong local aura (or like, everyone does, that's how scenes work) which obscures the fact that I'm not influencing the rest of the ocean at all.

I worry that a lot of the discourse basically just works like barrier aggression in dogs. When you're at one of their parties, they'll act like they agree with you about everything; when you're seen at a party they're not at, they forget all that you said and start baying for blood. Go back to their party, and they stop. I guess in that case, maybe there's a way of rearranging the barriers so that everyone comes to see it as one big party. Ideally, make it really be one.

comment by Chris_Leong · 2025-04-13T04:13:07.332Z · LW(p) · GW(p)

There was talk before about creating a new forum for AI policy discussion, and honestly, I suspect that would be a better idea. Policy folks would be pretty reluctant to comment here because it doesn't really match their vibe, and also because of how it could be painted by bad-faith actors.

Replies from: katalina-hernandez
comment by Katalina Hernandez (katalina-hernandez) · 2025-04-14T12:06:44.886Z · LW(p) · GW(p)

The siloed approach really worries me, though. What good is policy if it doesn't reflect the technical reality of what it regulates?

And even if we solve alignment in the short term, how do we make it implementable by as many institutions as possible, without policy?

If we had a separate policy forum, would you be willing to also comment there, if only to keep us accountable?

My fear is that policy folks (mainly non-technical people) do not get the main problems right, and there's no one to offer a technical safety perspective. 

Replies from: Chris_Leong
comment by Chris_Leong · 2025-04-14T12:23:07.667Z · LW(p) · GW(p)

Yeah, I definitely have worries about this as well. Nonetheless, I would prefer for discussion to happen somewhere rather than nowhere at all.

I might comment there, but it's hard to know how busy I'll be.

Replies from: katalina-hernandez
comment by Katalina Hernandez (katalina-hernandez) · 2025-04-14T12:39:09.563Z · LW(p) · GW(p)

Of course! But it's good to know that we wouldn't be completely siloed :).

comment by WillPetillo · 2025-04-12T22:31:19.322Z · LW(p) · GW(p)

Selection bias.  Those of us who were inclined to consider working on outreach and governance have joined groups like PauseAI, StopAI, and other orgs.  A few of us reach back on occasion to say "Come on in, the water's fine!"  The real head-scratcher for me is the lack of engagement on this topic.  If one wants to deliberate on a much higher level of detail than the average person, cool--it takes all kinds [LW · GW] to make a world.  But come on, this is obviously high stakes enough to merit attention.

comment by Screwtape · 2025-04-14T13:26:45.966Z · LW(p) · GW(p)

It might be useful for you to taboo "LessWrong" at least briefly.

I have a spiel that may turn into a post someday about how communities aren't people, the short version being that if you ask "why doesn't the community do X?" the answer is usually that no individual in the community took it upon themselves to be the hero. Other times, someone did, but the result didn't look like the community doing X; it looks like individuals doing X.

Is the question "why does the average user on this website not put much more focus on AI Governance and outreach?" Half of LessWrong users don't make their own posts, just comment or lurk. Yes, they could do more, but I could say that of Twitter users too.

Is the question "why does the formal organization behind this website not put much more focus on AI Governance and outreach?" They just helped put out ai-2027.com. They run Alignment Forum. They host conferences like The Curve. Yes, they could do more, but at some point you look at the employee hours they have and what projects they've done and it balances out.

Is the question "why do the most prominent users on this website not put much more focus on AI Governance and outreach?" By karma, the top ten users are Eliezer Yudkowsky, Gwern, Raemon, John Wentworth, Kaj Sotala, Zvi, Scott Alexander, Wedrifid, Habryka, and Vaniver. I think Eliezer, Scott, and Zvi do lots of outreach actually, and it's not like John and Vaniver do none. Yes, they could do more Governance and outreach, but- no, wait, I take it back, I don't think Zvi realistically has much more marginal AI outreach he can do. Let the man rest; I think his keyboard is smoking from all that typing, and his children miss him while he's away in the posting mines.

I'm not exactly a disinterested observer here. I do a lot of rationality outreach, and I make a deliberate choice not to push an AI angle because I think that would be worse both for AI and for my sub-branch of the rationality community. If your argument is that more people on the margin should do AI governance and outreach, especially if they have a comparative advantage, sure, I'll agree with that. If you think the front page of LessWrong should be completely full of AI discussion, I disagree, with the core of my disagreement stemming from The Common Interest of Many Causes. [LW · GW]

Replies from: sts, katalina-hernandez
comment by Severin T. Seehrich (sts) · 2025-04-14T14:18:33.163Z · LW(p) · GW(p)

Good catch! My implicit question was about what ends up on the frontpage, i.e. some mix of versions 1 and 3. A friend of mine answered the sociological side of that question to my satisfaction: many of the most competent people have already pivoted to governance/outreach. But they don't have much use for in-group signalling, so they have far fewer posts on the frontpage than others.

Replies from: Screwtape
comment by Screwtape · 2025-04-14T15:19:26.197Z · LW(p) · GW(p)

Frontpage is mostly what the admins and mods think is worth frontpaging, plus what users upvote. It's also a positional good: there can only be so many things on the front page. This is a more specific and useful question though! Yeah, if the LW team frontpaged more AI governance and less of everything else, and the average user upvoted more AI governance and less of everything else, the frontpage would have more AI governance on it. I wouldn't be a fan [LW · GW], but I'd understand the move if that was the goal. My understanding is that's not the goal.

Not having a use for in-group signaling seems accurate but maybe overly cynical or something? I think it's that having lots of posts on LessWrong is not a constructive part of their plan. Look at Situational Awareness and ai-2027: great writing, great outreach, obviously applicable to governance. Would either of those have been better as LessWrong posts? I think no; they're more impactful as freestanding websites with a short URL and a convenient PDF button.

What's the actual game plan, and what intervening steps benefit from the average LessWrong reader knowing the information you want to tell them or having a calling card that leads the right LessWrong readers to reach out to you? Look at the rash of posting around SB-1047, particularly the PauseAI leader's post [LW · GW]. There's a game plan that benefited from a bunch of LessWrong readers knowing some extra information.

comment by Katalina Hernandez (katalina-hernandez) · 2025-04-14T14:04:31.283Z · LW(p) · GW(p)

I would argue that it is people in AI Governance (the corporate "Responsible AI" kind) who should also make an effort to learn more about AI Safety. I know, because I am one of them, and I do not know of many others who have AI Safety as a key research topic on their agenda.

I am currently working on resources to improve AI Safety literacy amongst policy people, tech lawyers, compliance teams etc. 

Stress-Testing Reality Limited | Katalina Hernández | Substack

My question to you is: any advice for the rare few in AI Governance who are here? I sometimes post in the hope of getting technical insights from AI Safety researchers. Do you think it's worth the effort?

Replies from: Screwtape
comment by Screwtape · 2025-04-14T14:54:57.762Z · LW(p) · GW(p)

I don't have the technical AI Safety skillset myself. My guess is to show up with specific questions if you need a technical answer, try to make a couple of specific contacts you can run big plans past or reach out to if you unexpectedly get traction, and use your LessWrong presence to establish a pointer to you and your work so people looking for what you're doing can find you. That seems worthwhile. After that, maybe crosspost when it's easy? Zvi might be a good example, since it's relatively easy to crosspost between LessWrong and Substack, though he's closer to keeping up with incoming news than to building resource posts for the long term.

If I type "lawyer AI safety" into LessWrong's search, your post comes up, which I assume is something you want.

Replies from: katalina-hernandez
comment by Katalina Hernandez (katalina-hernandez) · 2025-04-14T15:05:47.272Z · LW(p) · GW(p)

Thank you very much for your advice! Actually helps, and thanks for running that search too :).

comment by testingthewaters · 2025-04-12T20:19:20.900Z · LW(p) · GW(p)

The simple answer is related to the population and occupation of the modal LessWrong viewer, and hence the modal LessWrong commenter and upvoter. The site culture also tends towards skepticism of and pessimism about institutions (I do not make a judgement on whether this valence is justified). However, I also agree that this is important to at least discuss.

comment by Katalina Hernandez (katalina-hernandez) · 2025-04-14T12:02:14.053Z · LW(p) · GW(p)

You voiced the same concern I have; I am very grateful for this!

Yes, politics and regulations are not the focus of most LessWrongers. But we wouldn't have the advances that we're seeing without the contributions of those who care.

For example: How is it possible that, for the first time, "alignment with human intent" has been explicitly mentioned in a law, framed as a key concern for the regulation of systemic risks, and most people in this community do not know?

This is a massive victory for the AI Safety / alignment community. 

European Artificial Intelligence Act, Recital 110

The full text of the recital is very long. It describes what the law understands "systemic risks" to be, and how they can impact society.

See the full text here: https://artificialintelligenceact.eu/recital/110/

Here is where alignment (with the meaning that this community gives it) is mentioned:

"International approaches have so far identified the need to pay attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent".

The EU AI Act also mentions alignment as part of the Technical documentation that AI developers must make publicly available.

I am particularly concerned about your point "3. We have evidence that the governance naysayers are badly calibrated".

Last month, I attended a meeting held by a European institution tasked with drafting the General Purpose AI Codes of Practice (the documents that companies like OpenAI can use to "prove compliance" with the law).

As the document puts a lot of emphasis on transparency, I asked the panel a question about incentivizing mechanistic interpretability.

The majority of the experts didn't know what I was talking about and had never heard of such a thing as "mechanistic interpretability"...

This was a personal wake-up call for me, as a lawyer and AI Safety researcher.

@Severin T. Seehrich [LW · GW], @Benjamin Schmidt [LW · GW]- feel free to connect separately if you want! I am creating resources for AI Governance professionals to gain better understanding of AI Safety, and its potential to inform policymakers and improve the regulatory landscape.

comment by Gabriel Alfour (gabriel-alfour-1) · 2025-04-14T12:05:40.222Z · LW(p) · GW(p)

Happy to have a public conversation on the topic. Just DM me on Twitter if you are interested.

comment by Beckeck · 2025-04-12T19:36:28.223Z · LW(p) · GW(p)

upvoted for topic importance.