AXRP Episode 38.4 - Shakeel Hashim on AI Journalism

post by DanielFilan · 2025-01-05T00:20:05.096Z · LW · GW · 0 comments

Contents

  The AI media ecosystem 
  Why not more AI news? 
  Disconnects between journalists and AI researchers 
  Tarbell 
  The Transformer newsletter 

YouTube link

AI researchers often complain about the poor coverage of their work in the news media. But why is this happening, and how can it be fixed? In this episode, I speak with Shakeel Hashim about the resource constraints facing AI journalism, the disconnect between journalists’ and AI researchers’ views on transformative AI, and efforts to improve the state of AI journalism, such as Tarbell and Shakeel’s newsletter, Transformer.

Daniel Filan (00:09): Hello everyone. This is one of a series of short interviews that I’ve been conducting at the Bay Area Alignment workshop, which is run by FAR.AI. Links to what we’re discussing, as usual, are in the description. A transcript is, as usual, available at axrp.net. And as usual, if you want to support the podcast, you can do so at patreon.com/axrpodcast. Well, let’s continue to the interview.

Daniel Filan (00:28): I’m now chatting with Shakeel Hashim. Hello Shakeel.

Shakeel Hashim (00:32): Hi.

Daniel Filan (00:33): So for people who don’t know who you are, can you say a little bit about what you do?

Shakeel Hashim (00:37): So I work at Tarbell, which is a nonprofit that supports high-quality AI journalism. I am a grants director there, and I am also a journalist in residence. So I do my own AI journalism through Transformer, which is a weekly newsletter that summarizes AI news. And then I also do my own reporting and analysis and commentary.

Daniel Filan (01:04): Before we really dig into it: we’re currently at this Alignment workshop being run by FAR.AI. How are you finding it?

Shakeel Hashim (01:10): Yeah, super interesting. I’m mostly focused on AI policy stuff in my day-to-day work, so I spend less time on the technical side. And the thing I found really interesting here is meeting lots of more technical researchers, getting a sense of what they’re up to, what their focuses are. Yeah, it’s super interesting.

The AI media ecosystem

Daniel Filan (01:31): So I guess you’re in a better position than most to talk about the AI media ecosystem. I’m wondering: what’s your high-level take about it?

Shakeel Hashim (01:43): Probably two things. I think number one is that there aren’t nearly as many resources going into AI journalism as there ought to be, given the scale of the topic and its potential impact and importance. The second is that I think there’s still quite a big disconnect between what journalists think about AI and what people in the industry think about AI. Some of that is very warranted; it’s the job of journalists to be skeptical. But I worry sometimes that if journalists don’t engage a little bit more with the ideas that are held by lots of AI researchers, they might not be able to keep up with what’s happening.

Why not more AI news?

Daniel Filan (02:34): I guess it’s kind of a strange situation. So my understanding is that in a lot of industries there is news for that industry. So for instance, animal farming, right? My understanding is that there’s “Pig: the Newsletter”, and every week, if you’re a farmer, you can get the newsletter about pork farming and it’ll just have a bunch of stats about pork farming, written roughly from the perspective of people who are very into the pork farming scene. I assume that Bloomberg does something sort of similar with finance, or at least the Terminal. In some sense, naively it might seem surprising that there wouldn’t be more AI news journalism. Do you have a feel for why that is?

Shakeel Hashim (03:26): There is a lot. There is more than I would, I think, have expected a couple of years ago. It’s mostly concentrated in tech publications, that’s where it lives - although lots of national news desks now have AI reporters, which is great to see. The New York Times has some people dedicated to this, Washington Post, Wall Street Journal. I think it’s still not on the scale I would like it to be. And when I talk to AI reporters, the impression I get is that there is so much more they’d like to do than they can do just because there’s so much happening in AI all the time that there aren’t enough people and there isn’t enough time in the day to cover everything you’d want to cover.

Shakeel Hashim (04:15): The reasons for that… I think the main one is the economic state of the journalism industry. It’s not a great time to be in journalism. There are all sorts of structural reasons why the industry is struggling. So there just simply aren’t enough resources to be able to put into this kind of stuff. I also think that the pace of change here is somewhat unique, and though media organizations are responding, it’s hard to respond as quickly as things are changing, I think. It takes time to build up an AI desk.

Shakeel Hashim (05:01): As for why there aren’t trade publications, that’s a good question. I mean, there are tech trade publications. The Information is one -

Daniel Filan (05:08): which does do AI reporting -

Shakeel Hashim (05:10): does a lot of AI reporting. I mean, there haven’t really been… actually, I don’t know if this is true. I want to say that there haven’t been trade publications for a bunch of stuff in the tech industry. You definitely see them around more semiconductor stuff and hardware. For software, I feel like less so. I’m not entirely sure why that is. I think it’s because the tech media just kind of fills that role.

Daniel Filan (05:42): Right. So what do you mean when you say the tech media?

Shakeel Hashim (05:45): So organizations like the Verge, Tech Crunch, Ars Technica, Venture Beat, and then the tech sections at all the big news outlets.

Daniel Filan (06:04): I guess some of this might just be because if you work in tech, there’s a good chance that you’re also a tech consumer. And so I see the Verge and Ars Technica as being stuff for tech enthusiasts or people who want to talk about the newest iPhone or the newest laptop. And if that’s the same people who are working in the industry, maybe it is just sort of under that umbrella. Do you think that could be it?

Shakeel Hashim (06:27): Yeah, I think there definitely is something there. I think there is also a thing where tech has become so important that it almost outgrows the need for a more specialized industry covering it, if that makes sense. So for instance, going back to your analogy of farming, there isn’t enough demand for there to be multiple full-time agriculture reporters at the New York Times. Tech is big enough and important enough that there is demand for there to be loads. I don’t know how many tech reporters the New York Times has, but it’s a lot. And so it gets subsumed into the more traditional media, kind of in the way politics does.

Disconnects between journalists and AI researchers

Daniel Filan (07:18): So the second thing you said about your high-level take on the industry is that the journalism community was disconnected from the beliefs of the people working in the field in a way that you thought was detrimental. What specific disagreements are you thinking of?

Shakeel Hashim (07:34): So I think the big one is the notion of transformative AI or AGI or whatever you want to call it: extremely powerful AI that can do all or very close to all of what a human can do. I think in the industry there is a pretty strong sense that this is possible and imminent. That’s one thing I found in conversations here. You’ve got people talking about the possibility of us having this by the end of next year - not that that’s likely, but that there is a non-negligible chance of that happening.

Shakeel Hashim (08:17): I think in the journalism community, those claims are still very… I think most people really don’t buy that as an idea. I think people are very, very skeptical that this is possible at all, and certainly skeptical that it’s imminent. And I think that’s a very justified skepticism because lots of technologists have made these claims over the years: that their technology will change the world. And lots of the times it is bullshit. So I get why you would be skeptical, but I think the difficulty arises in that if these claims are true, if the AI companies and AI researchers are right, really crazy stuff is going to start happening.

Shakeel Hashim (09:12): And it feels to me that it would be good for journalism to engage with those possibilities a bit more and treat them as hypotheticals, but engage with those hypotheticals, I guess. So if we do have AGI two years from now, what does that mean? What does that mean for the economy? What does that mean for politics? What does that mean for climate? What does that mean for catastrophic risks? What does that mean for non-catastrophic risks? I think that’s worth engaging with a bit more. And I think that part of the disconnect is I still see lots of journalists who (I think) think that the AGI timelines discussion is marketing hype. I think it would be good for people to realize that this is actually a much more sincere belief than that. I think this isn’t just marketing hype. I think these people think they’re going to do it, and I think that there are lots of good reasons to believe that they will do it.

Daniel Filan (10:25): Yeah. One thing I find interesting… From my own perspective of how AI is covered in the news media: you’ll have outlets like the New Yorker that do… you’ll have these profiles. Somebody will do a profile on Katja Grace or Scott Alexander or Jaime Sevilla. I think in these profiles you often see the profile basically being very sympathetic, at least to the sincereness and to some degree the basic rationality of people who think that AI is coming really soon and it’s really scary, but somehow this feels disconnected from coverage. I don’t see the New York Times having a big… I don’t read the New York Times, but I really bet that at no point in the last month has there been a big column being like, will Harris or Trump be better for artificial general intelligence catastrophes? [NB: a few days after recording, such an opinion column did appear] I wonder if you have a take about why there’s that disconnect between these two different bits of it.

Shakeel Hashim (11:39): Yeah, I think many journalists treat the ideas that people in the AI ecosystem have as being kooky interesting ideas. And I think they’re willing to accept that some people believe them, like you say in the profiles that you see, but I think they treat them as almost similar to how you would treat other weird beliefs. It’s like, “there are these people who think this crazy thing. Isn’t that interesting?”

Daniel Filan (12:14): Yeah. I guess you have profiles of Christian Dispensationalists, but there’s not a column about whether Harris or Trump will bring in the second coming sooner.

Shakeel Hashim (12:24): Yeah, I think to do the latter, there does need to be some internalization, and I think that most journalists at least just don’t buy it.

Tarbell

Daniel Filan (12:42): So maybe this gets to your efforts at Tarbell. Can you say a bit more what you’re trying to do?

Shakeel Hashim (12:50): We are trying to encourage and create a community of journalists who cover AI with the seriousness we think it deserves. We’re doing that in a few ways. We have a fellowship program where we take early career or aspiring journalists, we teach them about AI (with the help of lots of experts), we teach them journalism skills (again with the help of lots of experts), and then we have placement programs for them, where they then go and work in a media organization and report on AI with a hope that they build up their skills. It’s great because it means we end up with more AI journalists and hopefully they’re well-informed and well-equipped to do that work really well.

Shakeel Hashim (13:44): We also have a journalist-in-residence program where we take mid-career or experienced journalists and we help support them so that they can dive deep in something. So we’ve had one person who was transitioning from being a crypto reporter to being an AI reporter, and so they just spent a bunch of time building up their sources, learning about AI, getting really deep in this to try and understand it. We’ve got someone else who is going to join to work on China AI reporting because that feels like a really neglected area where there’s scope for really good reporting to be done.

Shakeel Hashim (14:27): And then we just launched this grants program where we will fund freelancers and staff journalists to pursue impactful AI reporting projects. So that’s the kind of thing that requires more time and more resources than a journalist can typically get. And in that we’re interested in funding work on AI harms, the kind of stuff that’s going on inside AI companies, policy efforts that AI companies are making, how regulators are struggling to regulate AI because of budgetary concerns or other things. And also just general explainers of complicated topics that we wish more people in the world understood.

Daniel Filan (15:09): How much do you think the difficulty is resources versus knowledge? Do you see your mission as just getting people in who are interested and building up their skills? Or is it that, even with journalists who are interested in AI, there are literally just some facts that you might not know, and if you don’t know them, you can’t report as well?

Shakeel Hashim (15:31): Yeah, I think it’s a mix. I do think the main thing is funding, which is why most of our programs are built around that. I think there are lots of people who want to and are capable of doing really good work on this, but there just isn’t the money to support them. I do think there’s some education element to this. I mean, we spend a lot of time in the fellowship on connecting our fellows with really great experts who they might not otherwise come across: basically so that they can learn from them during the fellowship curriculum, but then also so that they can have them as resources going forward, and if they’re writing on a topic, they know who are the right people to reach out to who have really deep knowledge on this.

Shakeel Hashim (16:28): And I think that there’s definitely something I’m interested in exploring further: are there ways we can help bridge the gaps between what the experts think and are working on and what journalists know about? Because I think there is probably scope to do a bunch of that. There’s been some really good work in the climate space on this, where there are a few organizations, who I think we take some inspiration from, who try to connect journalists with experts to help journalists dive deeper into a topic than they might otherwise be able to.

Daniel Filan (17:06): What are these organizations?

Shakeel Hashim (17:07): I can’t remember the names of them off the top of my head. I think the Climate Journalism Network is one, but I can’t remember if they’re the one I’m thinking of: they all have very similar acronyms and names. Hard to keep track of, unfortunately.

Daniel Filan (17:21): Actually, speaking of names, a thing that has just been bugging me: Tarbell strikes me as an unusual name. I feel like in the EA artificial intelligence space, every org has roughly the same name format. You’re either a Center or you’re an Institute, it’s probably the future of something, and it’s either humanity or AI or life or something. But yeah, Tarbell is not the same name scheme. Do you know what the name is about?

Shakeel Hashim (17:53): Yeah. So I can’t take credit for it. Cillian [Crosson], the executive director of the organization, came up with it, but it’s named after Ida Tarbell, who was one of the first… some people credit her with pioneering modern investigative journalism. So she did a bunch of work into Standard Oil around the turn of the 20th century, and her work ended up resulting in the breakup of Standard Oil and breaking the oil monopoly, and was just super important and super impactful. So yeah, we’re inspired by the work she did all those years ago, and we’d love to see more work like that in other areas.

The Transformer newsletter

Daniel Filan (18:44): Gotcha. And so closing up, I’d like to talk a little bit about your Substack: Transformer, it’s called. For those who haven’t checked it out, what are you trying to do with it?

Shakeel Hashim (18:55): A few things. So the first is: there is so much AI news every week. So much stuff happens and it is basically impossible for anyone to keep track of it.

Shakeel Hashim (19:08): And there are lots of good AI newsletters out there, but the ones from media organizations in particular tend to focus mostly on the content that has come out of that media organization, so they’re slightly less comprehensive. Then there are some that are more focused on specific areas. So you get some that are more focused on fundraising deals, some that are more focused on policy, but there isn’t really one place where you can go to get everything. And so with my weekly digest, the aim is to be as comprehensive as possible, but as fast as possible. So there’s only really a sentence on each thing. I elaborate a bit on the bigger stories, but the majority is a few words that tell you more or less what you need to know, but you can click through to learn more.

Shakeel Hashim (20:01): So with that, I’m aiming to just try and make everyone more informed. Lots of people in the AI ecosystem read it, which I’m delighted by. Lots of journalists read it, which I’m delighted by. Quite a lot of policymakers read it. And it’s just an attempt to make sure that people are keeping up with what’s happening, because it’s such a hard field to stay abreast of.

Shakeel Hashim (20:24): The other [thing] is, then, with my own reporting and analysis, I want to try and draw attention to things that I think aren’t getting the attention they deserve, highlight arguments I think are good arguments… There’s often quite a lot of tacit knowledge and arguments I guess in this space, I find. Compute thresholds are a good example, which I wrote about this week, where I think lots of people have very good reasons for why they think compute thresholds are good, but I think lots of them haven’t been elucidated as well as they could be, and especially not in a really short-form fashion.

Shakeel Hashim (21:16): So for stuff like that, where I think there’s a really good argument to be made, but I’ve not seen the good version of the argument be made simply, I hope to be able to do some of that drawing attention to stuff. So I did quite a lot of reporting on SB-1047 and the lobbying campaigns against that. I think I worked with Garrison Lovely on a piece about the very misleading claims that Andreessen Horowitz and Fei-Fei Li had made about the bill. That got quite a lot of attention, which I was excited about because again, that’s one thing where lots of people were talking about it on Twitter, but I wanted to have a more concrete, well-reported thing explaining exactly what was going on there. So yeah, I guess the aim is to just improve people’s understanding of AI and AI policy in particular.

Daniel Filan (22:16): That kind of reminds me of… I’ve also found this in the AI safety ecosystem. There’s just a bunch of stuff that people talk about but have not necessarily written or published anywhere. And this is less true now than I think it used to be, in part because more people are just, I don’t know, spending their free time getting in arguments in the comments section on the Alignment Forum or something, which genuinely I think is a great public service. But yeah, so often there’s stuff that people will just say in conversation, and if you can record it that’s great. And it doesn’t surprise me that the same thing is true in AI somewhat more generally.

Shakeel Hashim (22:58): Yeah.

Daniel Filan (22:59): So thanks for chatting with me, and if people are interested in AI or found Shakeel interesting, you should definitely check out Shakeel’s newsletter, Transformer.

Shakeel Hashim (23:10): Thank you very much.

Daniel Filan (23:11): This episode was edited by Kate Brunotts, and Amber Dawn Ace helped with transcription. The opening and closing themes are by Jack Garrett. Financial support for this episode was provided by the Long-Term Future Fund, along with patrons such as Alexey Malafeev. To read a transcript of the episode, or to learn how to support the podcasts yourself, you can visit axrp.net. Finally, if you have any feedback about this podcast, you can email me at feedback@axrp.net.
