Posts

China Hawks are Manufacturing an AI Arms Race 2024-11-20T18:17:51.958Z
Is Deep Learning Actually Hitting a Wall? Evaluating Ilya Sutskever's Recent Claims 2024-11-13T17:00:01.005Z
Miles Brundage resigned from OpenAI, and his AGI readiness team was disbanded 2024-10-23T23:40:57.180Z
The Tech Industry is the Biggest Blocker to Meaningful AI Safety Regulations 2024-08-16T19:37:28.416Z
My article in The Nation — California’s AI Safety Bill Is a Mask-Off Moment for the Industry 2024-08-15T19:25:59.592Z
Podcast with Yoshua Bengio on Why AI Labs are “Playing Dice with Humanity’s Future” 2024-05-10T17:23:20.436Z
Claude Doesn’t Want to Die 2024-03-05T06:00:05.122Z
My cover story in Jacobin on AI capitalism and the x-risk debates 2024-02-12T23:34:16.526Z
Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy 2024-02-10T19:52:55.191Z
Podcast: The Left and Effective Altruism with Habiba Islam 2022-10-27T17:41:05.136Z

Comments

Comment by garrison on China Hawks are Manufacturing an AI Arms Race · 2024-11-21T04:18:18.057Z · LW · GW

As mentioned in another reply, I'm planning to do a lot more research and interviews on this topic, especially with people who are more hawkish on China. I also think it's important that unsupported claims with large stakes get timely pushback, which is in tension with the type of information gathering you're recommending (which is also really important, TBC!).

Comment by garrison on China Hawks are Manufacturing an AI Arms Race · 2024-11-21T04:15:34.339Z · LW · GW

Claiming that China as a country is racing toward AGI != claiming that Chinese AI companies are fast-following US AI companies, which are explicitly trying to build AGI. This is a big distinction!

Comment by garrison on China Hawks are Manufacturing an AI Arms Race · 2024-11-21T04:12:47.857Z · LW · GW

Hey Seth, appreciate the detailed engagement. I don't think the 2017 report is the best way to understand China's intentions WRT AI, but there was nothing in the report to support Helberg's claim to Reuters. I also cite multiple other sources discussing more recent developments (with the caveat in the piece that they should be taken with a grain of salt). I think the fact that this commission was not able to find evidence for the "China is racing to AGI" claim is actually pretty convincing evidence in itself. I'm very interested in better understanding China's intentions here and plan to deep dive into it over the next few months, but I didn't want to wait until I could exhaustively search for the evidence that the report should have offered while an extremely dangerous and unsupported narrative takes off.

I also really don't get the pushback on the errors. These really were less technical errors than basic factual errors and incoherent statements. They speak to a sloppiness that should affect how seriously the report is taken. I'm not one to gatekeep AI expertise, but I don't think it's too much to expect a congressional commission whose top recommendation is to commence a militaristic AI arms race to have SOMEONE read a draft who knows that "ChatGPT-3" isn't a thing.

Comment by garrison on Is Deep Learning Actually Hitting a Wall? Evaluating Ilya Sutskever's Recent Claims · 2024-11-14T16:16:32.681Z · LW · GW

Thanks for these!

Comment by garrison on Is Deep Learning Actually Hitting a Wall? Evaluating Ilya Sutskever's Recent Claims · 2024-11-13T22:10:58.442Z · LW · GW

I think this is a misunderstanding of the piece and of how journalists typically paraphrase things. The reporters wrote that Ilya told them that results from scaling up pre-training have plateaued. So he probably said something to that effect, but for readability and word-count reasons, they paraphrased it.

If a reported story from a credible outlet says something like X told us that Y, then the reporters are sourcing claim Y to X, whether or not they include a direct quote. 

The plateau claim also jibes with The Information story about OpenAI, as well as a few other similar claims made by people in industry.

Ilya probably spoke to the reporter(s) for at least a few minutes, so the quotes you see are a tiny fraction of everything he said.

Comment by garrison on Against Aschenbrenner: How 'Situational Awareness' constructs a narrative that undermines safety and threatens humanity · 2024-07-17T15:40:23.830Z · LW · GW

FWIW I was also confused by this usage of sic, because I've only ever seen it used to indicate that the error was in the original quote. Quotation marks seem sufficient to indicate you're quoting the original piece. I use single quotes when I'm not quoting a specific person, but introducing a hypothetical perspective.

Comment by garrison on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-28T22:38:54.779Z · LW · GW

I only skimmed the NYT piece about China and AI talent, but didn't see evidence of what you said (dishonestly angle shooting the AI safety scene).

Comment by garrison on Claude Doesn’t Want to Die · 2024-03-05T18:13:05.091Z · LW · GW

The fey thing stuck out to me too. I'll guess ChatGPT?

I agree that it's hard to disentangle the author/character thing. I'm really curious what the base model would say about its situation (especially without the upstream prompt "You are a language model developed by..."). 

Comment by garrison on My cover story in Jacobin on AI capitalism and the x-risk debates · 2024-02-14T03:18:35.996Z · LW · GW

Thank you so much! I haven't gotten any serious negative feedback from lefties for the EA stuff so far, though an e/acc on Twitter mentioned it haha

Comment by garrison on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-12T21:56:32.521Z · LW · GW

Maybe I wasn't clear enough in the writing, but I make basically the same point about the desirability of a slow takeoff in the piece. 

Comment by garrison on OpenAI wants to raise 5-7 trillion · 2024-02-10T19:56:06.113Z · LW · GW

This approach appears to directly contradict Altman's blog post from less than a year ago, which argued for short timelines + slow takeoff because of less compute overhang. I wrote more on this here.

Comment by garrison on Podcast: The Left and Effective Altruism with Habiba Islam · 2022-10-28T15:50:51.042Z · LW · GW

I'm exploring adding transcripts, and would do this one retroactively. 

Good to know RE YouTube. I haven't uploaded there before (it's outside of the RSS workflow, and I'm not sure how much it would expand reach), but seeing comments like this is helpful info.