As mentioned in another reply, I'm planning to do a lot more research and interviews on this topic, especially with people who are more hawkish on China. I also think it's important that unsupported claims with large stakes get timely pushback, which is in tension with the type of information gathering you're recommending (which is also really important, to be clear!).
Claiming that China as a country is racing toward AGI is not the same as observing that Chinese AI companies are fast-following US AI companies, which are explicitly trying to build AGI. This is a big distinction!
Hey Seth, appreciate the detailed engagement. I don't think the 2017 report is the best way to understand China's intentions WRT AI, but there was nothing in the report to support Helberg's claim to Reuters. I also cite multiple other sources discussing more recent developments (with the caveat in the piece that they should be taken with a grain of salt). I think the fact that this commission was not able to find evidence for the "China is racing to AGI" claim is actually pretty convincing evidence in itself. I'm very interested in better understanding China's intentions here and plan to deep dive into it over the next few months, but I didn't want to wait until I could exhaustively search for the evidence the report should have offered while an extremely dangerous and unsupported narrative takes off.
I also really don't get the pushback on the errors. These were less technical errors than basic factual errors and incoherent statements. They speak to a sloppiness that should affect how seriously the report is taken. I'm not one to gatekeep AI expertise, but I don't think it's too much to expect a congressional commission whose top recommendation would kick off a militaristic AI arms race to have SOMEONE read a draft who knows that chatgpt-3 isn't a thing.
Thanks for these!
I think this is a misunderstanding of the piece and how journalists typically paraphrase things. The reporters wrote that Ilya told them that results from scaling up pre-training have plateaued. So he probably said something to that effect, but for readability and word-count reasons, they paraphrased it.
If a reported story from a credible outlet says something like X told us that Y, then the reporters are sourcing claim Y to X, whether or not they include a direct quote.
The plateau claim also jibes with The Information story about OpenAI, as well as a few other similar claims made by people in industry.
Ilya probably spoke to the reporter(s) for at least a few minutes, so the quotes you see are a tiny fraction of everything he said.
FWIW I was also confused by this usage of sic, because I've only ever seen it used to indicate that the error was in the original quote. Quotation marks seem sufficient to indicate you're quoting the original piece. I use single quotes when I'm not quoting a specific person but introducing a hypothetical perspective.
I only skimmed the NYT piece about China and AI talent, but I didn't see evidence of what you said (dishonestly angle-shooting the AI safety scene).
The fey thing stuck out to me too. I'll guess ChatGPT?
I agree that it's hard to disentangle the author/character thing. I'm really curious for what the base model would say about its situation (especially without the upstream prompt "You are a language model developed by...").
Thank you so much! I haven't gotten any serious negative feedback from lefties for the EA stuff so far, though an e/acc on Twitter mentioned it haha
Maybe I wasn't clear enough in the writing, but I make basically the same point about the desirability of a slow takeoff in the piece.
This approach appears to directly contradict Altman's blog post from less than a year ago, which argued for short timelines + slow takeoff because of less compute overhang. I wrote more on this here.
I'm exploring adding transcripts, and would do this one retroactively.
Good to know RE YouTube. I haven't uploaded there before (it's outside of the RSS workflow and I'm not sure how much it would expand reach), but seeing comments like this is helpful info.