Posts

NeuroAI for AI safety: A Differential Path 2024-12-16T13:17:12.527Z
Language models can explain neurons in language models 2023-05-09T17:29:35.207Z
The Quantization Model of Neural Scaling 2023-03-31T16:02:47.257Z
GPT-4 2023-03-14T17:02:02.276Z

Comments

Comment by nz on GPT-4 · 2023-03-14T17:31:10.804Z · LW · GW

I believe it will be made available to ChatGPT Plus subscribers, but I don't think it's available yet.


EDIT: As commenters below mentioned, it is available now (and had already been for some users at the time of this message).

Comment by nz on Who are some prominent reasonable people who are confident that AI won't kill everyone? · 2022-12-06T09:48:59.966Z · LW · GW

+1 for Quintin. I would also suggest this comment here.

Comment by nz on Who are some prominent reasonable people who are confident that AI won't kill everyone? · 2022-12-06T09:35:04.517Z · LW · GW

When it comes to "accelerating AI capabilities isn't bad" I would suggest Kaj Sotala and Eric Drexler with his QNR and CAIS. Interestingly, Drexler has recently left AI safety research and returned to atomically precise manufacturing, as he now worries less about AI risk in general. Chris Olah also believes that interpretability-driven capabilities advances are not bad, in that the positives outweigh the negatives for AGI safety.


For more general AI and alignment optimism I would also suggest Rohin Shah. See also here.