New Blog Post Against AI Doom
post by Noah Birnbaum (daniel-birnbaum) · 2024-07-29T17:21:29.633Z · LW · GW · 5 comments
This is a link post for https://substack.com/home/post/p-147019742
I'm curious what y'all think of the points made in this post against AI risk, written by two AI researchers at Princeton. If you have reason to think any of the points are particularly good or bad, write it in the comments below!
Comments sorted by top scores.
comment by harfe · 2024-07-29T18:00:13.676Z · LW(p) · GW(p)
This was already referenced here: https://www.lesswrong.com/posts/MW6tivBkwSe9amdCw/ai-existential-risk-probabilities-are-too-unreliable-to [LW · GW]
I think it would be better to comment there instead of here.
↑ comment by Oleg Trott (oleg-trott) · 2024-07-29T19:34:12.316Z · LW(p) · GW(p)
That post was completely ignored here: 0 comments and 0 upvotes during the first 24 hours.
I don't know if it's the timing or the content.
On HN, which is where I saw it, it was briefly ranked #1, as I recall. But then it apparently got "flagged".
↑ comment by Noah Birnbaum (daniel-birnbaum) · 2024-07-29T18:37:42.206Z · LW(p) · GW(p)
Good point!
comment by Seth Herd · 2024-07-30T23:00:24.906Z · LW(p) · GW(p)
This post was worth looking at, although its central argument is deeply flawed.
I commented on the other linkpost: https://www.lesswrong.com/posts/MW6tivBkwSe9amdCw/ai-existential-risk-probabilities-are-too-unreliable-to?commentId=fBsrSQBgCLZd4zJHj [LW(p) · GW(p)]