LessWrong 2.0 Reader
For which parts do you feel cringe?
owencb on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
I think point 2 is plausible but doesn't strongly support the idea that it would eliminate the biosphere; if it cared even a little, it would be fairly cheap for it to take some actions to preserve at least a version of the biosphere (including humans), even while starlifting the sun.
Point 1 is the argument which I most see as supporting the thesis that misaligned AI would eliminate humanity and the biosphere. And then I'm not sure how robust it is (it seems premised partly on translating our evolved intuitions about discount rates over to imagining the scenario from the perspective of the AI system).
localdeity on Should I Finish My Bachelor's Degree?
On the angle of demonstrating that you can learn the material and the skills and generally proving your math mettle: Can you study the books, do a sampling of the problems in the back of each chapter until you think you've mastered it, and then take the tests directly, without being signed up for a class? Maybe find old exams, perhaps from other institutions (surely someone somewhere has published an exam on each subject)? Or, for that matter, print out copies of old Putnam contests, set a timer, and see how well you do?
As someone who never entered college in the first place, I consider it a prosocial thing to make college degrees less correlated with competence. Don't add to the tragedy of that commons!
robo on Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom)
(Boring meta note) Since this is a post, not a comment, agreement karma votes and regular karma votes are conflated.
yair-halberstadt on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
I'm taking this article as being predicated on the assumption that AI drives humans to extinction, i.e., given that an AI has destroyed all human life, it will most likely also destroy almost all nature.
Which seems reasonable for most models of the sort of AI that kills all humans.
An exception could be an AI that kills all humans in self defense, because they might turn it off first, but sees no such threat in plants/animals.
owencb on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
Wait, how does the grabby aliens argument support this? I understand that it points to "the universe will be carved up between expansive spacefaring civilizations" (without reference to whether those are biological or not), and also to "the universe will cease to be a place where new biological civilizations can emerge" (without reference to what will happen to existing civilizations). But am I missing an inferential step?
johannes-c-mayer on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University
I have this description, but it's not that good because it's very unfocused. That's why I did not link it in the OP. The LessWrong dialogue linked at the top of the post is probably the best thing in terms of describing the motivation and what the project is about at a high level.
stephen-fowler on Stephen Fowler's Shortform
That's a good point. You have pushed me towards thinking that this is an unreasonable statement and that "predicted this problem at the time" is better.
thomas-kwa on robo's Shortform
As recently as early 2023, Eliezer was very pessimistic about AI policy efforts amounting to anything, to the point that he thought anyone trying to do AI policy was hopelessly naive and should first try to ban biological gain-of-function research just to understand how hard policy is. Given how influential Eliezer is, he loses a lot of points here (and I guess Hendrycks wins?)
Then Eliezer updated and started e.g. giving podcast interviews. Policy orgs spun up and there are dozens of safety-concerned people working in AI policy. But this is not reflected in the LW frontpage. Is this inertia, or do we like thinking about computer science more than policy, or is it something else?
jaan on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
the three most convincing arguments i know for OP's thesis are:
1. atoms on earth are "close by" and thus much more valuable to a fast-running ASI than the atoms elsewhere.
2. (somewhat contrary to the previous argument) an ASI will be interested in quickly reaching the edge of the hubble volume, as that's slipping behind the cosmic horizon, so it will starlift the sun for its initial energy budget.
3. robin hanson's "grabby aliens" argument: witnessing a super-young universe (as we do) is strong evidence against it remaining compatible with biological life for long.
that said, i'm also very interested in the counterarguments (so thanks for linking to paul's comments!), especially if they'd suggest actions we could take in preparation.