LessWrong 2.0 Reader
(Boring meta note) Since this is a post, not a comment, agreement karma votes and regular karma votes are conflated.
yair-halberstadt on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
I'm taking this article as being predicated on the assumption that AI drives humans to extinction. I.e., given that an AI has destroyed all human life, it will most likely also destroy almost all nature.
Which seems reasonable for most models of the sort of AI that kills all humans.
An exception could be an AI that kills all humans in self-defense, because they might turn it off first, but sees no such threat from plants and animals.
owencb on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
Wait, how does the grabby aliens argument support this? I understand that it points to "the universe will be carved up between expansive spacefaring civilizations" (without reference to whether those are biological or not), and also to "the universe will cease to be a place where new biological civilizations can emerge" (without reference to what will happen to existing civilizations). But am I missing an inferential step?
johannes-c-mayer on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University
I have this description, but it's not that good, because it's very unfocused. That's why I did not link it in the OP. The LessWrong dialog linked at the top of the post is probably the best thing in terms of describing the motivation and what the project is about at a high level.
stephen-fowler on Stephen Fowler's Shortform
That's a good point. You have pushed me towards thinking that this is an unreasonable statement and that "predicted this problem at the time" is better.
thomas-kwa on robo's Shortform
As recently as early 2023, Eliezer was very pessimistic about AI policy efforts amounting to anything, to the point that he thought anyone trying to do AI policy was hopelessly naive and should first try to ban biological gain-of-function research just to understand how hard policy is. Given how influential Eliezer is, he loses a lot of points here (and I guess Hendrycks wins?).
Then Eliezer updated and started, e.g., giving podcast interviews. Policy orgs spun up, and there are now dozens of safety-concerned people working in AI policy. But this is not reflected on the LW frontpage. Is this inertia, or do we like thinking about computer science more than policy, or is it something else?
jaan on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
three most convincing arguments i know for OP's thesis are:
1. atoms on earth are "close by" and thus much more valuable to a fast-running ASI than atoms elsewhere.
2. (somewhat contrary to the previous argument) an ASI will be interested in quickly reaching the edge of the hubble volume, as that's slipping behind the cosmic horizon, so it will starlift the sun for its initial energy budget.
3. robin hanson's "grabby aliens" argument: witnessing a super-young universe (as we do) is strong evidence against it remaining compatible with biological life for long.
that said, i'm also very interested in the counterarguments (so thanks for linking to paul's comments!), especially if they'd suggest actions we could take in preparation.
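As a rough back-of-the-envelope for the second point (both input numbers are assumptions for illustration, not from the comment): take the reachable comoving radius to be R ≈ 1.65 × 10^10 light-years, shrinking by about one light-year per year of delay as expansion carries galaxies past the cosmic event horizon. Since the number of reachable galaxies scales as N ∝ R³,

\[
\frac{\dot{N}}{N} = 3\,\frac{\dot{R}}{R} \approx \frac{3 \times 1\ \mathrm{ly/yr}}{1.65 \times 10^{10}\ \mathrm{ly}} \approx 2 \times 10^{-10}\ \mathrm{yr}^{-1},
\]

which, with on the order of 10^10 galaxies in reach, works out to a few galaxies slipping permanently out of reach per year of delay. That is the usual quantitative gloss on why even a patient ASI would be in a hurry.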
tsvibt on robo's Shortform
The science behind nuclear weapons is well known -- what keeps them from being built is access to nuclear materials.
This is not true for AGI.
robo on robo's Shortform
I think this is the sort of conversation we should be having! [Side note: I think restricting compute is more effective than restricting research, because you don't need 100% buy-in.
The analogy to nuclear weapons is, I think, a good one. The science behind nuclear weapons is well known -- what keeps them from being built is access to nuclear materials.
(Restricting compute also seriously restricts research. Research speed on neural nets is in large part bounded by how many experiments you run rather than ideas you have.)]
it-is on Feeling Rational
Suppose that, as a Buddhist or a lobotomy patient or just a very phlegmatic person, finding your shoelaces tied together didn't make you angry.
Buddhists, lobotomy patients, and phlegmatic people all have things in their closets; they all have things to get angry, upset, or confused about. If you are a Buddhist, a lobotomy patient, or a phlegmatic person, you still see through a particular narrative worldview. What you see does not change, because there will always be something to get tied up on. It is about shaping the things that you do get tied up on, and, further, controlling your reaction to them.
The goblin is the writer.
The closet is the map and territory of your mind.
The shoelaces tie together connections in your mind.
The shoes are the words.
If you read an article here that makes you angry, that is okay. The writer has simply made a connection in your mind that you believe imposes a negative feeling upon you. You feel angry because of the map and territory of your mind, not because of what is written.
Perception of the writer as the goblin disappears when you can change the shape of the map or territory.
Who is a goblin to steal your gold, but a leprechaun at the end of the rainbow?