LessWrong 2.0 Reader
As recently as early 2023, Eliezer was very pessimistic about AI policy efforts amounting to anything, to the point that he thought anyone trying to do AI policy was hopelessly naive and should first try to ban biological gain-of-function research just to understand how hard policy is. Given how influential Eliezer is, he loses a lot of points here (and I guess Hendrycks wins?)
Then Eliezer updated and started, e.g., giving podcast interviews. Policy orgs spun up, and there are now dozens of safety-concerned people working in AI policy. But this is not reflected on the LW frontpage. Is this inertia, or do we like thinking about computer science more than policy, or is it something else?
jaan on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"three most convincing arguments i know for OP’s thesis are:
atoms on earth are “close by” and thus much more valuable to a fast-running ASI than the atoms elsewhere.
(somewhat contrary to the previous argument), an ASI will be interested in quickly reaching the edge of the hubble volume, as that’s slipping behind the cosmic horizon — so it will starlift the sun for its initial energy budget.
robin hanson’s “grabby aliens” argument: witnessing a super-young universe (as we do) is strong evidence against it remaining compatible with biological life for long.
that said, i’m also very interested in the counterarguments (so thanks for linking to paul’s comments!) — especially if they’d suggest actions we could take in preparation.
tsvibt on robo's Shortform
The science behind nuclear weapons is well known -- what keeps them from being built is access to nuclear materials.
This is not true for AGI.
robo on robo's Shortform
I think this is the sort of conversation we should be having! [Side note: I think restricting compute is more effective than restricting research because you don't need 100% buy-in.
The analogy to nuclear weapons is, I think, a good one. The science behind nuclear weapons is well known -- what keeps them from being built is access to nuclear materials.
(Restricting compute also seriously restricts research. Research speed on neural nets is in large part bounded by how many experiments you run rather than ideas you have.)]
it-is on Feeling Rational
Suppose that, as a Buddhist or a lobotomy patient or just a very phlegmatic person, finding your shoelaces tied together didn’t make you angry.
Buddhists, lobotomy patients, and phlegmatic people all have things in their closets; they all have things to get angry, upset, or confused about. Even if you are a Buddhist, a lobotomy patient, or phlegmatic, you still see a particular narrative worldview. What you see does not change, because there will always be something to get tied up on. The point is shaping which things you get tied up on, and further controlling your reaction to them.
The goblin is the writer
The closet is the map and territory of your mind
The shoelaces tie together connections in your mind
The shoes are the words
If you read an article here that makes you angry, that is okay. The writer has simply made a connection in your mind that you believe imposes a negative feeling upon you. You feel angry because of the map and territory of your mind, not because of what is written.
Perception of the writer as the goblin disappears when you can change the shape of the map or territory.
Who is a goblin to steal your gold but a leprechaun at the end of the rainbow?
I would like to recommend to Johannes that he try to write and post content in a way that evokes less cringe in people. I know it does evoke that, because I personally feel cringe.
Still, I think there isn’t much objectively bad about this post. I’m not saying the post is very good or convincing. I think its style is super weird, but that should be considered okay in this community. These thoughts remind me of something Scott Alexander once wrote: that sometimes he hears someone say true but low-status things, and his automatic thought is that the person must be stupid to say something like that, and he has to consciously remind himself that what was said is actually true.
Also, all these thoughts about this social reality sadden me a little - why oh why is AI safety such a status-concerned and “serious business” area nowadays?
it-is on Why Truth?
Science was noticing that certain modes of thinking uncovered beliefs that let us manipulate the world
Science is our tool to manipulate the world; science is the instrument of truth. You cannot manipulate the meaning of reality through the definition of words such as 'rational', deciding what it is and isn't. Science is the substitute for truth, and rationality is the substitute for truth also. Taking this as self-evident, the definition of rationality is substituted for that of science, and this forms our basic definition of rationality moving forward. Take care not to define science in terms of rationality: science is a tool, as is rationality.
In rationality, as in science, 'curiosity, pragmatism, and quasi-moral injunctions' are injected into research questions and colour our understanding of both the world and truth.
To use science as a tool is to obtain the truth; to use rationality is to obtain the truth. We must apply a scientific approach and method to our rationality.
Great post, important concepts. Sharing it everywhere.
There was one piece, though, that I couldn't intuitively grasp, so maybe one of you could help me understand: what is it about video games that is out to get you? ("So do video games.") Elsewhere, Zvi speaks about F2P games; is it about those and their addiction-inducing Skinner boxes? If it's about video games in general, I would love to learn how they are more out to get us than, say, novels.
tsvibt on robo's Shortform
Well, I asked this (https://www.lesswrong.com/posts/X9Z9vdG7kEFTBkA6h/what-could-a-policy-banning-agi-look-like) but roughly no one was interested -- I had to learn about "born secret" (https://en.wikipedia.org/wiki/Born_secret) from Eric Weinstein in a YouTube video.
FYI, while restricting compute manufacture is, I would guess, net helpful, it's far from a solution. People can make plenty of conceptual progress given current levels of compute: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce . It's not a way out, either. There are possible ways out, but approximately no one is interested in them.
localdeity on On Privilege
I grew up knowing "privilege" to mean a special right that was granted to you based on your job/role (like free food for those who work at some restaurants) or perhaps granted by authorities due to good behavior (and would be taken away for misusing it). Note also that the word itself, "privi"-"lege", means "private law": a law that applies to you in particular.
Rights and laws are social things, defined by how others treat you. To say that your physical health is a privilege therefore seems like either a category error, or a claim that other people treated you better in a way that gave you your better physical health, which then raises questions like "What made you deserve that treatment?" or perhaps "Is it really because of how other people treated you, or other reasons like genetics or having made healthier life choices?". The latter may then lead to "Yeah, but you grew up being taught better and/or in a situation where healthy choices were more affordable, which are probably partly caused by wealth and are privilege", both of which might be counter-argued in the specific person's case or in general, and so on. Social justice arguments ensue.
"Advantage" seems like a more neutral term, one that doesn't inherently imply fairness-laden claims about how you got it. I would recommend it.