LessWrong 2.0 Reader
I think this post might suffer from the lack of a distinction between karma and agreement/disagreement at the post level. I don't think it deserves negative karma, but with this range of topics, it is certain to elicit a lot of disagreement.
Of course, one meta-issue is the diversity of opinion, both in the AI community and in the AI existential safety community.
The diversity of opinion in the AI community is huge, but it is somewhat obfuscated by "money, compute, and SOTA success" effects, which tend to create an artificial impression of consensus when one looks from the outside. Yet people often move from leading orgs to pursue less standard approaches, in particular because large orgs are often not so friendly to those non-standard approaches.
The diversity of opinion in the AI existential safety community is at least as large (and probably larger, which is natural given that the field is much younger and its progress much less certain). In addition, that diversity is less obfuscated, because the field has nothing resembling the highly successful center of Transformer-based LLMs around which people can consolidate.
I doubt that the diversity of opinion in the AI existential safety community is likely to decrease, and I doubt that such a decrease would be desirable.
Another meta-issue is how much we should agree on the super-importance of compute. On this meta-issue, the consensus in the AI community and in the AI existential safety community is very strong (and in the case of the AI existential safety community, the reason for this consensus is that compute is, at least, the lever one could plausibly hope to regulate).
But is it actually that unquestionable? Even with Microsoft backing OpenAI, Google should have always been ahead of OpenAI if it were just a matter of raw compute.
The Llama-3-70B training run took only a few million GPU hours, so the cost of training can't have been much more than 10 million dollars, and it is a model roughly equivalent in power to early GPT-4.
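To make the arithmetic behind that claim explicit, here is a minimal back-of-envelope sketch; the 6.4 million GPU hours and the $1.50-per-GPU-hour rate are assumptions chosen for illustration, not figures taken from this comment:

```python
# Back-of-envelope estimate of the Llama-3-70B training cost.
# Both inputs below are assumptions for illustration, not official figures.

gpu_hours = 6.4e6          # assumed training run on the order of millions of GPU hours
cost_per_gpu_hour = 1.50   # assumed amortized USD cost per GPU hour on an in-house cluster

total_cost_usd = gpu_hours * cost_per_gpu_hour
print(f"Estimated training cost: ~${total_cost_usd / 1e6:.1f}M")  # ~$9.6M under these assumptions
```

Under these assumptions the total stays in the ballpark of 10 million dollars; even doubling the per-hour rate keeps it far below the reported budgets of the largest labs.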
I think that non-standard architectural and algorithmic breakthroughs can easily make smaller players competitive, especially as the inertia of adhering to "what has been proven before" will inhibit the largest players.
Then, finally, there is the way conversations keep focusing on "AGI", both in the AI community and in the AI existential safety community.
But for the purpose of existential safety, we should not focus on "AGI" (whatever that might be). We should focus on the much narrower ability of AI systems to accelerate AI research and development.
Here we are very close. For example, John Schulman, in his latest podcast with Dwarkesh, said:
Even in one or two years, we'll find that the models can do a lot more involved tasks than they can do now. For example, you could imagine having the models carry out a whole coding project instead of it giving you one suggestion on how to write a function. You could imagine the model taking high-level instructions on what to code and going out on its own, writing any files, and testing it, and looking at the output. It might even iterate on that a bit. So just much more complex tasks.
OK, so we are likely to get that (I don't think he is over-optimistic here), and the models are already quite capable of discussing AI research papers and exhibit good comprehension of them (that's one of my main use cases for LLMs: to help me understand an AI research paper better and faster). And they will get better at that as well.
This combination of the coming ability of LLMs to carry out end-to-end software projects on their own and their increasing competence in comprehending AI research sounds like a good reason to anticipate a rapidly intensifying phenomenon of AI systems accelerating AI research and development in the very near future. Hence the anticipation of very short timelines by many people (although this is still a minority view, even in AI existential safety circles).
johannes-c-mayer on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University
For which parts do you feel cringe?
owencb on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"I think point 2 is plausible but doesn't super support the idea that it would eliminate the biosphere; if it cared a little, it could be fairly cheap to take some actions to preserve at least a version of it (including humans), even if starlifting the sun.
Point 1 is the argument which I most see as supporting the thesis that misaligned AI would eliminate humanity and the biosphere. And then I'm not sure how robust it is (it seems premised partly on translating our evolved intuitions about discount rates over to imagining the scenario from the perspective of the AI system).
localdeity on Should I Finish My Bachelor's Degree?
On the angle of demonstrating that you can learn the material and the skills and generally proving your math mettle: Can you study the books, do a sampling of the problems in the back of each chapter until you think you've mastered it, and then take the tests directly, without being signed up for a class? Maybe find old exams, perhaps from other institutions (surely someone somewhere has published an exam on each subject)? Or, for that matter, print out copies of old Putnam contests, set a timer, and see how well you do?
As someone who never entered college in the first place, I consider it a prosocial thing to make college degrees less correlated with competence. Don't add to the tragedy of that commons!
robo on Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom)
(Boring meta note) Since this is a post, not a comment, agreement karma votes and regular karma votes are conflated.
yair-halberstadt on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
I'm taking this article as being predicated on the assumption that AI drives humans to extinction. I.e. given that an AI has destroyed all human life, it will most likely also destroy almost all nature.
Which seems reasonable for most models of the sort of AI that kills all humans.
An exception could be an AI that kills all humans in self defense, because they might turn it off first, but sees no such threat in plants/animals.
owencb on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"Wait, how does the grabby aliens argument support this? I understand that it points to "the universe will be carved up between expansive spacefaring civilizations" (without reference to whether those are biological or not), and also to "the universe will cease to be a place where new biological civilizations can emerge" (without reference to what will happen to existing civilizations). But am I missing an inferential step?
johannes-c-mayer on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University
I have this description but it's not that good, because it's very unfocused. That's why I did not link it in the OP. The LessWrong dialog linked at the top of the post is probably the best thing in terms of describing the motivation and what the project is about at a high level.
stephen-fowler on Stephen Fowler's Shortform
That's a good point. You have pushed me towards thinking that this is an unreasonable statement and "predicted this problem at the time" is better.
thomas-kwa on robo's Shortform
As recently as early 2023, Eliezer was very pessimistic about AI policy efforts amounting to anything, to the point that he thought anyone trying to do AI policy was hopelessly naive and should first try to ban biological gain-of-function research just to understand how hard policy is. Given how influential Eliezer is, he loses a lot of points here (and I guess Hendrycks wins?)
Then Eliezer updated and started, e.g., giving podcast interviews. Policy orgs spun up, and there are now dozens of safety-concerned people working in AI policy. But this is not reflected on the LW frontpage. Is this inertia, or do we like thinking about computer science more than policy, or is it something else?