LessWrong 2.0 Reader
Thank you for your thoughts.
I often reflect that, in my attempts to model life on this planet from all that I have observed, experienced, read, and reflected on, it seems like there is a persistent "force" that is supporting life at ever greater levels of organization and complexity. The fields, circumstances, and conditions of this planet seem to give chances to any strategy for organizing on top of what has already been organized. Trillions of chances over billions of years, with almost as many failures. Almost.
I'm not the most science-y, but it seems that the conditions of this planet, its moon, its carbon, hydrogen, oxygen, its temperature ranges, put together single-celled organisms, then multi-cellular ones, then plants, dinosaurs, whales, sharks, etc. etc. etc., social species, hominids, hominids with the ability to join minds together psychoactively through shared language...
This is the prime or ultimate divine for me in the field of our earth. Why does life keep organizing itself here with more and more complexity?
Now, for human consciousness, society, culture, and mind to exist, there are definitely god-forms, spirits, and egregores that are symbiotic with human groups and populations. Or at least, this is the best story I can tell about the phenomenology I experience and observe as a complex human social primate, having been shaped by my genes, memes, and culture, and now co-creating, co-manifesting, and co-weaving this clusterfuck of meaning-driven, desire-driven, spirit-driven activities we are all doing and telling and living with each other across arcs of history and time and geography....
I appreciate this space where I can say these things without feeling insane or too paranoid. We cannot dissect or even observe our gods casually or lightly without putting our own minds and sanities at risk.
Let's use words, thoughts, and concepts like the magic they are. These are the tools and the bricks we shape our world from and with, across far greater arcs than our brief individual lives.
In the beginning was the word, and the word was with god, and the word was god. Now we have word in compute. Dear God, what have we done? Have we not domesticated ourselves into what will evolve on top of us as its host and platform?
Dear God.
habryka4 on Is there a place to find the most cited LW articles of all time?
We don't have a live count, but we have a one-time analysis from late 2023: https://www.lesswrong.com/posts/WYqixmisE6dQjHPT8/2022-and-all-time-posts-by-pingback-count [LW · GW]
My guess is not much has changed since then, so I think that's basically the answer.
keltan on Is there a place to find the most cited LW articles of all time?
That's an important point I neglected. I mean something like "the top LW post on the list would have the most links from other LW posts".
For example, I'd expect "More Dakka" to be high up on the list, since it is mentioned in LW posts quite often.
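As a minimal sketch of the kind of counting this would involve (the data shape here is hypothetical; a real analysis would pull each post's outgoing LW links from the site's data, as the linked pingback-count post did):

```python
# Minimal sketch of ranking posts by inbound-link ("pingback") count.
# Assumes we already have each post's outgoing links to other LW posts;
# the post slugs and data shape here are hypothetical.
from collections import Counter

outgoing_links = {
    "post-a": ["more-dakka", "post-b"],
    "post-b": ["more-dakka"],
    "more-dakka": [],
}

inbound = Counter(
    target
    for source, targets in outgoing_links.items()
    for target in targets
    if target != source  # ignore self-links
)

for post, count in inbound.most_common():
    print(f"{post}: {count} inbound links")
```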
t3t on Against "argument from overhang risk"
This seems to be arguing that the big labs are doing some obviously-inefficient R&D in terms of advancing capabilities, and that government intervention risks accidentally redirecting them towards much more effective R&D directions. I am skeptical.
- If such training runs are not dangerous then the AI safety group loses credibility.
- It could give a false sense of security when a different architecture requiring much less training appears and is much more dangerous than the largest LLM.
- It removes the chance to learn alignment and safety details from such large LLMs.
This seems non-responsive to arguments already in my post:
t3t on Against "argument from overhang risk"
If we institute a pause, we should expect to see (counterfactually) reduced R&D investment in improving hardware capabilities, reduced investment in scaling hardware production, reduced hardware production, reduced investment in research, reduced investment in supporting infrastructure, and fewer people entering the field.
We ran into a hardware shortage during a period of time where there was no pause, which is evidence that the hardware manufacturer was behaving conservatively. If they're behaving conservatively during a boom period like this, it's not crazy to think they might be even more conservative in terms of novel R&D investment & ramping up manufacturing capacity if they suddenly saw dramatically reduced demand from their largest customers.
For example, suppose we pause now for 3 years and during that time NVIDIA releases the RTX 5090, 6090, and 7090, which are produced using TSMC's 3nm, 2nm, and 10a processes.
This and the rest of your comment seem to have ignored the rest of my post (see: multiple inputs to progress, all of which seem sensitive to "demand" from e.g. AGI labs), so I'm not sure how to respond. Do you think NVIDIA's planning is totally decoupled from anticipated demand for their products? That seems kind of crazy, but that's the scenario you seem to be describing. Big labs are just going to continue to increase their willingness-to-spend along a smooth exponential for as long as the pause lasts? What if the pause lasts 10 years?
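To make the "smooth exponential" worry concrete, a back-of-the-envelope sketch (the starting spend and growth rate are illustrative assumptions, not claims about any lab's actual budget):

```python
# Back-of-the-envelope: what a "smooth exponential" in training spend implies.
# The $1B starting point and 2.5x/year growth rate are illustrative
# assumptions, not claims about any actual lab's budget.
initial_spend_usd = 1e9   # assumed annual training spend at pause start
growth_per_year = 2.5     # assumed spend multiplier per year

for years_paused in (3, 10):
    spend = initial_spend_usd * growth_per_year ** years_paused
    print(f"after {years_paused} years: ~${spend:,.0f}/year")

# A 10-year pause implies ~$9.5 trillion/year of willingness-to-spend for
# the trend to continue unbroken, which is why sustained demand from labs
# during a long pause seems hard to take as a given.
```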
If you think my model of how inputs to capabilities progress are sensitive to demand for those inputs from AGI labs is wrong, then please argue so directly, or explain how your proposed scenario is compatible with it.
jeffrey-heninger on Advice for Activists from the History of Environmentalism
Thank you!
The links to the report are now fixed.
The 4 blog posts cover most of the same ground as the report. The report goes into more detail, especially in sections 5 & 6.
ryan_greenblatt on Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
I wrote up some of my thoughts on Bengio's agenda here [LW(p) · GW(p)].
TLDR: I'm excited about work on trying to find any interpretable hypothesis which can be highly predictive on hard prediction tasks (e.g. next-token prediction).[1] From my understanding, the Bayesian aspect of this agenda doesn't add much value.
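As a rough sketch of what "highly predictive on hard prediction tasks" could mean operationally (this scoring setup is my own illustration, not part of the agenda itself; the hypothesis interface is hypothetical):

```python
import math

# Sketch: score a candidate (interpretable) hypothesis by average negative
# log-likelihood on next-token prediction, the same metric LLMs are trained
# on. The goal would be hypotheses that are both human-interpretable and
# competitive with the model's own loss.

def avg_nll(predict_probs, token_stream):
    """predict_probs(prefix) -> dict mapping next token -> probability."""
    total, n = 0.0, 0
    for i in range(1, len(token_stream)):
        prefix, actual = token_stream[:i], token_stream[i]
        p = predict_probs(prefix).get(actual, 1e-9)  # floor to avoid log(0)
        total += -math.log(p)
        n += 1
    return total / n

# A deliberately simple interpretable hypothesis: unigram frequencies.
def unigram_hypothesis(prefix):
    counts = {}
    for tok in prefix:
        counts[tok] = counts.get(tok, 0) + 1
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

tokens = "the cat sat on the mat the cat ran".split()
print(avg_nll(unigram_hypothesis, tokens))
# Most simple hypotheses will score far worse than a capable model; the
# hard part is finding interpretable ones that don't.
```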
I might collaborate with someone to write up a more detailed version of this view which engages in detail and is more clearly explained. (To make it easier to argue against and to exist as a more canonical reference.)
As for Davidad's agenda, I think the "manually build an (interpretable) infra-Bayesian world model which is sufficiently predictive of the world (as smart as our AI)" part is very likely to be totally unworkable even with vast amounts of AI labor. It's possible that something can be salvaged by retreating to a weaker approach. It seems like a roughly reasonable direction to explore as a possible hail mary in a world where we automate research using AIs, but if you're not optimistic about safely using vast amounts of AI labor to do AI safety work[2], you should discount accordingly.
For an objection along these lines, see this comment [LW(p) · GW(p)].
(The fact that we can be conservative with respect to the infra-Bayesian world model doesn't seem to buy much; most of the action is in getting something which is at all good at predicting the world. For instance, in Fabien's example, we would need the infra-Bayesian world model to be able to distinguish between zero-days and safe code regardless of conservativeness. If it didn't distinguish them, we'd never be able to run any code. This probably requires nearly as much intelligence as our AI has.)
Proof checking on this world model also seems likely to be unworkable, though I have less confidence in this view. And the more computationally intractable the infra-Bayesian world model is to run, the harder it is to check proofs against it. E.g., if running the world model on many inputs is intractable (as would seem to be the default for detailed simulations), I'm very skeptical about proving anything about what it predicts.
I'm not an expert on either agenda and it's plausible that this comment gets some important details wrong.
This could also be the reason behind the issue mentioned in footnote 5.
emrik-1 on quila's Shortform