LessWrong 2.0 Reader
Misc. prelim notes:
This work is very exciting to me, and I'm curious to hear the authors' thoughts on whether we could verify specific predictions made by this model in real models.
I have a more detailed write-up on model organisms of superposition here: https://docs.google.com/document/d/1hwI30HNNB2MkOrtEzo7hppG9X7Cn7Xm9a-1LBqcttWc/edit?usp=sharing
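To gesture at what I mean by a model organism of superposition, here's a minimal sketch in the style of Elhage et al.'s Toy Models of Superposition. The hyperparameters, loss, and training setup below are illustrative choices of mine, not taken from the write-up or from the post's model:

```python
# Minimal toy model of superposition (style of Elhage et al., 2022).
# All hyperparameters and the loss are illustrative, not from the write-up.
import torch

n_features, n_hidden = 20, 5          # more features than hidden dimensions
sparsity = 0.05                       # P(a given feature is active)

W = torch.nn.Parameter(0.1 * torch.randn(n_hidden, n_features))
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-3)

for step in range(10_000):
    # Sparse synthetic features: active with low probability, uniform size.
    active = (torch.rand(1024, n_features) < sparsity).float()
    x = active * torch.rand(1024, n_features)
    x_hat = torch.relu(x @ W.T @ W + b)  # compress to n_hidden dims, reconstruct
    loss = ((x - x_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Superposition shows up as non-zero off-diagonals in the feature
# interference matrix after training.
print(W.T @ W)
```

With features this sparse, the trained W typically packs more than n_hidden features into the hidden space, and the off-diagonal interference in W.T @ W is one concrete, checkable signature of the kind you could go looking for in real models.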
Would love to discuss this more!
daniel-kokotajlo on AI Regulation is Unsafe
Good point. I guess I was thinking in that case about people who care a bunch about a smaller group of humans, e.g. their family and friends.
jeffreycaruso on Exploring the Esoteric Pathways to AI Sentience (Part One)
Good list. I think I'd use a triangle to organize them: consciousness at the base, then sentience, then, drawing from your list, phenomenal consciousness, followed by intentionality?
javier-2 on Madrid ACX Meetup
We're at the wooden table with benches that seats 6-8 people.
mondsemmel on So What's Up With PUFAs Chemically?
In case you haven't seen it, you might like dynomight's recent post Thoughts on seed oil.
steve2152 on Spatial attention as a “tell” for empathetic simulation?
I think I would feel characteristic innate-fear-of-heights sensations (fear + tingly sensation for me, YMMV) if I were standing on an opaque bridge over a chasm, especially if the wood were cracking and about to break. Or if I were near the edge of a roof with no railings but couldn’t actually see down.
Neither of these claims is straightforward, rock-solid proof that what you said is wrong, because there’s a possible elaboration of what you said that starts with “looking down” as ground truth and then generalizes that ground truth via a pattern-matching / learning algorithm. But I still think that elaborated story doesn’t hang together when you work through it in detail, and that my “innate ‘center of spatial attention’ constantly darting around local 3D space” story is much better.
seth-herd on We are headed into an extreme compute overhang
The big question here, it seems like, is: does intelligence stack? Do a hundred thousand instances of GPT-4 working together make an intelligence as smart as GPT-7?
So far the answer seems to be no. There are some intelligence improvements from combining multiple calls in tree-of-thought-type setups, but not much. And those setups need carefully hand-structured algorithms; a sketch of the kind of setup I mean is below.
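For concreteness, here is a minimal sketch of that kind of scaffold. It isn't any particular paper's algorithm, just a small beam search over model-proposed reasoning steps; `call_llm`, the prompts, and the scoring scheme are all hypothetical stand-ins for whatever model API and heuristics you would actually use.

```python
# Sketch of a tree-of-thought-style scaffold: a small beam search over
# model-proposed reasoning steps. `call_llm` is a hypothetical stand-in
# for a real model API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug your model API in here")

def propose(state: str, k: int = 3) -> list[str]:
    # Ask the model for k candidate next steps from the current state.
    return [call_llm(f"Problem so far:\n{state}\n\nPropose the next step:")
            for _ in range(k)]

def score(state: str) -> float:
    # Ask the model to rate how promising a partial solution looks.
    reply = call_llm(f"Rate this partial solution from 0 to 10:\n{state}")
    try:
        return float(reply.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0

def tree_of_thought(problem: str, depth: int = 3, beam: int = 2) -> str:
    # Branch, score, prune, repeat; keep only the best `beam` candidates.
    frontier = [problem]
    for _ in range(depth):
        candidates = [s + "\n" + step
                      for s in frontier
                      for step in propose(s)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]
```

Note that the propose/score/prune structure is entirely hand-designed; the model contributes candidate steps and ratings, but no search strategy of its own, which is exactly the hand-structuring limitation above.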
So I think the limitation is in scaffolding techniques, not the sheer number of instances you can run. I do expect scaffolding LLMs into cognitive architectures to achieve human-level, fully general AGI, but how and when we get there is tricky to predict.
When we have that, I expect it to stack a lot like human organizations. They can do a lot more work at once, but they're not much smarter than a single individual because it's really hard to coordinate and stack all of that cognitive work.
gunnar_zarncke on Exploring the Esoteric Pathways to AI Sentience (Part One)
Sentience is one facet of consciousness, but it is not the only one and plausibly not the one responsible for "observe and compare", which requires high cognitive function. See my list of facets here:
https://www.lesswrong.com/posts/8szBqBMqGJApFFsew/gunnar_zarncke-s-shortform#W8XBDmjvbhzszEnrJ
seth-herd on Andrew Burns's Shortform
Are you saying that China will use Llama 3 400B weights as a basis for improving their research on LLMs? Or to make more tools from? Or to reach real AGI? Or what?