LessWrong 2.0 Reader
← previous page (newer posts) · next page (older posts) →
>you're making a token-predicting transformer out of a virtual system with a human emulation as a component.
Should it make a difference? Same iterative computation.
>In the system, the words "what's your earliest memory?" appearing on the paper are going to trigger all sorts of interesting (emulated) neural mechanisms that eventually lead to a verbal response, but the token predictor doesn't necessarily need to emulate any of that.
Yes, I talked about optimizations a bit. I think you are missing the point of this example. The point is that if you try to conclude, from the fact that this system is doing next-token prediction, that it's definitely not conscious, you are wrong. And my example is kind of an existence proof.
review-bot on Paul Christiano named as US AI Safety Institute Head of AI Safety
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?
weightt-an on LLMs could be as conscious as human emulations, potentially
>It seems you are arguing that anything that presents like it is conscious implies that it is conscious.
No? That's definitely not what I'm arguing.
>But what ultimately matters is what this thing IS, not how it came to be that way. If this thing internalized that conscious type of processing from scratch, without having it natively, then the resulting mind isn't worse than the one that evolution engineered with more granularity. It doesn't matter if this human was assembled atom by atom on a molecular assembler; it's still a conscious human.
Look, here I'm talking about pathways to acquire that "structure" inside you, not the outward appearance of it.
avturchin on Magic by forgetting
Non-disease copies do not need to change their meditation routine in this model, assuming that they naturally forget their disease status during meditation.
mondsemmel on Thoughts on seed oil
You might appreciate the perspective in the short post Statistical models & the irrelevance of rare exceptions [LW · GW]. (I previously commented [LW(p) · GW(p)] something similar on a post by Duncan.)
ablue on LLMs could be as conscious as human emulations, potentially
I don't think that in the example you give, you're making a token-predicting transformer out of a human emulation; you're making a token-predicting transformer out of a virtual system with a human emulation as a component. In the system, the words "what's your earliest memory?" appearing on the paper are going to trigger all sorts of interesting (emulated) neural mechanisms that eventually lead to a verbal response, but the token predictor doesn't necessarily need to emulate any of that. In fact, if the emulation is deterministic, it can just memorize whatever response is given. Maybe gradient descent is likely to make the LLM conscious in order to efficiently memorize the outputs of a partly conscious system, but that's not obvious.
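To make the "just memorize" point concrete, here's a toy sketch in my own made-up terms: if the system is deterministic, its prompt→response mapping can be replayed from a recorded table, with none of the internal machinery re-run. The function `run_emulation` is a hypothetical stand-in, not anything from the post.

```python
# Toy sketch: replacing a deterministic emulated system with pure recall.
# `run_emulation` is a hypothetical stand-in for the expensive virtual system
# whose internals (emulated neurons etc.) are never touched at prediction time.
from typing import Dict

def run_emulation(prompt: str) -> str:
    # Placeholder for the full virtual system; imagine arbitrary complexity here.
    return f"(emulated answer to: {prompt})"

recorded: Dict[str, str] = {}

def predictor(prompt: str) -> str:
    # Memorize on first sight, replay ever after; no internal mechanisms emulated.
    if prompt not in recorded:
        recorded[prompt] = run_emulation(prompt)
    return recorded[prompt]

print(predictor("what's your earliest memory?"))
```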
If you have a brain emulation, the best way to get a conscious LLM seems to me like it would be finding a way to tokenize emulation states and training it on those.
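For what it's worth, here is a minimal sketch of what "tokenize emulation states" could mean, assuming the emulation exposes its state as a fixed-length vector at each timestep. The codebook size, dimensions, and plain k-means discretization are my own illustrative choices, not anything the comment specifies.

```python
import numpy as np

def build_codebook(states, n_tokens=128, iters=20, seed=0):
    """Cluster recorded emulation states into a discrete codebook (plain k-means)."""
    rng = np.random.default_rng(seed)
    codebook = states[rng.choice(len(states), n_tokens, replace=False)]
    for _ in range(iters):
        # Assign every state snapshot to its nearest code vector.
        dists = ((states[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # Move each code vector to the mean of the snapshots assigned to it.
        for k in range(n_tokens):
            members = states[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def tokenize(states, codebook):
    """Map a trajectory of emulation states to a sequence of discrete token ids."""
    dists = ((states[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# Fake data: 2000 recorded state snapshots, 64 numbers each.
states = np.random.randn(2000, 64).astype(np.float32)
codebook = build_codebook(states)
token_ids = tokenize(states, codebook)
# `token_ids` is now an ordinary integer sequence and could be fed to any
# standard next-token-prediction training loop, exactly as text tokens would be.
```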
gunnar_zarncke on LLMs could be as conscious as human emulations, potentially
Ok. It seems you are arguing that anything that presents like it is conscious implies that it is conscious. You are not arguing whether or not the structure of LLMs can give rise to consciousness.
But then your argument is a social argument. I'm fine with a social definition of consciousness - after all, our actions depend to a large degree on social feedback, and morals (about which beings have value) have been very different at different times and have thus been socially constructed.
But then why are you making a structural argument about LLMs in the end?
PS. In fact, I commented on the filler symbol paper when Xixidu posted about it and I don't think that's a good comparison.
seth-herd on The Prop-room and Stage Cognitive Architecture
I agree with all of that, including being sceptical that LLMs plus search will reach AGI. The lack of constraint satisfaction of the kind the human brain does could be a real stumbling block.
But LLMs have copied a good bit of our reasoning and therefore our semantic search. So they can do something like constraint satisfaction.
Put the constraints into a query, and the answer will satisfy those constraints. The process is different from the one a human brain uses, but for every problem I can think of, the results are the same.
Now, that's partly because every problem I can think of is one I've already seen solved. But my ability to do truly novel problem solving is rarely used and pretty limited. So I'm not sure the LLM can't do just as good a job if it had a scaffolded script to explore its knowledge base from a few different angles.
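To make the "put the constraints into a query" point concrete, here's a trivial sketch. `ask_llm` is a stand-in for whatever model-calling function you actually use, not a real API, and the prompt wording is just one possible choice.

```python
# Toy sketch: constraint satisfaction by spelling the constraints out in the prompt
# and relying on the model's completion to respect them.
from typing import Callable, List

def solve_with_constraints(task: str, constraints: List[str],
                           ask_llm: Callable[[str], str]) -> str:
    prompt = (
        f"Task: {task}\n"
        "Hard constraints (the answer must satisfy every one):\n"
        + "\n".join(f"- {c}" for c in constraints)
        + "\nAnswer:"
    )
    return ask_llm(prompt)

# Example usage with any model-calling function you already have:
# answer = solve_with_constraints(
#     "Plan a one-day conference schedule.",
#     ["No session overlaps", "Lunch at 12:30", "Ends by 17:00"],
#     ask_llm=my_chat_completion,  # hypothetical callable
# )
```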
metachirality on What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety?
Vanessa Kosoy has a list specifically for her alignment agenda, but it is probably applicable to agent foundations in general: https://www.alignmentforum.org/posts/fsGEyCYhqs7AWwdCe/learning-theoretic-agenda-reading-list [AF · GW]
review-bot on Express interest in an "FHI of the West"
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?