LessWrong 2.0 Reader
>It seems you are arguing that anything that presents like it is conscious implies that it is conscious.
No? That's definitely not what I'm arguing.
>But what ultimately matters is what this thing IS, not how it became in that way. If, this thing internalized that conscious type of processing from scratch, without having it natively, then resulting mind isn't worse than the one that evolution engineered with more granularity. Doesn't matter if this human was assembled atom by atom on molecular assembler, it's still a conscious human.
Look, here I'm talking about the pathways by which that "structure" is acquired, not its outward appearance.
avturchin on Magic by forgetting
Non-disease copies do not need to perform any changes in their meditation routine in this model, assuming that they naturally forget their disease status during meditation.
mondsemmel on Thoughts on seed oil
You might appreciate the perspective in the short post Statistical models & the irrelevance of rare exceptions [LW · GW]. (I previously commented [LW(p) · GW(p)] something similar on a post by Duncan.)
ablue on LLMs could be as conscious as human emulations, potentially
I don't think that in the example you give, you're making a token-predicting transformer out of a human emulation; you're making a token-predicting transformer out of a virtual system with a human emulation as a component. In the system, the words "what's your earliest memory?" appearing on the paper are going to trigger all sorts of interesting (emulated) neural mechanisms that eventually lead to a verbal response, but the token predictor doesn't necessarily need to emulate any of that. In fact, if the emulation is deterministic, it can just memorize whatever response is given. Maybe gradient descent is likely to make the LLM conscious in order to efficiently memorize the outputs of a partly conscious system, but that's not obvious.
If you have a brain emulation, the best way to get a conscious LLM seems to me like it would be finding a way to tokenize emulation states and training it on those.
gunnar_zarncke on LLMs could be as conscious as human emulations, potentially
Ok. It seems you are arguing that anything that presents like it is conscious implies that it is conscious. You are not arguing whether or not the structure of LLMs can give rise to consciousness.
But then your argument is a social argument. I'm fine with a social definition of consciousness - after all, our actions depend to a large degree on social feedback, and morals (about which beings have value) have been very different at different times and have thus been socially constructed.
But then why are you making a structural argument about LLMs in the end?
PS. In fact, I commented on the filler symbol paper when Xixidu posted about it and I don't think that's a good comparison.
seth-herd on The Prop-room and Stage Cognitive Architecture
I agree with all of that, even being sceptical that LLMs plus search will reach AGI. The lack of constraint satisfaction as the human brain does it could be a real stumbling block.
But LLMs have copied a good bit of our reasoning and therefore our semantic search. So they can do something like constraint satisfaction.
Put the constraints into a query, and the answer will satisfy those constraints. The process used is different from a human brain's, but for every problem I can think of, the results are the same.
Now, that's partly because every problem I can think of is one I've already seen solved. But my ability to do truly novel problem solving is rarely used and pretty limited. So I'm not sure the LLM couldn't do just as good a job if it had a scaffolded script to explore its knowledge base from a few different angles.
metachirality on What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety?
Vanessa Kosoy has a reading list specifically for her alignment agenda, but it is probably applicable to agent foundations in general: https://www.alignmentforum.org/posts/fsGEyCYhqs7AWwdCe/learning-theoretic-agenda-reading-list [AF · GW]
review-bot on Express interest in an "FHI of the West"
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?
gwern on Andrew Burns's Shortform
Altman made a Twitter-edit joke about 'gpt-2 i mean gpt2', so at this point, I think it's just a funny troll-name related to the 'v2 personality', which makes it a successor to the ChatGPT 'v1' personality, presumably. See, it's gpt v2, geddit, not gpt-2? Very funny, everyone lol at troll.
drake-morrison on What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety?
If you can code, build a small AI with the fast.ai course. This will (hopefully) be fun while also showing you particular holes in your knowledge to improve, rather than a vague feeling of "learn more".
If you want to follow along with more technical papers, you need to know the math of machine learning: linear algebra, multivariable calculus, and probability theory. For Agent Foundations work, you'll need more logic and set theory type stuff.
MIRI has some recommendations for textbooks here. There's also the Study Guide [LW · GW] and this sequence on leveling up [? · GW].
3blue1brown's YouTube channel has good videos for a lot of this, if that's the medium you like.
If you like non-standard fiction, some people like Project Lawful.
At the end of the day, it's not a super well-defined field that has clear on-ramps into the deeper ends. You just gotta start somewhere, and follow your curiosity. Have fun!