Comments

Comment by Isopropylpod on AI-enabled coups: a small group could use AI to seize power · 2025-04-17T03:10:13.669Z · LW · GW

Your 'cynical world' is just doing a coup before someone else does.

Comment by Isopropylpod on Why Does It Feel Like Something? An Evolutionary Path to Subjectivity · 2025-04-15T19:09:56.221Z · LW · GW

I largely agree with other comments - this post discusses the easy problem much more than the hard one, and never really makes any statement on why the things it describes lead to qualia. It's great to know what in the brain is doing it, but why does *doing it* cause me to exist?

Additionally, I'm not sure if it was, but this post gives strong written-by-LLM 'vibes': mainly the constant 'Hook - question' headers, as well as the damning "Let's refine, critique, or dismantle this model through rigorous discussion." at the end. I get the idea a human prompted this post out of some model; given the style, I'd guess 4o?

Comment by Isopropylpod on The Pando Problem: Rethinking AI Individuality · 2025-04-02T04:02:20.623Z · LW · GW

(Other than the thoughts on the consequences of said idea) this largely seems like a rehash of https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators (and frankly, so does the three-layer model, but that does go into more mechanistic territory and I think it complements simulator theory well).

Comment by Isopropylpod on Third-wave AI safety needs sociopolitical thinking · 2025-03-28T00:58:27.423Z · LW · GW

https://www.theverge.com/news/618109/grok-blocked-elon-musk-trump-misinformation

https://www.businessinsider.com/grok-3-censor-musk-trump-misinformation-xai-openai-2025-2?op=1

The explanation that it was done by "a new hire" is a classic and easy scapegoat. It's much more straightforward to believe that Musk himself wanted this done, and walked it back when it became clear it was more obvious than intended.

Comment by Isopropylpod on Third-wave AI safety needs sociopolitical thinking · 2025-03-27T21:45:25.959Z · LW · GW

> So how do you prevent that? Well, if you're Elon or somebody who thinks similarly, you try and prevent it using decentralization. You’re like: man, we really don't want AI to be concentrated in the hands of a few people or to be concentrated in the hands of a few AIs. (I think both of these are kind of agnostic as to whether it's humans or AIs who are the misaligned agents, if you will.) And this is kind of the platform that Republicans now (and West Coast elites) are running on. It's this decentralization, freedom, AI safety via openness. Elon wants xAI to produce a maximally truth-seeking AI, really decentralizing control over information.

No offense, but how do you not realize that they are just straight-up lying to you? You're clearly right-wing yourself, so it may be hopeless trying to get you to see through the obvious lies here, but you do realize that Musk and the other 'west coast elites' are exactly the ones who benefit from centralization, right? There is evidence, literal written evidence, of Musk trying to censor Grok from saying bad things about him. What the hell makes you think he wants a "maximally truth-seeking AI"? You are drinking the kool-aid without question.

What happened to rationality? How did it get co-opted by the wealthy right so quickly? What is so damn enticing about these insane libertarian politics as to make people completely drop their defenses and believe these increasingly untrustworthy people so readily?

EDIT: This comment on the EA forum does a great job of expressing a lot of what I mean (in fact, all of the comments there seem to be weirdly sane for rationalist spaces I've been in lately): https://forum.effectivealtruism.org/posts/vbdvkPXozfzTRB7FC/third-wave-ai-safety-needs-sociopolitical-thinking?commentId=ZcexbTpqpgBycnxxi

Comment by Isopropylpod on Go home GPT-4o, you’re drunk: emergent misalignment as lowered inhibitions · 2025-03-18T19:23:32.692Z · LW · GW

I think you might've gotten a bit too lost in the theory and theatrics of the model having a "superego". It's been known for a while now that fine-tuning instruct- or chat-tuned models tends to degrade performance and instruction following - pretty much every local LLM tuned for "storytelling" or other specialized tasks gets worse (sometimes a lot worse) at most benchmarks. This is a simple case of catastrophic forgetting (not very catastrophic, in this case), standard neural network behavior.
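
For intuition, here's a minimal toy sketch (my own illustration, not anyone's actual training setup; the architecture and numbers are made up) of catastrophic forgetting: train a small network on one task, fine-tune it on a second task with no rehearsal of the first, and accuracy on the first task collapses toward chance.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(weight):
    # Toy binary classification: the label is the sign of a fixed linear rule.
    X = torch.randn(2000, 20)
    y = (X @ weight > 0).float().unsqueeze(1)
    return X, y

def accuracy(model, X, y):
    with torch.no_grad():
        return ((model(X) > 0).float() == y).float().mean().item()

def train(model, X, y, steps=300, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))

# "Pretrain" on task A, then "fine-tune" on task B, which uses a different rule.
Xa, ya = make_task(torch.randn(20))
Xb, yb = make_task(torch.randn(20))

train(model, Xa, ya)
print(f"task A accuracy after pretraining:      {accuracy(model, Xa, ya):.2f}")

train(model, Xb, yb)  # no rehearsal of task A during fine-tuning
print(f"task A accuracy after fine-tuning on B: {accuracy(model, Xa, ya):.2f}")
print(f"task B accuracy after fine-tuning on B: {accuracy(model, Xb, yb):.2f}")
```

The same dynamic, scaled up, is what specialized fine-tunes do to an instruct model's benchmark scores.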

Comment by Isopropylpod on AI Control May Increase Existential Risk · 2025-03-14T20:30:57.832Z · LW · GW

I agree with the statement (that AI control increases risk), but more so because I believe that the people currently in control of frontier AI development are, themselves, deeply misaligned against the interests of humanity overall. I often see here that there is little consideration of what goals the AI would actually be aligned to.

Comment by Isopropylpod on It's time for a self-reproducing machine · 2024-08-11T01:43:55.740Z · LW · GW

I do not intend to be rude by saying this, but I firmly believe you vastly overestimate how capable modern VLMs are, and how capable LLMs are at performing tasks in a list, breaking down tasks into sub-tasks, and knowing when they've completed a task. AutoGPT and its equivalents have not gotten significantly more capable since they first arose a year or two ago, despite new LLMs' ability to call functions (which they have always been able to do with the slightest in-context reasoning). It is unlikely they will ever get better until a more linear, reward-loop, agent-focused learning pipeline is developed for them and a significant amount of resources is dedicated to training new models with higher causal comprehension.
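
To be concrete about the kind of loop I mean, here's a minimal sketch of the AutoGPT-style pipeline (the `call_llm` stub is hypothetical, standing in for whatever chat-completion client you'd use; this illustrates the architecture, not any project's actual code):

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub: plug in any chat-completion client here.
    raise NotImplementedError

def agent_loop(goal: str, max_steps: int = 10) -> list[str]:
    log: list[str] = []
    # Step 1: ask the model to decompose the goal into subtasks.
    plan = call_llm(f"Break this goal into subtasks, one per line: {goal}")
    for task in plan.splitlines()[:max_steps]:
        # Step 2: ask the model to perform each subtask.
        log.append(call_llm(f"Perform this subtask and report the outcome: {task}"))
        # Step 3: ask the model whether the overall goal is complete.
        verdict = call_llm(
            f"Goal: {goal}\nProgress so far: {log}\n"
            "Answer DONE if the goal is complete, otherwise CONTINUE."
        )
        if verdict.strip().upper().startswith("DONE"):
            break
    return log
```

The weak links are steps 1 and 3 - the decomposition and the done-ness judgment - and function calling improves neither.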