Posts

2+2: Ontological Framework 2022-02-01T01:07:00.397Z

Comments

Comment by Lyrialtus (lyrialtus) on LLM chatbots have ~half of the kinds of "consciousness" that humans believe in. Humans should avoid going crazy about that. · 2024-11-22T07:13:49.405Z · LW · GW

Are you asking for a p-zombie test? It should be theoretically possible, for any complex system and with appropriate tools, to tell what pattern recognition and word prediction are happening underneath, but I'm not sure it's possible to go beyond that.

Comment by Lyrialtus (lyrialtus) on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-03T22:23:35.945Z · LW · GW

Thank you for writing this. I usually struggle to find resonating thoughts, but this indeed resonates. Not all of it, but many key points prompt reflections that I'm going to share:

  • Biological immortality (radical life extension) without ASI (and reasonably soon) looks hard to achieve. It's a difficult topic, but for me even Michael Levin's talks are not inspiring enough. (I would prefer to become a substrate-independent mind, but, again, imagine all the R&D without substantial super-human help.)
  • I'm a rational egoist (more or less), so I want to see the future and have pretty much nothing to say about the world without me. Enjoying not being alone on the planet is just a personal preference. (I mean, the origin system is good, nice planets and stuff, but what if I want to GTFO?) Also, I don't trust imaginary agents (gods, evolution, future generations, AGIs), however creating some of them may be rational.
  • Let's say that early Yudkowsky influenced my transhumanist views. To be honest, I feel somewhat betrayed. Here my position is close to what Max More says: basically, I value the opportunities, even if I don't like all the risks.
  • I agree that AI progress is really hard to stop. The scaling leaves possible algorithmic breakthroughs underexplored. There is so much to be done, I believe. The tech world will still be working on it even with mediocre hardware. So we are going to ASI anyway.
  • And all the alignment plans... Well, yes, they tend to be questionable. For me, creating human-like agency in AI (to negotiate with) is more about capabilities, but that's a different story.

Comment by Lyrialtus (lyrialtus) on 2+2: Ontological Framework · 2022-02-02T07:28:51.865Z · LW · GW

Thank you for your feedback. I guess it's not too surprising that my post is more on the unreadable-nonsense side. I wish I could do better than that, of course. But "more detail" requires a lot more time and effort, and I'm not that invested in developing these ideas. I'm going to leave it as is, just a single post from an outsider.