LessWrong 2.0 Reader
To answer the question you pose as a clarification in your comment [LW · GW] - whether there could be structures analogous to intelligence without being literally biological - the simple answer is 'yes'.
What we call 'consciousness' is not a 'neutral' lens - and there is no issue with imagining and understanding that there could be types of 'consciousness' that are shaped by very different processes than our own.
Personally I want to be part of a conscious universe, where there is communication going in all directions, and there is a shared goal and purpose. Though, since the structures might be so different, even reaching the point where they can differentiate themselves, let alone communicate anywhere close to effectively, won't be easy. Considering how hard it is for us to understand ourselves - that is, the signals from our cells, bacteria, and viruses - it might not be much easier for, say, the Earth to communicate with us.
Ideas/theories that are similar:
Panpsychism; another idea/theory that might also fit would be Analytical Idealism.
A theory that explores this in a much more general way, looking at it from the perspective of values and paradigms, would be Spiral Dynamics.
I also don't see anything wrong with going in this direction as an exploration. Complexity theory and the study of emergence rightly point out that there is much more to our reality, even to biology, than meets the eye.
Is there an o3 update yet?
knight-lee on Power Lies Trembling: a three-book review
:) thank you so much for your thoughts.
Unfortunately, my model of the world is that if AI kills "more than 10%," it's probably going to be everyone and everything, so the insurance won't work according to my beliefs.
I only defined AI catastrophe as "killing more than 10%" because it's what the survey by Karger et al. asked the participants.
I don't believe in option 2, because if you asked people to bet against AI risk with unfavourable odds, they probably wouldn't feel too confident betting against AI risk.
daniel-kokotajlo on AI 2027: What Superintelligence Looks Like
That's part of it, but also, over the course of 2027 OpenBrain works hard to optimize for data-efficiency, generalization and transfer learning ability, etc. and undergoes at least two major paradigm shifts in AI architecture.
michaeldickens on What Makes an AI Startup "Net Positive" for Safety?
I think the statement in the parent comment is too general. What I should have said is that every generalist frontier AI company has been net negative. Narrow AI companies that provide useful services and have ~zero chance of accelerating AGI are probably net positive.
lc on Three Months In, Evaluating Three Rationalist Cases for Trump
The indexes above seem to be concerned only with state restrictions on speech. But even if they weren't, I would be surprised if the private situation was any better in the UK than it is here.
gurkenglas on What Makes an AI Startup "Net Positive" for Safety?
They did the opposite, incentivizing themselves to reach the profit cap. I'm talking about making sure that any net worth beyond a billion goes to someone else.
chris_leong on Chris_Leong's Shortform
I believe those are useful frames for understanding the impacts.
jay95 on Consequentialists should have a comprehensive set of deontological beliefs they adhere to
It is, but I'm specifically saying a form of rule consequentialism that serves personal happiness about as well as it could be served is in fact rational (for anyone who is trying to maximize impersonal happiness and probably for anyone who is a consequentialist of any kind).
cubefox on jenn's Shortform
i kinda thought that ey's anti-philosophy stance was a bit extreme but this is blackpilling me pretty hard lmao
He actually cites reflective equilibrium here [? · GW]:
Closest antecedents in academic metaethics are Rawls and Goodman's reflective equilibrium, Harsanyi and Railton's ideal advisor theories, and Frank Jackson's moral functionalism.