Comments

Comment by Jeff White (jeff-white) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-30T02:52:02.611Z

I have been working on value alignment from the perspective of systems neurology, and especially adolescent development, for many years, roughly in parallel with the ongoing discussions here but framed in terms of moral isomorphisms, autonomy, and related concepts. Here is a brief paper, from a presentation at the 2023 Embodied Intelligence conference, on the development of purpose in life and spindle neurons in the context of self-association with religious ideals, the kind we might want a religious robot to pursue while disregarding corrupting social influences and misaligned human instruction: https://philpapers.org/rec/WHIAAA-8  I think this is the sort of fundamental advance that is necessary.

Comment by Jeff White (jeff-white) on [deleted post] · 2023-04-06T10:56:50.790Z

The best argument that we are in a simulation is, I think, my own: https://link.springer.com/article/10.1007/s00146-015-0620-9