Posts

Comments

Comment by aditya-prasad-1 on [deleted post] 2024-12-10T12:58:31.806Z

I find most peop

 

would be nice if the transition was smoother here

Comment by Aditya Prasad (aditya-prasad-1) on We're already in AI takeoff · 2024-09-21T09:37:04.179Z · LW · GW

predictable

 

damn that hit me

Comment by Aditya Prasad (aditya-prasad-1) on The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables · 2022-10-07T18:20:43.882Z · LW · GW

human values are over the “true” values of the latents, not our estimates - e.g. I want other people to actually be happy, not just to look-to-me like they’re happy.

 

But this is not what our current value system is: we did not evolve such a pointer. Humans will be happy if their senses are deceived. The value system we actually have is defined over our estimates, and that is exactly why we can be manipulated; it is just that, until now, there was no intelligence trying to adversarially fool us. So the value function we need to instill is one we don't even have an existence proof of.
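One way to make the distinction concrete (this notation is mine, not the post's): let $X$ be the true latent state of the world, $s$ our sense data, $\hat{X}(s)$ our estimate of $X$, and $V$ our value function. The post says what we care about is

$$V(X),$$

but the machinery we actually evolved tracks

$$V(\hat{X}(s)),$$

so anything that can control $s$ adversarially can push $V(\hat{X}(s))$ up while $V(X)$ stays flat or falls.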

 

I found this post really useful for clarifying what the outer alignment problem actually is. Like others in the comments, I think we should give up some predictive power in exchange for the AI adopting our world model: there would still be a lot of value to unpack, and the predictive power would still be far beyond anything humans have seen so far. Maybe one day we can figure out how to align an AI that is allowed to form its own, more powerful world model.

Current methods seem to apply optimisation pressure to maximise predictive power, which will push the AI away from adopting human-like world models.

It seems to come down to how you traverse the ladder of abstraction when some of the things you value are useful beliefs rather than true ones.

Comment by aditya-prasad-1 on [deleted post] 2022-08-25T10:55:26.796Z

What is the source for the quote attributed to Douglas Hofstadter?