Posts

Utilitarianism is the only option 2022-12-03T17:14:19.532Z
A stubborn unbeliever finally gets the depth of the AI alignment problem 2022-10-13T15:16:28.644Z
A hierarchy of truth 2020-05-11T09:23:56.181Z
The ways of knowing 2020-05-05T20:35:11.445Z

Comments

Comment by aelwood on A stubborn unbeliever finally gets the depth of the AI alignment problem · 2022-10-14T12:00:49.918Z · LW · GW

This is a great comment, but you don't need to worry that I'll be indoctrinated! 

I was actually using that terminology a bit tongue-in-cheek, since I notice exactly what you describe about the religious fervour of some AI alignment proponents. The general attitude and vibe of Yudkowsky et al. is one of the main reasons I was suspicious of their arguments for AI takeoff in the first place.

Comment by aelwood on A stubborn unbeliever finally gets the depth of the AI alignment problem · 2022-10-13T21:57:39.950Z · LW · GW

I actually agree that an AGI will likely at least start out thinking in a way roughly similar to a human, but that in the end it will still be very difficult to align. I also really recommend you check out Understand by Ted Chiang, which plays out essentially the scenario you mentioned: a normal guy gains superhuman intelligence and chaos ensues.

Comment by aelwood on A stubborn unbeliever finally gets the depth of the AI alignment problem · 2022-10-13T21:47:25.716Z · LW · GW

Thanks for the comment. I'll read more on the distinction between inner and outer alignment; that sounds interesting.

> I don't think you would need to get anywhere near perfect simulation in order to begin to have extremely good predictive power over the world. We're already seeing this in graphics and physics modeling.

I think this is a good point, although those are cases where lots of data is available. I guess any case in which you don't have the data ready would still pose more difficulties. Off the top of my head I don't know how limiting this would be in practice, but I'd expect it to be limiting in a lot of cases.