Comments

Comment by Jeremy Bentham (jeremy-bentham) on The Orthogonality Thesis is Not Obviously True · 2023-04-06T10:48:14.897Z

I am sympathetic to this argument, though I have less credence than you in moral realism (I still assign it the most credence of any meta-ethical theory and think it’s what we should act on). My main worry is that an AI system won’t have access to the moral facts, because it won’t be able to experience pleasure and suffering at all. And like you, I’m not fully confident in moral realism or the realist’s wager, which means that even if an AI system were sentient, there’s still a risk that it would be amoral.

Comment by jeremy-bentham on [deleted post] 2023-02-08T16:57:21.598Z

Yes, many of us do. But a good utilitarian should adhere to commonly accepted negative duties, even when they are really confident that they’re right in a particular instance. There are numerous reasons why this is the case. As Eliezer said in one of his posts: if you cannot spontaneously, and unprompted, see all those other (so-called second-order) considerations in the form of "consequences" and "utilities", then you shouldn’t be a consequentialist or a utilitarian. They aren’t merely side constraints: they’re real consequences that you are likely, and predictably, failing to take into consideration.