Utilitarianism and the idea of a "rational agent" are fundamentally inconsistent with reality

post by banev · 2022-11-16T00:19:14.649Z · LW · GW · 1 comments

The currently unsolvable problem with the ethical branch of consequentialism and its subtype utilitarianism, the foundation on which the whole concept of effective altruism and the theory of rational agents are built, is that the world is an extremely complex tangle of interwoven systems, so you cannot predict the consequences of any change even a few steps ahead. Not only can you not model these systems adequately; any measurement error, however small, is enough to make even the 0/1 (true/false) state of the systems and their parts unpredictable after very few steps. To assume that you can estimate the maximal utility of any action, by any criterion, over any considerable period of time is an overconfidence characteristic of many of those who consider themselves rationalists.
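
To make the claim about measurement error concrete, here is a minimal sketch using the logistic map as a stand-in chaotic system (my own toy example, not one from the post). The initial error of 10^-9 roughly doubles on each step, so within a few dozen iterations the "true" and "measured" trajectories disagree even about the coarse 0/1 question of whether the state is above 0.5.

```python
# Minimal sketch: sensitivity to measurement error in a chaotic system.
# The logistic map x -> r*x*(1-x) with r = 4 is a toy stand-in for the
# "tangle of interwoven systems" described above.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_true = 0.6          # the "true" state of the system
x_meas = 0.6 + 1e-9   # the same state with a tiny measurement error

for step in range(1, 41):
    x_true = logistic(x_true)
    x_meas = logistic(x_meas)
    # Check whether the two trajectories still agree on a coarse 0/1
    # question: "is the state above 0.5?"
    if (x_true > 0.5) != (x_meas > 0.5):
        print(f"step {step}: predictions diverge even at the 0/1 level "
              f"(true={x_true:.3f}, predicted={x_meas:.3f})")
        break
else:
    print("trajectories still agree after 40 steps")
```

In this toy case the error grows by a factor of roughly two per step, so even a 10^-9 measurement error exhausts its usefulness after about 30 steps; the point is only illustrative of how fast predictive power can decay in a chaotic system.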

This is not to say that one should do nothing. It is to say that we need to act, and to plan our actions, on principles other than consequentialism or utilitarianism.

And to be a little more humble. Less wrong, you know. 

1 comment

comment by TAG · 2022-12-15T16:04:43.787Z · LW(p) · GW(p)

That's a well-known problem, although Rationalists might not be taking enough notice of it.