Rawls's Veil of Ignorance Doesn't Make Any Sense

post by Arjun Panickssery (arjun-panickssery) · 2024-02-24T13:18:46.802Z · LW · GW · 9 comments

John Rawls proposes the thought experiment of an "original position" (OP) in which people choose the political system of a society from behind a "veil of ignorance" that deprives them of certain knowledge about themselves (their social position, talents, conception of the good, and so on). Rawls's veil of ignorance doesn't justify the kind of society he supports.

It seems to fail at every step individually:

  1. At best, the agreement of people in the OP provides a necessary but probably insufficient condition for justice, unless Rawls can refute all the other proposed conditions of justice involving rights, desert, etc.
  2. And really the conditions of the OP are actively contrary to good decision-making. For example, in the OP you don't know your particular conception of the good (??) and you're essentially self-interested…
  3. There's no reason to think, generally, that people disagree with John Rawls only because of their social position or psychological quirks.
  4. There's no reason to think, specifically, that people would have the literally infinite risk aversion required to support the maximin principle (see the sketch after this list).
  5. Even given everything, the best social setup could easily be optimized for the long-term (in consideration of future people) in a way that makes it very different (e.g. harsher for the poor living today) from the kind of egalitarian society I understand Rawls to support.
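
To make point 4 concrete, here is a minimal sketch in Python (the societies and all utility numbers are invented purely for illustration): behind the veil you face a lottery over social positions, and maximin ranks societies only by their worst-off position, ignoring arbitrarily large gains everywhere else.

```python
# Hypothetical societies, each a list of utilities for its social positions.
# All numbers are invented purely for illustration.
societies = {
    "egalitarian": [10, 10, 10, 10],
    "growth": [9, 50, 50, 50],  # worst-off slightly worse; everyone else far better
}

def maximin(payoffs):
    """Rawls's rule: rank a society by its worst-off position."""
    return min(payoffs)

def expected_utility(payoffs):
    """Equal chance of occupying each position behind the veil."""
    return sum(payoffs) / len(payoffs)

for name, payoffs in societies.items():
    print(f"{name}: maximin={maximin(payoffs)}, expected={expected_utility(payoffs)}")
# egalitarian: maximin=10, expected=10.0
# growth: maximin=9, expected=39.75
```

Maximin picks "egalitarian" (10 > 9) even though "growth" has almost four times the expected utility; an agent must weight the worst case above any aggregate of gains elsewhere, i.e. be infinitely risk averse, to choose that way.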

More concretely:

Another frame is that his argument involves a bunch of provisions that seem designed to avoid common counterarguments but are otherwise arbitrary (utility monsters, utilitarianism, etc).

9 comments

Comments sorted by top scores.

comment by StartAtTheEnd · 2024-02-24T23:15:29.308Z · LW(p) · GW(p)

I think it just forces people to choose a policy which is best for the whole of society rather than just a subset of it (as people tend to choose policies which benefit whatever subset they're part of).

If you're X kind of person you might want human rights for all X. By applying the veil of ignorance, you'd have to argue "Human rights should extend to all groups, even those I now consider to be bad people" (i.e. for all X), which is actually how human rights currently work (and isn't that what makes them good?)

It's simply neutrality and equality under the law. The act of making a policy which is objective rather than subjective. It's essentially the opposite of assuming that the majority is always correct, letting them dominate and bully the minorities, and calling this process "fair" or "democracy".

It's easy for the majority to say "We're correct and whoever disagrees is a terrible person", or for a minority to say "We're being treated unfairly because the majority is evil". By not knowing which group you will belong to, you're forced to come up with a policy which considers a scope large enough to be a superset of both groups, for instance "We will decide what's correct through the scientific method, and let everyone have a voice".

I think it works well for what it does (creating a fair, universal set of rules). It's not perfect, but I don't think a more perfect method is possible in reality. Maybe the idea generalizes poorly, or maybe most people are incapable of applying the method? I'm not sure; I can't understand your arguments very well, so I'm just communicating my own intuition.

But (A) is possibly true, and (B) would be true until the information is updated. Would I buy lottery tickets for $20 and sell them at $100 before knowing if they were winning ones? Of course; this is the superior strategy every time. Would I sell a winning lottery ticket for less than the prize? I would not; that is a losing strategy. I don't think this conflicts with the above intuition about fairness; it's a separate and somewhat unintuitive math problem in my eyes.
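
A minimal numeric sketch of that lottery point (the prize amount and win probability are invented for illustration):

```python
# Invented numbers: a $1,000 prize with a 1-in-50 chance of winning.
prize, p_win = 1_000, 1 / 50
ticket_ev = p_win * prize  # expected value before the draw: $20.0

buy_price, sell_price = 20, 100

# Before the draw every ticket is identical, so selling at $100 is a
# sure +$80 per ticket, whatever the draw later reveals.
profit_before_draw = sell_price - buy_price  # 80

# After the draw, a ticket known to be a winner is worth the full prize;
# selling it for $100 forfeits the difference.
loss_selling_known_winner = prize - sell_price  # 900

print(ticket_ev, profit_before_draw, loss_selling_known_winner)  # 20.0 80 900
```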

comment by TAG · 2024-02-27T17:57:34.846Z · LW(p) · GW(p)

And really the conditions of the OP are actively contrary to good decision-making, e.g. that you don't know your particular conception of the good (??) or that you're essentially self-interested…

Well, they're inimical to good personal self-interested decision-making, but why would that matter? Do you think justice and self-interested rationality are the same? If they are different, what's the problem? Rawls's theory would not necessarily predict the behaviour of a self-interested agent, but it's not supposed to. It's a normative theory: acting justly is how people should behave, not how they invariably do. If they have their own theories of ethics, well, those are theories and not necessarily correct. Mere disagreement between the front-of-the-veil and behind-the-veil versions of a person doesn't tell you much.

There's no reason to think, generally, that people disagree with John Rawls only because of their social position or psychological quirks.

They might have a well-constructed case against him; he might have a well-constructed case against them.

comment by Dagon · 2024-02-25T05:18:43.236Z · LW(p) · GW(p)

His argument is also founded in dualism: the idea that there can exist a preference, or consistent preference-haver, outside of specific embodied individuals. There is no agent who can be ignorant in the way he proposes.

Beliefs and preferences are contingent on specific existence.  If you're a different person, you have different beliefs and preferences.

comment by Oskar Mathiasen (oskar-mathiasen) · 2024-02-26T10:58:51.369Z · LW(p) · GW(p)

You might be interested in John Harsanyi on this topic.
He argues that the conclusion reached in the original position is (average) utilitarianism.
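
Harsanyi's argument can be put in one line: if behind the veil you have an equal chance of being any person, then maximizing expected utility is the same as maximizing average utility. A minimal sketch (utility numbers invented):

```python
from fractions import Fraction

def veil_value(utilities):
    """Expected utility of an equal-probability lottery over lives."""
    n = len(utilities)
    return sum(Fraction(1, n) * u for u in utilities)

lives = [3, 7, 20]  # invented utility numbers for three lives
assert veil_value(lives) == Fraction(sum(lives), len(lives))
print(veil_value(lives))  # 10, exactly the average utility
```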

I agree that behind the veil one shouldn't know the time (and thus can't care differently about current vs. future humans). This actually causes further problems for Rawls's conception when you project back in time: what if the worst life that will ever be lived has already been lived? Then the maximin principle gives no guidance at all, and under uncertainty it recommends putting all effort into preventing a new minimum from being set.

comment by alex.herwix · 2024-02-24T14:35:19.320Z · LW(p) · GW(p)

I downvoted this post because the whole setup is strawmanning Rawls's work. To claim that a highly recognized philosophical treatment of justice, one that has inspired countless discussions and professional philosophers, doesn't "make any sense" is an extraordinary claim that should ideally be backed by a detailed argument and evidence. However, to me the post seems handwavy and more like armchair philosophizing than detailed engagement. Don't get me wrong, feel free to do that, but please make clear that this is what you are doing.

Regarding your claim that the veil of ignorance doesn't map to decision-making in reality: that's obvious. But that's also not the point of the thought experiment. It's about how to approach the ideal of justice, not how to ultimately implement it in our non-ideal world. One can debate the merits of talking and thinking about ideals, but calling it "senseless" without some deeper engagement seems pretty harsh.

Replies from: shankar-sivarajan
comment by Shankar Sivarajan (shankar-sivarajan) · 2024-02-25T05:30:42.325Z · LW(p) · GW(p)

Many (perhaps most) famous "highly recognized" philosophical arguments are nonsensical (zombies, for example). If one doesn't make sense to you, it is far more likely that it doesn't make sense at all than that you're missing something.

Replies from: alex.herwix
comment by alex.herwix · 2024-02-27T15:54:16.947Z · LW(p) · GW(p)

Since a lot of arguments on internet forums are nonsensical, the fact that your comment doesn't make sense to me means that it is far more likely that it doesn't make sense at all than that I am missing something.

That’s pretty ironic.

Replies from: shankar-sivarajan
comment by Shankar Sivarajan (shankar-sivarajan) · 2024-02-27T16:05:59.354Z · LW(p) · GW(p)

This is what you sound like: link. You display a perfectly sound understanding of my argument.

Replies from: alex.herwix
comment by alex.herwix · 2024-02-28T12:45:51.319Z · LW(p) · GW(p)

To be honest, I am pretty confused by your argument, and I tried to express one of my confusions with my reply. I think you probably also got what I wanted to express but chose to ignore the content in favor of patronizing me. As I don't want to continue to go down this road, here is a more elaborate comment that explains where I am coming from:

First, you again make a sweeping claim that you do not really justify: "Many (perhaps most) famous 'highly recognized' philosophical arguments are nonsensical". What are your grounds for this claim? Do you mean that it is self-evident that much (perhaps most) of philosophy is bullshit? Or do you have a more nuanced understanding of "nonsensical"? Are you referring to Wittgenstein here?

Then you position this unjustified claim as a general prior to justify that your own position in this particular situation is much more likely to be valid than the alternative. Doesn't that seem a little bit like cherry-picking to you?

My critique of the post and your comments boils down to the fact that both are very quick to dismiss other positions as nonsensical and, by doing so, claim their own perspective/position to be superior. This is problematic because although certain positions may seem nonsensical to you, they may make perfect sense from another angle. While this problem cannot be solved in principle, in practice it calls for investing at least some effort and resources into recognizing potentially interesting/valid perspectives and, in particular, staying open-minded to the possibility that one may not have considered all relevant aspects, and reorienting accordingly. I will list a couple of resources that you can check out if you are interested in a more elaborate argument on this matter.

* Stegmaier, W. (2019). What Is Orientation? A Philosophical Investigation. De Gruyter.
* Ulrich, W. (2000). Reflective Practice in the Civil Society: The contribution of critically systemic thinking. Reflective Practice, 1(2), 247–268. https://doi.org/10.1080/713693151
* Ulrich, W., & Reynolds, M. (2010). Critical Systems Heuristics. In M. Reynolds & S. Holwell (Eds.), Systems Approaches to Managing Change: A Practical Guide (pp. 243–292). Springer London. https://doi.org/10.1007/978-1-84882-809-4_6