James Camacho's Shortform

post by James Camacho (james-camacho) · 2023-07-22T01:55:40.819Z · LW · GW · 7 comments

Comments sorted by top scores.

comment by James Camacho (james-camacho) · 2024-08-29T08:09:49.351Z · LW(p) · GW(p)

Religious freedoms are a subsidy to keep the temperature low. There's a myth that societies will slowly but surely get better, kind of like gradient descent. If we raise the temperature too high, an entropic force could push us out of a narrow valley, and society could become much worse (e.g. nobody wants the Spanish Inquisition). It's entirely possible that the stable equilibrium we're being attracted to will still have religion.
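
The picture behind the metaphor is simulated annealing. A minimal sketch, with a made-up one-dimensional landscape standing in for "society" (the narrow well plays the role of the good-but-fragile state):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy landscape: a deep but narrow good valley at x = 0,
# sitting inside a wide, mediocre basin.
def badness(x):
    return x**2 - 2.0 * np.exp(-50 * x**2)

def anneal_step(x, temperature):
    proposal = x + rng.normal(scale=0.3)
    delta = badness(proposal) - badness(x)
    # Metropolis rule: always accept improvements; accept regressions
    # with probability exp(-delta / T). High T is the entropic force
    # that can kick the state out of the narrow valley.
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        return proposal
    return x

for temperature in (0.01, 1.0):
    state = 0.0  # start in the good valley
    for _ in range(10_000):
        state = anneal_step(state, temperature)
    print(temperature, state)  # low T typically stays near 0; high T wanders the wide basin
```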

Replies from: robo
comment by robo · 2024-08-29T15:31:10.325Z · LW(p) · GW(p)

I want to love this metaphor but don't get it at all. Religious freedom isn't a narrow valley; it's an enormous Schelling hyperplane. 85% of people are religious, but no majority is Christian or Hindu or Kuvah'magh or Kraẞël or Ŧ̈ř̈ȧ̈ӎ͛ṽ̥ŧ̊ħ or Sisters of the Screaming Nightshroud of Ɀ̈ӊ͢Ṩ͎̈Ⱦ̸Ḥ̛͑. These religions don't agree on many things, but they all pull for freedom of religion over the crazy *#%! the other religions want.

comment by James Camacho (james-camacho) · 2023-07-22T01:55:40.896Z · LW(p) · GW(p)

Graph Utilitarianism:

People care about others, so each person's utility function naturally takes the utilities of those around them into account. They may weight others' utilities by familiarity, geographical distance, DNA distance, trust, etc. If every weight is nonnegative (and the caring graph is connected), the Perron-Frobenius theorem gives a unique global utility function: the dominant eigenvector of the weight matrix.
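
A minimal sketch of the construction, with made-up weights (a real model would need many more people and a principled weighting):

```python
import numpy as np

# Toy 3-person society. Row i says how much person i weights each
# person's utility (diagonal = self-weight). All numbers illustrative.
W = np.array([
    [1.0, 0.3, 0.1],   # person 0
    [0.2, 1.0, 0.5],   # person 1
    [0.1, 0.4, 1.0],   # person 2
])

# Perron-Frobenius: a nonnegative, irreducible matrix has a unique
# largest eigenvalue whose eigenvector can be chosen nonnegative.
# That eigenvector is the fixed point of "caring about carers" and
# gives the global utility's weight on each person.
eigvals, eigvecs = np.linalg.eig(W)
perron = eigvecs[:, np.argmax(eigvals.real)].real
perron /= perron.sum()          # normalize weights to sum to 1
print(perron)                   # roughly [0.28, 0.39, 0.33]
```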

Some issues it solves:

  • Pascal's mugging.
  • The argument "utilitarianism doesn't work because you should care more about those around you".

Big issue:

  • In a war, people assign negative weights towards their enemies, leading to multiple possible utility functions (which say the best thing to do is exterminate the enemy).

Replies from: Dagon
comment by Dagon · 2023-07-24T01:21:18.735Z · LW(p) · GW(p)

This is a very imprecise use of “utility”. Caring about others does not generally take their utility into account.

It takes into account one's model of the utility that one thinks the others should have.

And, as you note, even this isn’t consistent across people or time.

comment by James Camacho (james-camacho) · 2024-11-25T06:44:20.058Z · LW(p) · GW(p)

Is there a difference between utilitarianism and selfish egoism?

For utilitarianism, you need to choose a utility function. This is entirely based on your preferences: what you value and whom you value are weighted and summed to create your utility function. I don't see how this differs from selfish egoism: you decide what and whom you value, and take actions that maximize those values.

Each doctrine comes with a little brainwashing. Utilitarianism is usually introduced as summing "equally" between people, but we all know some arrangements of atoms are more equal than others. However, introducing it this way naturally leads people to look for cooperation and value others more, both of which increase their chance of surviving.

Ayn Rand reacted strongly against religion and its associated sacrificial behavior, so selfish egoism is often introduced in opposition:

  • When you die, everything is over for you. Therefore, your survival is paramount.
  • You get nothing out of sacrificing your values. Therefore, you should only do things that benefit you.

Kant claimed people are good only by their strength of will. Wanting to help someone is a selfish action, and therefore not good. Rand takes the more individually rational approach: wanting to help someone makes you good, while helping someone against your interests is self-destructive. To be fair to Kant, when most agents are highly irrational, your society will do better with universal laws than with moral anarchy. This is also probably why selfish egoism gets a bad rap: even if you are a selfish egoist, you want to influence your society to be more Kantian. Or, at the very least, more like those utilitarians. They at least claim to value others.

However, I think rational utilitarians really are the same as rational selfish egoists. A rational selfish egoist would choose to look for cooperation. When they have fundamental disagreements with cooperative others, they would modify their values to care more about their counterpart so that both win. Under the utilitarian framing it's harder to notice when to change your utility function, while with selfish egoism it's a little easier. After all, the most important thing is survival, not utility.

I think both philosophies are slightly wrong. You shouldn't care about survival per se, but expected discounted future entropy (i.e. how well you proliferate). This will obviously drop to zero if you die, but having a fulfilling fifty years of experiences is probably more important than seventy years in a 2x2 box. Utility is merely a weight on your chances of survival, and thus future entropy. ClosedAI is close with their soft actor-critic, though they say it's entropy-regularized reinforcement learning. In reality, all reinforcement learning is maximizing energy-regularized entropy.
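
One way to state that last claim: the usual entropy-regularized objective (as in soft actor-critic) is

$$J(\pi) = \mathbb{E}_{\pi}\left[\sum_t \gamma^t \left(r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big)\right)\right],$$

and dividing by the temperature $\alpha$ leaves the maximizer unchanged:

$$\frac{J(\pi)}{\alpha} = \mathbb{E}_{\pi}\left[\sum_t \gamma^t \left(\mathcal{H}\big(\pi(\cdot \mid s_t)\big) - \tfrac{1}{\alpha} E(s_t, a_t)\right)\right], \qquad E(s, a) := -r(s, a),$$

i.e. entropy regularized by an energy $E = -r$.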

Replies from: Viliam
comment by Viliam · 2024-11-25T14:12:02.650Z · LW(p) · GW(p)

> For utilitarianism, you need to choose a utility function. This is entirely based on your preferences: what you value and whom you value are weighted and summed to create your utility function. I don't see how this differs from selfish egoism: you decide what and whom you value, and take actions that maximize those values.

I see a difference in the word "summed". In practice this would probably mean things like cooperating in the Prisoner's Dilemma (maximizing the sum of utility, rather than the utility of an individual player).
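
A concrete version of that difference, using the standard (illustrative) Prisoner's Dilemma payoffs:

```python
# Payoffs as (row player, column player); C = cooperate, D = defect.
# Numbers are the textbook example; any PD-shaped payoffs work.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Egoist view: defecting dominates, whatever the other player does.
for theirs in ("C", "D"):
    assert payoffs[("D", theirs)][0] > payoffs[("C", theirs)][0]

# Summed-utility view: (C, C) maximizes the total.
best = max(payoffs, key=lambda moves: sum(payoffs[moves]))
print(best)  # ('C', 'C')
```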

Replies from: james-camacho
comment by James Camacho (james-camacho) · 2024-11-25T15:50:51.693Z · LW(p) · GW(p)

> Utilitarianism is usually introduced as summing "equally" between people, but we all know some arrangements of atoms are more equal than others.

How do you choose to sum the utility when playing a Prisoner's Dilemma against a rock?