Posts

Carioca Petrov Day 2023-09-26T00:30:36.906Z
Rio de Janeiro, RJ, Brazil – ACX Meetups Everywhere Fall 2023 2023-08-25T23:48:32.864Z

Comments

Comment by Giskard (tiago-macedo) on Rio de Janeiro, RJ, Brazil – ACX Meetups Everywhere Fall 2023 · 2023-09-19T13:48:35.747Z · LW · GW

Huge success!

Comment by Giskard (tiago-macedo) on Why am I Me? · 2023-06-29T17:11:18.054Z · LW · GW

I think I don't understand what makes you say that anthropic reasoning requires "reasoning from a perspective that is impartial to any moment". The way I think about this is the following:

  • If I imagine how an omnitemporal, omniscient being would see me, I imagine they would see me as a randomly selected sample from all humans, past, present, and future (distinctions which don't really exist for such a being).
  • From my point of view, it does feel weird to say that "I'm a randomly selected sample", but I certainly don't feel like there is anything special about the year I was born. This, combined with the fact that I'm obviously human, is just a from-my-point-of-view way of saying the same thing. I'm a human and I have no reason to believe the year I was born is special == I'm a human whose birth year is a sample randomly taken from the population of all possible humans.

What changes when you switch perspectives is just the words, not the point. I guess you're thinking about this differently? Do you think you can state where we're disagreeing?

Comment by Giskard (tiago-macedo) on Why am I Me? · 2023-06-28T20:17:51.026Z · LW · GW

I don't think the Doomsday argument claims to be time-independent. It seems to me to be specifically time-dependent -- as is any update. And there's nothing inherently wrong with that: we are all trying to be the most right that we can be given the information we have access to, our point of view.

Comment by Giskard (tiago-macedo) on Why am I Me? · 2023-06-28T20:13:23.482Z · LW · GW

For now, I see no reason to deviate from the simple explanations of the problems OP posited.

Why am I me?

Well, "am" (an individual being someone), "I" and "me" (the self) are tricky concepts. One possible way to bypass (some of) the trickiness is to consider the alternative: "why am I not someone else"?

Well, imagine for a moment that you are someone else. Imagine that you are me. In fact, you've always been me, ever since I was born. You've never thought "huh, so this is what it feels like to be someone else". All you've ever thought is "what would it be like to be someone else?". Then one day you tried to imagine what it would be like to be the person who wrote an article on LessWrong and...

Alakazam, now you're back to being you. My point here is that the universe in which you are not you, but someone else, is exactly like our universe, in every way. Which either means that this is already the case, and you really are me, and everyone else too, or that those pesky concepts of self and identity actually don't work at all.

Regarding anthropic arguments, if I understand correctly (from both OP's post and comments), they don't believe that they are an n=1 sample randomly taken from the population of every human to ever exist. I think they are. Are they an n=1 sample of something? Unless the post was written by more than one person, yes. Are they a sample taken from the population of all humans to ever exist? I do think OP is human, so yes. Are they a randomly selected sample? This is where it gets interesting.

If both your parents were really tall, then you weren't randomly selected from the population of all humans with regard to height. That is because even before measuring your height, we had reason to believe you would grow up to be tall. Your sampling was biased. But with regard to "when you were born", we must ask if there is any reason to think OP's birth rank leans one way or another. I can't think of one -- unless we start adding extra information to the argument. If you think the singularity is close and will end Humanity, then we have reason to think OP is one of the last few people to be born. If you think Humanity has a large chance of spreading through the Galaxy and living for eons, then we have reason to think the opposite. But if we want to keep our argument "clean" from outside information, then OP's (and our) birth rank should not be considered special. And it certainly wasn't deliberately selected by anyone beforehand. So yes, OP is an n=1 sample randomly taken from the population of all humans to ever exist, and therefore can do anthropic reasoning.

That doesn't necessarily mean the Doomsday argument is right, though. I feel like there might be hidden oversimplifications in it, but I won't try to look for them now. The larger point is that anthropic reasoning is legitimate, if done right (like any other kind of reasoning).

Comment by Giskard (tiago-macedo) on Issues with the Litany of Gendlin · 2021-10-05T00:07:41.024Z · LW · GW

So, I'm 10 years late. Nevertheless, I'm throwing in my two cents, even if it's just for peace of mind.

I mostly agree with the litany, as I interpret it as saying not that "there are no negative consequences to handling the truth", but rather that "the negative consequences of not handling the truth are always worse than the consequences of handling it". However, upon serious inspection I also feel unsure about it, in the corner cases of truths that could have an emotional impact on people (or on me) greater than their concrete impact.

With that said, my suggestion 10 years ago would have been to include the Litany of Gendlin verbatim, accompanied by "yeah, this one might be wrong".

Performative Rationality should make a healthy effort to ritualize the idea of questioning its rituals. It should also make a healthy effort not to hide arguments that some think are wrong, but about which there isn't (approximate) unanimity yet. What better way to check both boxes than to literally include a famous litany you disagree with and then point out that it might be wrong?

Comment by Giskard (tiago-macedo) on Prosocial Capitalism · 2021-10-04T15:49:47.273Z · LW · GW

In this article, you posit that "positive sum networks will out-compete [...] antisocial capitalism [...]".

If I understand correctly, this is due to cooperative systems of agents (positive-sum networks) producing more utility than purely competitive systems. You paint a good picture of this phenomenon happening, and I think you are describing something similar to what Scott Alexander describes in In Favor of Niceness, Community, and Civilization.

However, the question then becomes "what exactly makes people choose to cooperate, and when?" You cite the Prisoner's Dilemma as a situation where the outcome Cooperate/Cooperate is better than the outcome Compete/Compete for both players. That is true, but the outcome Compete/Cooperate is better for player 1 than any other. The reverse is true for player 2. That is what makes the Coop/Coop state a fragile one for agents acting under "classical rationality".
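
To make that payoff structure concrete, here is a minimal sketch with illustrative numbers (the specific values are my own; any payoffs where temptation > reward > punishment > sucker's payoff produce the same dilemma):

```python
# Minimal Prisoner's Dilemma sketch with illustrative payoffs.
PAYOFFS = {
    # (player 1's move, player 2's move): (player 1's payoff, player 2's payoff)
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "compete"):   (0, 5),  # sucker's payoff vs. temptation
    ("compete",   "cooperate"): (5, 0),  # temptation vs. sucker's payoff
    ("compete",   "compete"):   (1, 1),  # punishment for mutual defection
}

def best_reply(opponent_move):
    """Player 1's payoff-maximizing move against a fixed opponent move."""
    return max(["cooperate", "compete"],
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Whatever player 2 does, player 1 does better by competing:
print(best_reply("cooperate"))  # -> "compete" (5 beats 3)
print(best_reply("compete"))    # -> "compete" (1 beats 0)
```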

Cooperation tends to be fragile, not because it is worse than Competition (it's better in the situations we posit), but because unilaterally defecting is better. So, suppose you have a group of people (thousands? billions?) who follow a norm of "always choose cooperation". This group would surely be more productive than an external group that constantly chooses to compete, but if you put even one person who chooses to compete inside the "always cooperate" group, that person will likely reap enormous benefits to the detriment of the others -- they will be player 1 in a Compete/Cooperate dilemma.

If we posit that the cooperating group can learn, they will learn that there is a "traitor" among them, and will become a little more likely to choose Compete instead of Cooperate when they think they might be interacting with the "traitor". But this means that these people will themselves be choosing Compete, increasing the number of "traitors" in the group, and then the whole thing deteriorates.
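
A toy simulation of that deterioration (every number here -- group size, learning rule, switch probability -- is an assumption of mine, purely for illustration) might look like:

```python
import random

random.seed(0)

# Toy model: everyone starts as a cooperator except one "traitor".
# After being exploited, a cooperator switches to competing with
# some probability. All parameters are illustrative assumptions.
N_AGENTS = 1000
SWITCH_PROBABILITY = 0.05   # chance an exploited cooperator turns "traitor"
ROUNDS = 200

cooperators = N_AGENTS - 1
defectors = 1

for _round in range(ROUNDS):
    newly_defecting = 0
    for _ in range(cooperators):
        # Each cooperator meets one random partner this round.
        meets_defector = random.random() < defectors / N_AGENTS
        if meets_defector and random.random() < SWITCH_PROBABILITY:
            newly_defecting += 1
    cooperators -= newly_defecting
    defectors += newly_defecting

print(f"After {ROUNDS} rounds: {cooperators} cooperators, {defectors} defectors")
```

With these made-up parameters, a single initial defector is enough for the defecting strategy to spread through most of the group before the simulation ends.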

Do you have any ideas on how to prevent this phenomenon? Maybe the cooperating group is acting under a norm that is more complex than just "always cooperate", that allows a state of Cooperate/Cooperate to become stable?

You cite "communication and trust" as "the two pillars of positive sum economic networks". Do you think that if there is a sufficiently large amount and quality of trust and communication they become self-reinforcing? What I have described is a deterioration of trust in a group. How can this be prevented?

Comment by Giskard (tiago-macedo) on Conflict vs. mistake in non-zero-sum games · 2020-08-10T07:14:56.256Z · LW · GW

Huh.

I think I've gathered a different definition of the terms. From what I got, mistake theory could be boiled down to "all/all important/most of the world's problems are due to some kind of inefficiency. Somewhere out there, something is broken. That includes bad beliefs, incompetence, coordination problems, etc."

Comment by Giskard (tiago-macedo) on Conflict vs. mistake in non-zero-sum games · 2020-08-06T02:37:43.284Z · LW · GW

"some outgroups aren't malicious and aren't so diametrically opposed to your goals that it's an intentional conflict, but they're just bad at thinking and can't be trusted to cooperate."

In what way is this different than mistake theory?

Comment by Giskard (tiago-macedo) on Conflict vs. mistake in non-zero-sum games · 2020-08-06T02:24:41.654Z · LW · GW

"For example, I think that a mistake theorist is often claiming that the allocation effects of some policy are not what you think (e.g. rent controls or minimum wage)."

A big part of optimizing systems is analyzing things to determine their outcomes. That might be why mistake theorists frequently claim to have discovered that X policy has surprising effects -- even policies related to allocation, like the ones you cited.

It's a stretch, but not a large one, and it explains how "mistake/conflict theory = optimizing first/last" predicts mistake theorists yapping about allocation policies.

Comment by Giskard (tiago-macedo) on Institutional Senescence · 2020-06-28T00:18:39.218Z · LW · GW

Regarding the specific case of forgiving debt every N years: wouldn't lenders simply not offer loans that come due after the next jubilee? Imagine, for example, that 2030 is a jubilee year. Then right now (2020) there would be lots of opportunities to take out loans that expire in at most 10 years. In 2029, however, only short-term loans would be possible. Why would anybody lend you money to be paid back in 2 years if next year that debt will disappear?
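
A minimal sketch of that incentive, using the same example dates (a jubilee known in advance to fall in 2030):

```python
# Minimal sketch: with a jubilee announced for a known year, a lender
# only offers loans that mature before it. The dates are the ones from
# the example above.
JUBILEE_YEAR = 2030

def longest_loan_term(current_year):
    """Longest loan term (in years) a lender would still offer."""
    return max(0, JUBILEE_YEAR - current_year)

for year in (2020, 2025, 2029):
    print(year, "->", longest_loan_term(year), "year(s) at most")
# 2020 -> 10, 2025 -> 5, 2029 -> 1
```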

Then if, in 2029, you desperately need to take out a long-term loan (e.g. to cover medical expenses), you would be incentivized to sign a contract where you waive your right to have the debt forgiven. If this kind of contract is forbidden by law, you are incentivized to take out an illicit loan -- backed by illicit violence. Down the drain go the advantages of the jubilee.

I can think of only one way to avoid this effect: have the debt-forgiving happen at random. There could be a minimal waiting period to guarantee two jubilees don't happen too close to each other, after which a jubilee would have a certain chance of being declared every 1st of January. The bonus would be that it incentivizes lenders to be careful with their lending. It disincentivizes (but doesn't destroy) the possibility of long-term lending, which is especially bad for people with a low income. But those same people would be the most positively impacted by the random debt-forgiveness, so it's a trade-off situation.
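
To see (very roughly) why randomness discourages long-term lending without destroying it, here is a sketch under assumptions of mine: the jubilee is declared each year with a small independent probability, and a loan is a total loss if a jubilee happens before it matures (ignoring the waiting period and partial repayments):

```python
# Rough sketch with assumed numbers: an N-year loan survives a random
# jubilee regime with probability (1 - p) ** N, so a lender can charge
# a premium to break even instead of refusing to lend at all.
P_JUBILEE = 0.05  # assumed yearly chance of a jubilee being declared

for term in (1, 5, 10, 30):
    survival = (1 - P_JUBILEE) ** term    # chance the loan matures before any jubilee
    breakeven_multiplier = 1 / survival   # repayment needed to break even in expectation
    print(f"{term:2d}-year loan: survives with probability {survival:.2f}, "
          f"break-even repayment multiplier ~ {breakeven_multiplier:.2f}")
```

Longer loans get priced steeply rather than refused outright, which matches the "disincentivizes but doesn't destroy" intuition above.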

Is there another solution?