Comments

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Going Out With Dignity · 2021-07-12T13:46:11.751Z · LW · GW

If we are talking about any sort of "optimality", we can't expect even individual humans to hold these "optimal" values, let alone humanity en masse. Of course it is futile to dream that our deus ex machina will impose those fantastic values on the world if 99% of us de facto disagree with them.

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Should VS Would and Newcomb's Paradox · 2021-07-05T13:54:11.148Z · LW · GW

Both definitions have their issues.

"able to act on its desires unimpededly" has 2 problems. First, it is clearly describing the "agent's" (also not a well-defined category, but let's leave it at that) experience, e.g. desires, not something objective from an outside view. Second, "unimpededly" is also intrinsically vague. Is my desire to fly impeded? Is an addict's desire to quit? (If the answer is "no" to both, what would even count as impediment?) But, I guess, it is fine if we agree that "compatibilist free will" is just a feature of subjective experience.

"ability to make undetermined choices" relies on the ambiguous concept of "choice", but also would be surprisingly abundant in a truly probabilistic world. We'd have to attribute "libertarian free will" to a radioactive isotope that's "choosing" when to decay, or to any otherwise deterministic system that relies on such isotope. I don't think that agrees with intuition of those who find this concept meaningful.

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Should VS Would and Newcomb's Paradox · 2021-07-05T00:19:21.598Z · LW · GW

You'd have to draw the line somewhere for the concept to have any meaning at all. What's the point of the concept if anything can be interpreted as making free choices? What do you mean when you say "free choice" or "choice"?

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Should VS Would and Newcomb's Paradox · 2021-07-04T20:55:42.638Z · LW · GW

I don't think it can be meaningfully defined. How could you define free choice so that a human would have it, but a complicated mechanical contraption of stones wouldn't?

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Should VS Would and Newcomb's Paradox · 2021-07-04T19:43:14.122Z · LW · GW

I don't think computers have any more free will [free choice] than stones. Do you?

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Should VS Would and Newcomb's Paradox · 2021-07-04T19:35:44.662Z · LW · GW

Not necessarily. Non-determinism (that the future is not completely determined by the past) doesn't have anything to do with choice. A stone doesn't make choices even if the future is intrinsically unpredictable. The question here is why anyone would think that humans are qualitatively different from stones.

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Should VS Would and Newcomb's Paradox · 2021-07-04T19:35:27.302Z · LW · GW

Not necessarily. Determinism doesn't have anything to do with choice. A stone doesn't make choices regardless of determinism. The question here is why anyone would think that humans are qualitatively different from stones.

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Should VS Would and Newcomb's Paradox · 2021-07-04T09:24:09.560Z · LW · GW

Yes! It's interesting how the concepts of agency and choice seem so natural and ingrained for us humans that we are often tempted to think they describe reality more deeply than they really do. We seem to see agents, preferences, goals, and utilities everywhere, but what if these concepts are not particularly relevant to the actual mechanism of decision-making even from the first-person view?

What if much of the feeling of choice and agency is actually a social adaptation, a storytelling and explanatory device that allows us to communicate and cooperate with other humans (and, perhaps more peculiarly, with our future selves)? While it feels like we are making choices for reasons, there are numerous experiments that point to the explanatory, after-the-fact role of reasoning in decision-making. Yes, abstract reasoning also allows us to conceptually model a great many things, but those models just serve as additional data inputs to the true hidden mechanism of decision-making.

It shouldn't be surprising, then, if for other minds [such as AGI] that lack this primal adaptation to being modeled by others, decision-making feels nothing like choice or agency. We already see this in our simpler AIs: choice and reasoning mean nothing to a health diagnostic system; it simply computes. It is only so that we humans can feel we understand "why" it made a particular choice that we add an explanatory module, one that gives us "reasons" but is completely unnecessary for the decision-making itself!

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Do incoherent entities have stronger reason to become more coherent than less? · 2021-07-02T20:57:09.973Z · LW · GW

My anecdotal experience of being a creature shows that I am very happy when I don't feel like an agent, coherent or not. The need to be an [efficient] agent arises only in the context of an adverse situation, e.g. one related to survival, and agency and coherence are costly in so many respects. I am truly blessed when I am indifferent enough not to care about my agency or coherence.

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on On the limits of idealized values · 2021-06-24T17:21:08.569Z · LW · GW

Silly question: why do you have to decide what you value? Why can't you value whatever you already do?

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on The Homunculus Problem · 2021-06-04T09:31:54.225Z · LW · GW

It seems that the homunculus concept is unnecessary here. You can easily talk about the experience itself, e.g. "seeing", or you can still use "I see" as a language construct while realising that you are only referring to the phenomenon of "seeing" that is taking place.

There is a difference between knowing something and experiencing it in a particular way, and the former may nudge the latter only very slightly, if at all.

I can know a chair is red, but if I close my eyes, I don't see it.

I can know a chair is red, but if I put on coloured glasses, I will not see it as red.

I can know that nothing changes in reality when I take LSD, but, oh boy, does my perception change.

The real problem here is that we are not rational agents, and, what's worse, the small part of us that even resembles anything rational is not in control of our experience.

We'd like to imagine ourselves as agents, and then we run into surprises like "how can I know something but still experience it (or, worse, behave!) differently?"

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on What's So Bad About Ad-Hoc Mathematical Definitions? · 2021-03-25T22:54:12.712Z · LW · GW

What you are describing is data (A/B, 1/2) such that the individual parts of the data are independent of the secret X/Y, but the data as a whole is not. That issue is orthogonal to the choice of statistical approach; it should be clear that only the leaked data taken as a whole should be considered.

The problem with the Pearson correlation criterion is that it does not measure independence at all (even for parts of the data); it measures correlation, which is just a single statistic of the two variables. It's as if you compared two distributions by comparing only their means.

Let's say the leaked data is X = -2, -1, 1, 2 equiprobably, and the secret is Y = X^2. Zero correlation just means E(XY) - E(X)E(Y) = 0, which holds here (E(X) = 0 and E(XY) = E(X^3) = 0), yet one can fully recover the secret from the leak; the two variables are not independent at all.
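A minimal sketch in Python/NumPy that checks this numerically (sample size and seed are arbitrary choices for illustration):

```python
import numpy as np

# Leaked value X is uniform on {-2, -1, 1, 2}; the secret is Y = X^2.
rng = np.random.default_rng(0)  # seed chosen arbitrarily
x = rng.choice([-2, -1, 1, 2], size=100_000)
y = x ** 2

# Pearson correlation is ~0 (up to sampling noise)...
print(np.corrcoef(x, y)[0, 1])

# ...yet the secret is perfectly recoverable from the leak:
print(np.all(y == x ** 2))  # True: Y is a deterministic function of X
```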

See more at https://en.wikipedia.org/wiki/Correlation_and_dependence#Correlation_and_independence

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on Demand offsetting · 2021-03-25T13:16:04.751Z · LW · GW

An important point (somewhat overlooked in the comments) is that it is not necessary to sell the humane eggs at the same price as factory eggs for this to work. You can start issuing certificates straight away, without changing anything in the distribution process!

It is even better, because it will create an incentive for competition between humane egg producers. Moreover, with humane eggs still labeled as "humane", it will help them drive factory eggs out of the market by allowing them to operate on a lower profit margin. In a sense, it is similar to tax relief for an industry we want to stimulate, just organized in a perfectly libertarian way.
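A toy numeric sketch (all prices made up purely for illustration): if both kinds of eggs sell at the same market price, the certificate revenue, not the egg price, absorbs the humane producer's extra cost:

```python
# Hypothetical numbers, purely for illustration.
humane_cost, factory_cost = 3.0, 2.0
market_price = 2.5          # both kinds of eggs sell at this price
certificate_price = 0.7     # welfare certificate, sold separately from the egg

humane_margin = market_price + certificate_price - humane_cost
factory_margin = market_price - factory_cost
print(f"{humane_margin:.2f} vs {factory_margin:.2f}")  # 0.20 vs 0.50
```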

Comment by Svyatoslav Usachev (svyatoslav-usachev-1) on What's So Bad About Ad-Hoc Mathematical Definitions? · 2021-03-17T22:28:41.596Z · LW · GW

No, that's not what's wrong with Pearson's approach. Your example suffers from a different issue.