Posts

Bryan/College Station, TX – ACX Meetups Everywhere 2022 2022-08-24T22:56:42.314Z
Bryan/College Station, TX – ACX Meetups Everywhere 2021 2021-08-23T08:55:17.297Z

Comments

Comment by Kenny2 (kenny-2) on Why Are The Human Sciences Hard? Two New Hypotheses · 2025-03-18T20:59:10.317Z · LW · GW

This is nice. The "Fruit in the Hand" part also seems to me to line up well with the claim that the social sciences deal with more complex questions - if we have stronger innate faculties for predicting social matters than physical ones, then the questions we want to ask the social scientists start from a higher baseline than the questions we want to ask the physical scientists.

One type of traditional explanation that I'm not sure whether you mentioned is the problem of reflexivity - theorizing about society creates a new object (the theory) that influences the society, whereas theorizing about the physical world doesn't usually create a new object that itself influences the physical world. Especially in certain kinds of market and political interactions, where people are incentivized to frustrate predictions (or occasionally to realize them), these theories can influence things in harder-to-predict ways than in physical interactions, which generally don't respond to incentives at all, let alone to incentives for frustrating or realizing predictions.
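To make that concrete, here is a toy model of a self-frustrating prediction (my own illustration, not anything from the post): everyone attends a venue exactly when the published forecast says it won't be crowded, so whatever the forecaster publishes changes the very quantity being forecast, and naive updating never settles down.

```python
# Toy model of a self-frustrating prediction (hypothetical setup, not from the post):
# people attend a venue exactly when the published forecast says attendance will stay
# under a 60% comfort threshold, so the forecast changes the thing it forecasts.

def attendance(published_forecast: float) -> float:
    """Fraction who attend, assuming everyone believes the published forecast."""
    return 1.0 if published_forecast < 0.6 else 0.0

forecast = 0.5
for week in range(8):
    actual = attendance(forecast)
    print(f"week {week}: forecast {forecast:.2f} -> actual attendance {actual:.2f}")
    # The forecaster updates toward last week's outcome, but there is no
    # self-confirming forecast here: attendance(p) = p has no solution, so the
    # forecast keeps oscillating.
    forecast = 0.5 * forecast + 0.5 * actual
```

Contrast the physical case: publishing a forecast of tomorrow's rainfall doesn't change the rainfall, so this kind of fixed-point problem never arises.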

Comment by Kenny2 (kenny-2) on Local Trust · 2025-02-25T00:36:35.244Z · LW · GW

The example is really helpful for getting a concrete sense of what it looks like to satisfy Trust without Reflection, and why that goes along with deferring to someone else for decisions - but I don't see what this example of Alice has to do with locality. It looks like the only relevant propositions are whether it rains tomorrow and what Alice's credences are, and there don't seem to be any propositions on which we don't defer to her.
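For other readers: the way I've usually seen the two conditions stated elsewhere in the literature (the post's official definitions may differ in detail) is

$$\text{Reflection:}\quad P\big(A \mid P'(A) = t\big) = t, \qquad\qquad \text{Trust:}\quad P\big(A \mid P'(A) \ge t\big) \ge t,$$

for all propositions A and thresholds t, where P is my credence function and P' is Alice's. Reflection entails Trust but not conversely, which is why "Trust without Reflection" is a coherent combination.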

Comment by Kenny2 (kenny-2) on Deference and Decision-Making · 2025-02-18T01:27:06.038Z · LW · GW

Nice explanation of the paper!

I really like the trust principle in the paper, about what we can say about the relationship between two credence functions when one person would prefer to use the other person's credences rather than their own. But I'm skeptical of the concept that initially seems to motivate it, namely the idea that some people might actually be experts. Does any of this depend on there being such a proposition in the language (one saying that so-and-so is an expert), or can we do it all in a language without such propositions?
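Roughly, as I read it (my own gloss, not the paper's official statement), the decision-theoretic version of the principle is that for every decision problem with options $\mathcal{A}$ and utility function $u$,

$$\mathbb{E}_P\big[u(a^{*}_{P'})\big] \;\ge\; \max_{a \in \mathcal{A}} \mathbb{E}_P\big[u(a)\big], \qquad a^{*}_{P'} \in \arg\max_{a \in \mathcal{A}} \mathbb{E}_{P'}\big[u(a)\big],$$

where P is my credence function and P' is yours, treated as a random variable from my point of view: I expect to do at least as well by letting your credences pick the option as by picking it with my own.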

Comment by Kenny2 (kenny-2) on Evolution and the Low Road to Nash · 2025-01-22T23:38:59.436Z · LW · GW

I think this shows clearly that dynamics don't always lead to the same place as equilibrium rationality concepts. If someone is already convinced that the dynamics matter, this leads naturally to the thought that the equilibrium concepts are missing something important. But at least some discussions of rationality (including some on this site) seem like they might be committed to a kind of "high road" idea on which it really is the equilibrium concept that is core to rationality, and the dynamics are at best a suggestive motivation. (I think I see this in some discussions of functional decision theory as "the decision theory that a perfectly rational agent would choose to self-program", but with the idea that you don't actually need to go through any process of self-reprogramming to get there.)
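As a concrete illustration of the first sentence (a minimal sketch of my own, not code from the post): under replicator dynamics, rock-paper-scissors has a unique Nash equilibrium at the uniform mixture, but the population shares cycle around it rather than converging to it.

```python
# Minimal sketch: discrete-time (Euler) replicator dynamics on rock-paper-scissors.
# The unique Nash equilibrium is the uniform mixture (1/3, 1/3, 1/3), but the
# population shares cycle around it instead of converging to it, so the dynamics
# and the equilibrium concept come apart.
import numpy as np

# Payoff matrix: rows/columns are (rock, paper, scissors); entry is the row's payoff.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

x = np.array([0.5, 0.3, 0.2])   # initial population shares, off the equilibrium
dt = 0.001

for step in range(100_001):
    if step % 25_000 == 0:
        print(f"t={step * dt:7.1f}  shares={np.round(x, 3)}  "
              f"distance from Nash={np.linalg.norm(x - 1/3):.3f}")
    fitness = A @ x                           # payoff of each pure strategy
    x = x + dt * x * (fitness - x @ fitness)  # replicator update
    x = np.clip(x, 1e-12, None)
    x /= x.sum()                              # renormalize against numerical drift
```

The mixed Nash equilibrium is a rest point of these dynamics, but not an attractor - the distance from it never shrinks - which is one concrete way the "low road" and the equilibrium concept can disagree.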

Is there an argument to convince those people that the dynamics really are relevant to rationality itself, and not just to predictions of how certain naturalistic groups of limited agents will come to behave in their various local optima?