Posts

Why Are The Human Sciences Hard? Two New Hypotheses 2025-03-18T15:45:52.239Z
Local Trust 2025-02-24T19:53:26.953Z
Subjective Naturalism in Decision Theory: Savage vs. Jeffrey–Bolker 2025-02-04T20:34:22.625Z
Deference and Decision-Making 2025-01-27T22:02:17.578Z
Evolution and the Low Road to Nash 2025-01-22T07:06:32.305Z
Chance is in the Map, not the Territory 2025-01-13T19:17:15.843Z

Comments

Comment by Aydin Mohseni (aydin-mohseni) on Existing UDTs test the limits of Bayesianism (and consistency) · 2025-03-31T01:48:08.014Z · LW · GW

Fantastic post! I appreciate your direction of thought. 

Your take on updatelessness—as an adaptive self-modification to handle anticipated experiences or strategic interactions (e.g., generalizations of the prisoner’s dilemma with a twin, or transparent Newcomb problems)—is the most sensible I’ve yet encountered. (I've also appreciated Martin Soto's efforts to get clear on this topic.) As you say, there are many takes; you helped me see a coherent motivation behind one clearly. 

And your picture of EDT evolving into “son of EDT” certainly seems plausible. As you say, perhaps that’s the best we can do—and to go further, we just have to do the hard work of analyzing particular, important problems.

The dynamics might go a little better, particularly regarding logical uncertainty (even if we continue to treat logical credences in a Bayesian way).

One deeply insightful but lesser-known analysis of logical uncertainty within a Bayesian framework is by Seidenfeld, Schervish, and Kadane (2012): "What Kind of Uncertainty Is That? Using Personal Probability for Expressing One’s Thinking About Logical and Mathematical Propositions." I'd love to hear your thoughts on it sometime.

Comment by Aydin Mohseni (aydin-mohseni) on Existing UDTs test the limits of Bayesianism (and consistency) · 2025-03-31T01:10:03.163Z · LW · GW

Unfortunately, Bayesian probability theory doesn't exactly tell us how to remedy the situation; in that way it fails Demski's criterion that a theory of rationality is meant to provide advice about how to be more rational. 

There is some work on this. In "Measures of Incoherence: How Not to Gamble If You Must," Schervish, Seidenfeld, and Kadane (2002) provide a measure of incoherence and show that, for an incoherent agent, updating via Bayes’ rule will reduce their incoherence.

This isn’t a complete answer to how best to deal with incoherent beliefs, but it’s perhaps the start of one—and you can still tell your incoherent friends to use Bayes’ rule to become more coherent!

Comment by Aydin Mohseni (aydin-mohseni) on Why Are The Human Sciences Hard? Two New Hypotheses · 2025-03-24T21:59:01.044Z · LW · GW

I understand the argument, I think I buy a limited version of it (and also want to acknowledge that it is very clever and I do like it)…

Thanks! I've so appreciated your comments and the chance to think about this with you!

…but I also don't think this can explain the magnitude of the difference between the different fields.

I think that’s right—and we agree. As we note in the post, we only expect our hypotheses to explain a fairly modest fraction of the differences between fields. We see our contribution as showing how certain structural features—e.g., the cardinality of the set of tasks in a field’s search space—should influence our expectations about perceived differences in difficulty, not as claiming that they explain all or even most of the difference.

Then, physics clearly has a very good track record of asking questions and then solving them extraordinarily well.

I agree that the greatest hits of physics are truly great! That said, if by “track record” we mean something like the ratio of successes to failures (rather than greatest successes), then I think it’s genuinely tricky to assess—largely for structural reasons akin to those we highlight in the paper. We tend to preserve extraordinary successes while forgetting the countless unremarkable failures.

Comment by Aydin Mohseni (aydin-mohseni) on Why Are The Human Sciences Hard? Two New Hypotheses · 2025-03-22T04:30:15.659Z · LW · GW

Like, any explanation of the form "psychology is harder for reason X, so physics had more impressive work earlier on, which drew in smarter people..." feels like a just-so story; it seems at least as plausible that more competent people will be drawn to more difficult problems. (Note that both the hypotheses put forward in the OP are of this form, so this is also a response to the OP.)

Thanks, John. I want to clarify how our hypotheses differ from the "just-so story" pattern you described.

Our hypotheses don't claim "psychology is harder for reason X, which led to physics attracting smarter people." Rather, we propose structural factors that may contribute to the *perception* of different disciplines' difficulty, independent of the researchers' capabilities.

I share your skepticism of just-so stories, which typically:

  • Highlight compatibility between evidence and a hypothesis
  • Fail to consider alternative explanations
  • Don't generate novel, testable predictions
  • Often lack explicit calibration about confidence

We've tried to avoid these pitfalls in several ways:

  • First, we're explicit about our confidence levels. We present these as partial explanations among many factors that likely contribute to perceived disciplinary difficulty, not as comprehensive accounts.
  • Second, our hypotheses can generate testable predictions. For example:
    • The Rigid Demands hypothesis makes it more likely that self-reported pre-commitment to specific questions should correlate with lower R² values across fields
    • The Fruit in the Hand hypothesis makes it more likely that something like the Kolmogorov complexity of algorithms that solve "impressive" tasks in evolved social domains (like facial recognition) should be greater than that of algorithms for non-evolved physical domains (like calculating rocket trajectories)
  • Third, we've formalized our reasoning mathematically, which helps expose assumptions and clarify the scope of our claims.

Clearly, these are speculative conjectures regarding very hard questions. That said, we think our formal approach can help move us toward more structured hypotheses about a fascinating question—something we hope is marginally better than just-so stories. :)

Comment by Aydin Mohseni (aydin-mohseni) on Why Are The Human Sciences Hard? Two New Hypotheses · 2025-03-22T03:28:14.094Z · LW · GW

This seems a crazy comparison to make.

Perhaps. I appreciate the prompt to think more about this.

Here's a picture that underpins our perspective and might provide a crux: 

The world is full of hard questions. Under some suitable measure over questions, we might find that almost all questions are too hard to answer. Many large-scale systems exhibit chaotic behavior, most parts of the universe are unreachable to us, and most nonlinear systems of differential equations have no analytic solution. Some prediction tasks are theoretically unsolvable (e.g., a closed-form solution to the general three-body problem), while some are just practically unsolvable given current technology (e.g., knowledge of sufficiently far-off domains in our forward light cone).
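Since the claim about chaotic systems is carrying weight here, a toy demonstration may be useful. This is a minimal sketch using the logistic map at r = 4 (a standard textbook example of chaos, chosen purely for illustration, not anything specific to the systems named above): two trajectories that start 1e-10 apart become macroscopically different within a few dozen iterations.

```python
# Sensitive dependence on initial conditions in the logistic map at r = 4.
# Two trajectories starting 1e-10 apart diverge to a macroscopic gap fast.

def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10  # nearly identical initial conditions
steps = 0
while abs(x - y) < 0.1 and steps < 200:
    x, y = logistic(x), logistic(y)
    steps += 1

print(steps)  # a few dozen iterations suffice for order-0.1 divergence
```

Since this map's Lyapunov exponent is ln 2, an initial error of 1e-10 roughly doubles each step and reaches order 0.1 in about 30 iterations; predicting further ahead requires exponentially more precise initial data, which is why long-run prediction of real-world chaotic systems is practically hopeless.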

Are there any actual predictions past physics was trying to make which we still can't make and don't even care about? None that I can think of.

Here are a few prediction tasks regarding the physical world that we have not answered. What is the exact direction of movement of a single atom in the next instant? How do we achieve nuclear fusion at room temperature? What constitutes a measurement in QM? Why do we observe electric charges but not magnetic monopoles? Why did the Pioneer 10 and 11 spacecraft experience an anomalous deceleration? What is the long-run behavior of any real-world chaotic system? Is the sound of knuckles cracking caused by the formation or collapse of air bubbles? How many grains of sand are there on a given beach on the third planet of the fourth star of Alpha Centauri?

But this list is deceptive in that it hides the vast number of problems we have not answered but don't remember, because we were not pre-committed to their solution—we simply picked them up, realized they were too hard or unpromising, and put them back down again in the search for more tractable and promising problems.

As you say, you can't think of any problems in physics we couldn't solve and no longer care about! In labs across the globe, PIs, post-docs, grad students, and researchers of all stripes are formulating vast numbers of questions, the majority of which are abandoned. 

Scientists in the hard sciences routinely dismiss equivocal test results as unsatisfactory and opt to pursue alternative lines of inquiry that promise unequivocal findings. In many cases, the strength of typical inferences even makes statistical analysis unnecessary. The biologist Pamela Reinagel captures this common attitude concisely in this talk: "If you needed to do a statistical test, you just did a bad experiment," and "If you needed statistics, you are studying something so trifling it doesn't matter." What is happening is that we are redefining our problems until we find regularities that are sufficiently strong so as to be worth pursuing.

In contrast, the education researcher and the development economist are stuck with only a few outcome variables they're allowed to care about, and so have to dig through the crud of tiny R² values to find anything of interest. When I give you such a problem—explain the causes of market crashes, or find effective interventions to improve educational outcomes—you might just be out of luck as to how much of it you can explain with a few crisp variables and elegant relations. The social scientist in such cases doesn't get to scour the expanse of the space of regularities in physical systems until they find a promising vein of inquiry heretofore unimagined.

Of course, no scientist is fully unrestricted in her capacity to redefine her problems, but we suspect that there are differences in degree here between the sciences that make a difference.

Tell me if the following vignette is elucidating. 

You and your colleague are given the following tasks: you have 10 years in which to work on cancer, then come back with your greatest success; your colleague also has 10 years in which to work on cancer, but they are additionally allowed to try to make progress on any other disease. Which of you do you expect will achieve the greater success?

This case is simple in the sense that one domain of inquiry is a strict superset of the other, which makes the value of a larger search space especially clear, but the difference in sizes of search spaces will be there in less obvious cases as well.
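The cardinality point in the vignette can be checked with a toy simulation (the uniform payoff distribution and the numbers 10 and 100 are illustrative assumptions I'm supplying, not quantities from the post): the best result found in a strict superset of problems is never worse, and is usually strictly better.

```python
# Toy model of the cancer vignette: each candidate problem has a random
# payoff (the value of its best achievable result), and a researcher
# reports the maximum payoff among the problems she is allowed to work on.
import random

random.seed(0)
N_TRIALS = 10_000
strict_wins = 0
for _ in range(N_TRIALS):
    payoffs = [random.random() for _ in range(100)]  # 100 candidate problems
    restricted = max(payoffs[:10])     # pre-committed to 10 of them
    unrestricted = max(payoffs)        # free to search all 100
    assert unrestricted >= restricted  # a superset is never worse
    strict_wins += unrestricted > restricted

print(strict_wins / N_TRIALS)  # ~0.9: strictly better in about 90% of trials
```

The 0.9 is no accident: with i.i.d. payoffs, the best of 100 problems lies outside the restricted 10 with probability 90/100, and the advantage only grows as the superset does.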

Our hypotheses suggest that the social scientists may be like the first researcher: they were pre-committed to a smaller domain, with less flexibility to go find promising veins of tractability in problem space. This, and variations on the general cardinality reasoning mentioned in footnote 7, are the core of our results.

Comment by Aydin Mohseni (aydin-mohseni) on Why Are The Human Sciences Hard? Two New Hypotheses · 2025-03-20T18:56:24.945Z · LW · GW

Thanks! Yeah! That's just how we are thinking about it.

I like that observation, and it sounds right. Marxism, Keynesian economics, and various psychotherapeutic paradigms provide striking examples of theories that substantively influence the behavior they are meant to describe. And, as you say, the nature of their influence can be subtle and multifaceted—ranging from informing people’s expectations of others’ behavior and their own behavior to providing Schelling points for social coordination and introducing possible actions and strategies not previously salient or even imagined.

The best reference I know for a discussion of something like this is by the sociologist of science Robert Merton, in his 1948 essay "The Self-Fulfilling Prophecy." In it, he considers mechanisms by which economic, psychological, and sociological theories become self-reinforcing or self-negating.

Comment by Aydin Mohseni (aydin-mohseni) on Deference and Decision-Making · 2025-01-25T16:58:03.943Z · LW · GW

Comment by Aydin Mohseni (aydin-mohseni) on Evolution and the Low Road to Nash · 2025-01-24T04:45:56.163Z · LW · GW

That’s exactly right. Results showing that low-rationality agents don’t always converge to a Nash equilibrium (NE) do not provide a compelling argument against the thesis that high-rationality agents do or should converge to NE. As you suggest, to address this question, one should directly model high-rationality agents and analyze their behavior.

We’d love to write another post on the high-rationality road at some point and would greatly appreciate your input!

Aumann & Brandenburger (1995), “Epistemic Conditions for Nash Equilibrium,” and Stalnaker (1996), “Knowledge, Belief, and Counterfactual Reasoning in Games,” provide good analyses of the conditions for NE play in strategic games of complete and perfect information.

For games of incomplete information, Kalai and Lehrer (1993), “Rational Learning Leads to Nash Equilibrium,” demonstrate that when rational agents are uncertain about one another’s types, but their priors are mutually absolutely continuous, Bayesian learning guarantees in-the-limit convergence to Nash play in repeated games. These results establish a generous range of conditions—mutual knowledge of rationality and mutual absolute continuity of priors—that ensure convergence to a Nash equilibrium.
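To make the role of the absolute-continuity (grain-of-truth) condition concrete, here is a minimal sketch of the learning mechanism, in a Beta–Bernoulli special case I'm supplying for illustration (this is not the Kalai–Lehrer construction itself): an agent whose Beta(1, 1) prior puts positive density on every possible bias watches an opponent play a fixed mixed strategy, and her one-step-ahead predictions converge to that strategy.

```python
# Bayesian prediction of an opponent's fixed mixed strategy.
# The Beta(1, 1) prior assigns positive density to every bias in (0, 1),
# a toy analogue of the mutual-absolute-continuity condition.
import random

random.seed(1)
TRUE_P = 0.7           # opponent plays action A with probability 0.7
count_a = count_b = 0  # sufficient statistics for the Beta posterior
for _ in range(5000):
    if random.random() < TRUE_P:
        count_a += 1
    else:
        count_b += 1

# Posterior is Beta(1 + count_a, 1 + count_b); its mean is the agent's
# one-step-ahead probability that the opponent plays A next round.
prediction = (1 + count_a) / (2 + count_a + count_b)
print(round(prediction, 2))  # close to 0.7
```

Once such predictions are accurate, best-responding to them yields approximately Nash play; it is precisely this predictive step that Foster and Young show can fail in near-zero-sum games with imperfect information.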

However, there are subtle limitations to this result. Foster & Young (2001), “On the Impossibility of Predicting the Behavior of Rational Agents,” show that in near-zero-sum games with imperfect information, agents cannot learn to predict one another’s actions and, as a result, do not converge to Nash play. In such games, mutual absolute continuity of priors cannot be satisfied.