What are the open problems in Human Rationality?

post by Raemon · 2019-01-13T04:46:38.581Z · score: 59 (17 votes) · LW · GW · 46 comments

This is a question post.

LessWrong has been around for 10+ years, CFAR's been at work for around 6, and I think there have been at least a few other groups or individuals working on what I think of as the "Human Rationality Project."

I'm interested in hearing, especially from people who have invested significant time in attempting to push the rationality project forward, what they consider the major open questions facing the field. (More details in this comment [LW · GW])

Rough gesturing at "What is the Rationality Project?"

I'd prefer to leave "Rationality Project" somewhat vague, but I'd roughly summarize it as "the study of how to have optimal beliefs and make optimal decisions while running on human wetware."

If you have your own sense of what this means or should mean, feel free to use that in your answer. But some bits of context for a few possible avenues you could interpret this through:

Early LessWrong focused a lot on cognitive biases and how to account for them, as well as Bayesian epistemology.

CFAR (to my knowledge, roughly) started from a similar vantage point and eventually moved in the direction of "how do you figure out what you actually want, and bring yourself into 'internal alignment' when you want multiple things, and/or different parts of you want different things and are working at cross purposes?" It also looked a lot into Double Crux, as a tool to help people disagree more productively.

CFAR and Leverage both ended up exploring introspection as a tool.

Forecasting as a field has matured a bit. We have the Good Judgment project.

Behavioral Economics has begun to develop as a field.

I recently read "How to Measure Anything", and was somewhat struck at how it tackled prediction, calibration and determining key uncertainties in a fairly rigorous, professionalized fashion. I could imagine an alternate history of LessWrong that had emphasized this more strongly.

With this vague constellation of organizations and research areas, gesturing at an overall field...

...what are the big open questions the field of Human Rationality needs to answer, in order to help people have more accurate beliefs and/or make better decisions?

answer by Wei_Dai · 2019-01-14T07:55:27.159Z · score: 63 (17 votes)

I went through all my LW posts and gathered the ones that either presented or reminded me of some problem in human rationality.

1. As we become more rational, how do we translate/transfer our old values embodied in the less rational subsystems?

2. How to figure out one's comparative advantage?

3. Meta-ethics. It's hard to be rational if you don't know where your values are supposed to come from.

4. Normative ethics. How much weight to put on altruism? Population ethics. Hedonic vs preference utilitarianism. Moral circle. Etc. It's hard to be rational if you don't know what your values are.

5. Which mental subsystem has one's real values, or how to weigh them.

6. How to handle moral uncertainty? For example, should we discount total utilitarianism because we would have made a deal for total utilitarianism to give up control in this universe?

7. If we apply UDT to humans, what does it actually say in various real-life situations like voting or contributing to x-risk reduction?

8. Does Aumann Agreement apply to humans, and if so how?

9. Meta-philosophy. It's hard to be rational if one doesn't know how to solve philosophical problems related to rationality.

10. It's not clear how selfishness works in UDT, which might be a problem if that's the right decision theory for humans.

11. Bargaining, politics, building alliances, fair division: we still don't know how to apply game theory to a lot of messy real-world problems, especially those involving more than a few people.

12. Reality fluid vs. caring measure. Subjective anticipation. Anthropics in general.

13. What is the nature of rationality, and more generally normativity?

14. What is the right way to handle logical uncertainty, and how does that interact with decision theory, bargaining, and other problems?

Comparing the rate of problems opened vs problems closed, we have so far to go....

answer by Thrasymachus · 2019-01-13T15:44:36.207Z · score: 18 (9 votes)

There seem to be some foundational questions for the 'Rationality project' which (reprising my role as querulous critic) are oddly neglected in the 5-10 year history of the rationalist community: conspicuously, I find the best insight into these questions comes from psychology academia.

Is rationality best thought of as a single construct?

It roughly makes sense to talk of 'intelligence' or 'physical fitness' because performance in sub-components positively correlate: although it is hard to say which of an elite ultramarathoner, Judoka, or shotputter is fittest, I can confidently say all of them are fitter than I, and I am fitter than someone who is bedbound.

Is the same true of rationality? If it were the case that performance on tests of (say) calibration, sunk cost fallacy, and anchoring were all independent, then this would suggest 'rationality' is a circle our natural language draws around a grab-bag of skills or practices. The term could therefore mislead us into thinking it is a unified skill which we can 'generally' improve, and our efforts are better addressed at a finer level of granularity.

I think this is plausibly the case (or at least closer to the truth). The main evidence I have in mind is Stanovich's CART, whereby tests on individual sub-components we'd mark as fairly 'pure rationality' (e.g. base-rate neglect, framing, overconfidence - other parts of the CART look very IQ-testy like syllogistic reasoning, on which more later) have only weak correlations with one another (e.g. 0.2 ish).
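
To make that contrast concrete, here is a minimal sketch (assuming equally inter-correlated, unit-variance subtests; the 0.2 and 0.7 figures come from the surrounding discussion) of how much total variance a single "general factor" would explain in each case:

```python
import numpy as np

def general_factor_share(k: int, r: float) -> float:
    """Fraction of total variance explained by the first principal
    component of k unit-variance tests that all inter-correlate at r."""
    cov = np.full((k, k), r)    # equicorrelation matrix: 1s on the
    np.fill_diagonal(cov, 1.0)  # diagonal, r everywhere else
    eigenvalues = np.linalg.eigvalsh(cov)
    return eigenvalues.max() / eigenvalues.sum()

# Ten subtests with CART-like weak correlations vs IQ-like strong ones:
print(general_factor_share(10, 0.2))  # ~0.28: a weak "general factor"
print(general_factor_share(10, 0.7))  # ~0.73: a strong one
```

On these assumptions, a grab-bag (r of about 0.2) has a general factor explaining barely over a quarter of the variance, which is the quantitative shape of the "circle drawn around a grab-bag" worry.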

Is rationality a skill, or a trait?

Perhaps key is that rationality (general sense) is something you can get stronger at or 'level up' in. Yet there is a facially plausible story that rationality (especially so-called 'epistemic' rationality) is something more like IQ: essentially a trait where training can at best enhance performance on sub-components yet not transfer back to the broader construct. Briefly:

  • Overall measures of rationality (principally Stanovich's CART) correlate about 0.7 with IQ - not much worse than IQ test subtests correlate with one another or g.
  • Infamous challenges in transfer. People whose job relies on a particular 'rationality skill' (e.g. gamblers and calibration) show greater performance in this area but not, as I recall, transfer improvements to other areas. This improved performance is often not only isolated but also context-dependent: people may learn to avoid a particular cognitive bias in their professional lives, but remain generally susceptible to it otherwise.
  • The general dearth of well-evidenced successes from training. (cf. the old TAM panel on this topic, where most were autumnal).
  • For superforecasters, the GJP found some boost from training, but (as I understand it) the majority of their performance is attributed to selection, grouping, and aggregation (on that last step, see the sketch just below).
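
On aggregation, one technique reported in the GJP literature is "extremizing": average the individual forecasts, then push the average away from 0.5. A minimal sketch; the exponent here is illustrative rather than the GJP's fitted value:

```python
def extremized_mean(probabilities: list[float], a: float = 2.5) -> float:
    """Average probability forecasts, then extremize: push the mean away
    from 0.5, reflecting that agreement among partly independent
    forecasters is stronger evidence than any single forecast."""
    p = sum(probabilities) / len(probabilities)
    return p**a / (p**a + (1 - p)**a)

# Five forecasters lean the same way; the aggregate leans harder.
print(extremized_mean([0.65, 0.7, 0.6, 0.75, 0.7]))  # ~0.87 vs raw mean 0.68
```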

It wouldn't necessarily be 'game over' for the 'Rationality project' even if this turns out to be the true story. Even if it is the case that 'drilling vocab' doesn't really improve my g, I might value a larger vocabulary for its own sake. In a similar way, even if there's no transfer, some rationality skills might prove generally useful (and 'improvable'), such that drilling them is worthwhile on their own terms.

The superforecasting point can be argued the other way: that training can still get modest increases in performance on a composite test of epistemic rationality from people already exhibiting elite performance. But it does seem crucial to get a general sense of how well (and how broadly) training can be expected to work: else embarking on a program to 'improve rationality' may end up as ill-starred as the 'brain-training' games/apps fad of a few years ago.

answer by Elo · 2019-01-13T20:03:05.732Z · score: 10 (4 votes)

The problem of interfaces between cultures.

Humans live in different cultures. A simple version of this is in how cultures greet each other: the Italian double kiss, the ultra-Orthodox Jewish non-touch, the hippie hug, the handshake of various cultures, the Japanese bow/nod, and many more. It's possible to gravely offend a different culture with the way you do introductions.

Now think about the same potential for offence, but across all of conversational culture.

I have the open question of how to successfully interface with other cultures.

answer by Wei_Dai · 2019-01-14T20:22:58.561Z · score: 6 (3 votes)

One more, because one of my posts presented two open problems, and I only listed one of them above:

15. Our current theoretical foundations for rationality all assume a fully specified utility function (or the equivalent), or at least a probability distribution on utility functions (to express moral/value uncertainty). But to the extent that humans can be considered to have a utility function at all, it may best be viewed as a partial function that returns "unknown" for most of the input domain. Our current decision theories can't handle this, because they would end up trying to add "unknown" to a numerical value during expected utility computation. Forcing humans to come up with a utility function, or even a probability distribution on utility functions, in order to use decision theory seems highly unsafe, so we need an alternative.
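
To make the structural problem concrete, here is a toy sketch (the outcomes, utilities, and function names are all illustrative, not a proposal) of how a partial utility function poisons a standard expected-utility computation:

```python
from typing import Optional

def partial_utility(outcome: str) -> Optional[float]:
    """A partial utility function: numeric for a few familiar outcomes,
    "unknown" (None) for most of the input domain."""
    known = {"status quo": 0.0, "mild improvement": 1.0}
    return known.get(outcome)

def expected_utility(lottery: dict) -> Optional[float]:
    """The standard expected-utility sum, except it must give up the
    moment any outcome with nonzero probability has unknown utility."""
    total = 0.0
    for outcome, prob in lottery.items():
        u = partial_utility(outcome)
        if u is None:
            if prob > 0:
                return None  # "unknown" propagates; the sum is undefined
            continue
        total += prob * u
    return total

print(expected_utility({"status quo": 0.5, "mild improvement": 0.5}))        # 0.5
print(expected_utility({"status quo": 0.5, "weird posthuman future": 0.5}))  # None
```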

answer by G Gordon Worley III · 2019-01-14T21:13:58.875Z · score: 5 (4 votes)

To me the biggest open problem is how to make existing wisdom more palatable to people who are drawn to the rationalist community. What I have in mind as an expression of this problem is the tension between the post/metarationalists and the, I don't know, hard core of rationalists. I don't think the two are in conflict: the former are trying to bring in things from outside the traditional sources historically liked by rationalists; the latter see themselves as defending rationality from being polluted by antirationalist stuff; and both are trying to make rationality better (the former via adding, the latter via protecting and refining). The result is conflict even though I think the missions are not in conflict, so it seems an open problem is figuring out how to address that conflict.

answer by Chris_Leong · 2019-01-14T13:46:39.541Z · score: 5 (6 votes)

Group rationality is a big one. It wouldn't surprise me if rationalists are less good on average at coordinating than other groups, because rationalists tend to be more individualistic and have their own opinions of what needs to be done. As an example, how long did it take for us to produce a new LW forum, despite half of the people here being programmers? And rationality still doesn't have its own version of CEA.

answer by norswap · 2019-01-22T14:11:28.439Z · score: 3 (2 votes)

For applied rationality, my 10% improvement problem: https://www.lesswrong.com/posts/Aq8QSD3wb2epxuzEC/the-10-improvement-problem

Basically, how do you notice small (10% or less) improvements in areas that are hard to quantify? This is important, because after reaping the low-hanging fruit, stacking those small improvements is how you get ahead.
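
One way to see why such improvements are hard to notice is a back-of-the-envelope power calculation (my framing, not the linked post's): against realistic day-to-day noise, a 10% shift takes many measurements to detect.

```python
from math import ceil

def samples_needed(effect: float, noise_sd: float,
                   z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Rough sample size for a one-sample, two-sided z-test (5% significance,
    80% power) to detect a mean shift of `effect` against noise `noise_sd`."""
    return ceil(((z_alpha + z_beta) * noise_sd / effect) ** 2)

# Illustrative numbers: daily output averaging 100 units, with a
# day-to-day standard deviation of 30. Detecting a 10% (10-unit) gain:
print(samples_needed(effect=10, noise_sd=30))  # 71 days of measurement
```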

answer by ChristianKl · 2019-01-13T09:02:16.168Z · score: 2 (8 votes)

Is there a way to integrate probability based forecasting into the daily life of the average person that's clearly beneficial for them?

I don't think we are yet at the point where I can clearly say that we are there. I think we would need new software to do this well.
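
Such software might start as small as this: a personal log of probability forecasts plus a calibration metric. A minimal sketch (the class and field names are mine, not an existing tool), scoring resolved predictions with the Brier score:

```python
from dataclasses import dataclass, field

@dataclass
class ForecastLog:
    """Record predictions, resolve them later, track calibration."""
    entries: list = field(default_factory=list)

    def predict(self, claim: str, probability: float) -> None:
        self.entries.append({"claim": claim, "p": probability, "outcome": None})

    def resolve(self, claim: str, happened: bool) -> None:
        for e in self.entries:
            if e["claim"] == claim:
                e["outcome"] = happened

    def brier_score(self) -> float:
        """Mean squared error of forecasts (lower is better; 0.25 is
        what always guessing 50% would score)."""
        resolved = [e for e in self.entries if e["outcome"] is not None]
        assert resolved, "no resolved forecasts yet"
        return sum((e["p"] - e["outcome"]) ** 2 for e in resolved) / len(resolved)

log = ForecastLog()
log.predict("I finish the report by Friday", 0.8)
log.resolve("I finish the report by Friday", False)
print(log.brier_score())  # 0.64; an overconfident miss is penalized hard
```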

answer by quanticle · 2019-01-13T07:59:35.217Z · score: 2 (12 votes)

How about: "What is rationality?" and "Will rationality actually help you if you're not trying to design an AI?"

Don't get me wrong. I really like LessWrong. I've been fairly involved in the Seattle rationality community. Yet, all the same, I can't help but think that actual rationality hasn't really helped me all that much in my everyday life. I can point to very few things where I've used a Rationality Technique to make a decision, and none of those decisions were especially high-impact.

In my life, rationality has been a hobby. If I weren't reading the sequences, I'd be arguing about geopolitics, or playing board games. So, to me, the most open question in rationality is, "Why should one bother? What special claim does rationality have over my time and attention that, say, Starcraft does not?"

answer by Elo · 2019-01-13T19:56:01.828Z · score: -1 (5 votes)

One open problem:

The problem of communication across agents, and generally what I call "miscommunication".

46 comments

Comments sorted by top scores.

comment by Raemon · 2019-01-14T00:52:13.167Z · score: 32 (7 votes) · LW · GW

I'm still interested in general answers from most people, but did want to clarify:

I'm particularly interested in answers from people who have made a serious study of rationality, and invested time into trying to push the overall field forward. Whether or not you've succeeded or failed, I expect people who've put, say, at least 100 hours into the project to have a clearer sense of what questions are hard and important. (Although I do think "100 hours spent trying to learn rationality yourself" counts reasonably)

My sense is that many answers so far come more from a place of sitting on the sidelines, or of having waded in a bit, found rationality not obviously helpful, and now sort of waiting for someone to clarify whether there's a there there. Which is quite reasonable, but not what I'm hoping for here.

Background thoughts

In physics, I don't consider a major source of importance to be "how valuable is this to the average person?" – it's not physics' job to be valuable to the average person; its job is to add fundamental insights about how the universe works to the sum of human knowledge.

Rationality is more like the sort of thing that could be valuable to the average person – it would be extremely disappointing (and at least somewhat surprising) if you couldn't distill lessons on how to think / have beliefs / make choices into something beneficial to most people.

Rationality is maybe better compared with "math". The average person needs to know how to add and multiply. They'll likely benefit from thinking probabilistically in some cases. They probably won't actually need much calculus unless they're going into specific fields, and they won't need to care at all about the Riemann Hypothesis.

Everyone should get exposed to at least some math and be given the opportunity to learn it if they are well suited and curious about it. But I think schools should approach that more from a standpoint of "help people explore" rather than "definitely learn these particular things."

That's roughly how I feel about rationality. I think there are a few key concepts, like "expected value" and "remember base rates", that are useful when making significant decisions and should probably get taught in high school. Much of the rest is better suited for people who either find it fun, or plan to specialize in it.
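
For concreteness, "remember base rates" refers to the kind of computation below: a standard Bayes-rule example, with illustrative numbers.

```python
def posterior(base_rate: float, sensitivity: float, false_positive: float) -> float:
    """P(condition | positive test) via Bayes' rule."""
    p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
    return base_rate * sensitivity / p_positive

# A test that is 90% sensitive with a 5% false-positive rate sounds strong,
# but at a 1% base rate a positive result means only ~15% odds:
print(posterior(base_rate=0.01, sensitivity=0.9, false_positive=0.05))  # ~0.154
```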

Professional Rationality

I think "How to Measure Anything" is a useful book to get a sense of how professional rationality might actually look: it was written for people in business (or otherwise embarking on novel projects), where you actually have to take significant gambles that require reasonable models of how the world works. A given company doesn't need everyone to be excellent at calibration, forecasting, model building and expected value (just as they don't need everyone to be a good accountant or graphic designer or CEO). But they do need at least some people who are good at that (and they need other people to listen to them, and a CEO or hiring specialist who can identify such people).

It's a reasonable career path, for some.

[I don't mean How to Measure Anything as the definitive explanation of what "professional rationality" looks like, just a take on it that seems clearly reasonable enough to act as a positive existence proof]

So, what open questions are most useful?

A few possible clusters of questions/problems:

  • What are specific obstacles to distilling rationality concepts that seem like they should be valuable to the average person, but we don't know how to teach them yet?
  • What are specific obstacles, problems, or unknown solutions to problems that seem like they should be relevant to a "rationality specialist", who focuses on making decisions in unknown domains with scant data?
  • What are genuinely confusing problems at the edge of the current rationality field – perhaps far away from the point where even specialists can implement them yet, but where we seem confused in a basic way about how the mind works, or how probability or decision theory work?

comment by ChristianKl · 2019-01-14T09:41:41.338Z · score: 6 (8 votes) · LW · GW
My sense is that many answers so far come more from a place of sitting on the sidelines or having waded in a bit, found rationality not obviously helpful in the first place.

That seems strange to me after going through the list of people who answered. All have >1000 karma on LessWrong. Most (all except Elo) have been on LessWrong for more than 6 years.

It would surprise me if any of these people have spent less than 100 hours learning/thinking about how to make rationality work.

I myself spent years thinking about how to make calibration work. I tested multiple systems created by LessWrongers. That engagement with the topic led me to an answer for how I think medicine could be revolutionized [LW · GW]. But I'm still lacking a way to make it actually practical for my daily life.

I think "How to Measure Anything" is a useful book to get a sense of how professional rationality might actually look [...] But they do need at least some people who are good at that (and they need other people to listen to them, and a CEO or hiring specialist who can identify such people).

YCombinator tells their startups to talk to their users and do things that don't scale, instead of hiring a professional rationalist to help them navigate uncertainty. To me that doesn't look like it's changing.

It's a bit ridiculous to treat the problem of what rationality actually is as solved, and to hold convictions that we are going to have rationality specialists.

comment by Thrasymachus · 2019-01-14T17:53:39.533Z · score: 17 (5 votes) · LW · GW

FWIW: I'm not sure I've spent >100 hours on a 'serious study of rationality'. Although I have been around a while, I am at best sporadically active. If I understand the karma mechanics, the great majority of my ~1400 karma comes from a single highly upvoted top level post I wrote a few years ago. I have pretty sceptical reflexes re. rationality, the rationality community, etc., and this is reflected in that (I think) the modal post/comment I make is critical.

On the topic 'under the hood' here:

I sympathise with the desire to ask conditional questions which don't inevitably widen into broader foundational issues. "Is moral nihilism true?" doesn't seem the right sort of 'open question' for "What are the open questions in Utilitarianism?". It seems better for these topics to be segregated, no matter the plausibility (or not) of the foundational 'presumption' ("Is homeopathy/climate change even real?" also seems inapposite for 'open questions in homeopathy/anthropogenic climate change'). (cf. 'This isn't a 101-space').

That being said, I think superforecasting/GJP and RQ/CART etc. are at least highly relevant to the 'Project' (even if this seems to be taken very broadly to normative issues in general - if Wei_Dai's list of topics are considered elements of the wider Project, then I definitely have spent more than 100 hours in the area). For a question cluster around "How can one best make decisions on unknown domains with scant data", the superforecasting literature seems some of the lowest hanging fruit to pluck.

Yet community competence in these areas has apparently declined. If you google 'lesswrong GJP' (or similar terms) you find posts on them, but these posts are many years old. There has been interesting work done in the interim: here's something on whether the skills generalise, and something else on a training technique that not only demonstrably improves forecasting performance, but also has a handy mnemonic one could 'try at home'. (The same applies to RQ: Sotala wrote a cool sequence [LW · GW] on Stanovich's 'What intelligence tests miss', but this is 9 years old. Stanovich has written three books since expressly on rationality, none of which have been discussed here as best as I can tell.)

I don't understand, if there are multiple people who have spent >100 hours on the Project (broadly construed), why I don't see there being a 'lessons from the superforecasting literature' write-up here (I am slowly working on one myself).

Maybe I just missed the memo and many people have kept abreast of this work (ditto other 'relevant-looking work in academia'), and it is essentially tacit knowledge for people working on the Project, but they are focusing their efforts to develop other areas. If so, a shame this is not being put into common knowledge, and I remain mystified as to why the apparent neglect of these topics versus others: it is a lot easier to be sceptical of 'is there anything there?' for (say) circling, introspection/meditation/enlightenment, Kegan levels, or Focusing than for the GJP, and doubt in the foundation should substantially discount the value of further elaborations on a potentially unedifying edifice.

[Minor] I think the first para is meant to be block-quoted?

comment by habryka (habryka4) · 2019-01-14T18:07:00.409Z · score: 8 (4 votes) · LW · GW

I know of a lot of people who continued studying and being interested in the forecasting perspective. I think the primary reason there has been less writing from that is just that LessWrong was dead for a while, and so we've seen fewer write-ups in general. (I also think there were some secondary factors that contributed, but the absence of a publishing platform was the biggest.)

comment by ESRogs · 2019-01-14T19:58:14.963Z · score: 10 (4 votes) · LW · GW

Also superforecasting and GJP are no longer new. Seems not at all surprising that most of the words written about them would be from when they were.

comment by ChristianKl · 2019-01-14T21:35:37.167Z · score: 4 (3 votes) · LW · GW

Given that the OP counts the Good Judgment project as part of the movement I think that certainly qualifies.

It's my understanding that while the Good Judgment project made progress on the question of how to think about the right probability, we still lack ways for people to integrate the making of regular forecasts into their personal and professional lives.

comment by Elo · 2019-01-14T10:37:22.747Z · score: 2 (1 votes) · LW · GW

I've been around that long. Or more. I was lurking before I commented.

In my efforts to apply rationality I ended up in post rationality. And ever upwards.

comment by ESRogs · 2019-01-14T05:49:53.043Z · score: 3 (2 votes) · LW · GW
What are genuinely confusing problems at the edge of the current rationality field – perhaps far away from the point where even specialists can implement them yet, but where we seem confused in a basic way about how the mind works, or how probability or decision theory work.

For one example of this, see Abram's most recent post [LW · GW], which begins: "So... what's the deal with counterfactuals?" :-)

comment by JacobKopczynski · 2019-01-14T02:21:49.402Z · score: 17 (7 votes) · LW · GW

In my opinion, the Hamming problem of group rationality, and possibly the Hamming problem of rationality generally, is how to preserve epistemic rationality under the inherent political pressures that existing in a group produces.

It is the Hamming problem because if it isn't solved, everything else, including all the progress made on individual rationality, is doomed to become utterly worthless. We are not designed to be rational, and this is most harmful in group contexts, where the elephants in our brains take the most control from the riders and we have the least idea of what goals we are actually working towards.

I do not currently have any good models on how to attack it. The one person I thought might be making some progress on it was Brent, but he's now been justly exiled, and I have the sense that his pre-exile intellectual output is now subject to a high degree of scrutiny. This is understandable, but since I think his explicit models were superior to any anyone else has publicly shared, it's a significant setback.

Since that exile happened, I've attempted to find prior art elsewhere to build on, but the best prospect so far (C. Fred Alford, Group Psychology and Political Theory) turned out to be Freudian garbage.

comment by Chris_Leong · 2019-01-14T14:10:37.753Z · score: 23 (7 votes) · LW · GW

What do you consider to be his core insights? Would you consider writing a post on this?

comment by JacobKopczynski · 2019-02-02T21:48:07.159Z · score: 1 (1 votes) · LW · GW

The central descriptive insight I took from him is that most things we do are status-motivated, even when we think we have a clear picture of what our motivations are and status is not included in that picture. Our picture of what the truth looks like is fundamentally warped by status in ways that are very hard to fully adjust for.

Relatedly, I think the moderation policies of new LessWrong double down on this status-warping, and so I am reluctant to put anything of significant value on this site.

comment by jmh · 2019-01-14T14:22:57.479Z · score: 2 (2 votes) · LW · GW

My comment must necessarily be something of an aside, since I don't know the Hamming problem. However, your statement "We are not designed to be rational" jumped out at me.

Is that to say something along the lines of: rationality is not one of the characteristics that provided an evolutionary advantage for us? Or would it mean rationality was a mutation of some "design" (and possibly one that was a good survival trait)?

Or is the correct understanding there something entirely different?

comment by JacobKopczynski · 2019-02-02T21:52:12.510Z · score: 1 (1 votes) · LW · GW

Hamming Question: "What is the most important problem in your field?"

Hamming Problem: The answer to that question.

Rationality did not boost inclusive fitness in the environment of evolutionary adaptedness and still doesn't.

comment by ESRogs · 2019-01-13T08:12:29.368Z · score: 14 (7 votes) · LW · GW

I don't have a crisp question yet, but one general area I'd be interested in understanding better is the interplay between inside views and outside views.

In some cases, having some outside view probability in mind can guide your search (e.g. "No, that can't be right because then such and such, and I have a prior that such and such is unlikely."), while in other cases, thinking too much about outside views seems like it can distract you from exploring underlying models (e.g. when people talk about AI timelines in a way that just seems to be about parroting and aggregating other people's timelines).

A related idea is the distinction between impressions and beliefs. In this view impressions are roughly inside views (what makes sense to you given the models and intuitions you have), while beliefs are what you'd bet on (taking into account the opinions of others).

I have some intuitions and heuristics about when it's helpful to focus on impressions vs beliefs. But I'd like to have better explicit models here, and I suspect there might be some interesting open questions in this area.
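
One toy way to make the impressions/beliefs split operational (my construction, not an established method): keep the inside view as a probability, and form the bettable belief by pooling it with others' views in log-odds space.

```python
from math import exp, log

def log_odds(p: float) -> float:
    return log(p / (1 - p))

def belief(impression: float, outside_views: list[float],
           self_weight: float = 0.5) -> float:
    """Pool your inside-view probability with the average of others'
    views in log-odds space. The weighting is a free parameter, not a
    recommendation."""
    others = sum(log_odds(p) for p in outside_views) / len(outside_views)
    combined = self_weight * log_odds(impression) + (1 - self_weight) * others
    return 1 / (1 + exp(-combined))

# My impression says 20%; three people I respect say about 60%:
print(belief(0.2, [0.6, 0.6, 0.6]))  # ~0.38, a belief between the two
```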

comment by Elo · 2019-01-13T19:57:03.828Z · score: 2 (3 votes) · LW · GW

Integral theory quadrants give a perspective framework for communicating this problem.

comment by ESRogs · 2019-01-13T22:26:45.086Z · score: 2 (1 votes) · LW · GW

Interesting! Would you be willing to give a brief summary?

comment by Elo · 2019-01-13T23:14:53.251Z · score: 3 (3 votes) · LW · GW

https://integrallife.com/four-quadrants/

I tentatively share this link. Integral gives a whole deeper meaning to interiors: not just "my side of the argument" but the full meditation, mysticism, and emotional depths of the subjective interior experience as it relates to the inside view. It's a larger framework, but it's a good start for recognising the problem of the interior/exterior split.

comment by ofer · 2019-01-13T10:59:06.655Z · score: 2 (2 votes) · LW · GW

Maybe: "What are the most effective interventions for making better predictions/decisions?"

It seems worthwhile to create such a list, ranked according to a single metric as measured in randomized experiments.

(if there is already such a thing please let me know)

comment by jmh · 2019-01-14T14:14:43.532Z · score: 1 (1 votes) · LW · GW

I had a couple of thoughts to follow, but in thinking on them the issue of separability seemed to come up. If we think of rationality as either a mental skill to develop or some type of trait that can be developed, related to our decision-making and therefore our actions, can that truly be separated from all the other things that make up being human?

Are emotions and our emotional responses part of rationality, or the enemy of rationality? (I think many people would consider emotional and rational to be opposites; not sure about the more sophisticated view found here.) Are there certain complementary aspects/elements?

Intuitive rationality (maybe something like Hayek's idea of inarticulate knowledge)? Where does that fit?

Last is the issue of subjective values and recognizing others' rationality. What limits would that place on observing rationality in others, as well as assessing our own rationality?

comment by ChristianKl · 2019-01-14T14:46:00.887Z · score: 2 (1 votes) · LW · GW

Are emotions and our emotional responses part of rationality?

Our community uses the word rationality in a way where they are included. See http://www.rationality.org/resources/updates/2013/3-ways-cfar-has-changed-my-perspective-on-rationality

comment by Blam · 2019-01-14T08:46:27.957Z · score: -1 (4 votes) · LW · GW

Rationality goes wrong when it's used to judge people rather than beliefs. Belief-judging rationality is logical contemplation of ideas, to try and remove falsehoods and inconsistencies. Person-judging rationality is, in practice, emotionally based moral judgement: usually judging people as either rational or irrational so you can dismiss them. There's no such thing as an irrational person, only irrational beliefs; it's an important distinction. Any good rationalist should use two options: you believe you understand, or you believe you don't understand. If you perceive a person as being irrational, what you are perceiving is your own projected ignorance of that person's views; they make perfect sense to that person even if you do not understand them, which means they believe or know something that you don't. Just remember: rationality only applies to ideas, not people.

comment by Pattern · 2019-01-15T02:28:58.116Z · score: 0 (0 votes) · LW · GW

While disagreements are often matters of miscommunication, mistakes are possible, especially for those who are not all knowing and perfect in their calculations.