What are the open problems in Human Rationality?

post by Raemon · 2019-01-13T04:46:38.581Z · LW · GW · 27 comments

This is a question post.


LessWrong has been around for 10+ years, CFAR's been at work for around 6, and I think there have been at least a few other groups or individuals working on what I think of as the "Human Rationality Project."

I'm interested in hearing, especially from people who have invested significant time in attempting to push the rationality project forward, what they consider the major open questions facing the field. (More details in this comment [LW(p) · GW(p)])

"What is the Rationality Project?"

I'd prefer to leave "Rationality Project" somewhat vague, but I'd roughly summarize it as "the study of how to have optimal beliefs and make optimal decisions while running on human wetware."

If you have your own sense of what this means or should mean, feel free to use that in your answer. But some bits of context for a few possible avenues you could interpret this through:

Early LessWrong focused a lot on cognitive biases and how to account for them, as well as Bayesian epistemology.

CFAR (to my knowledge, roughly) started from a similar vantage point and eventually started moving in the direction of "how do you figure out what you actually want and bring yourself into 'internal alignment' when you want multiple things, and/or different parts of you want different things and are working at cross purposes?" It also looked a lot into Double Crux, as a tool to help people disagree more productively.

CFAR and Leverage both ended up exploring introspection as a tool.

Forecasting as a field has matured a bit. We have the Good Judgment project.

Behavioral Economics has begun to develop as a field.

I recently read "How to Measure Anything", and was somewhat struck at how it tackled prediction, calibration and determining key uncertainties in a fairly rigorous, professionalized fashion. I could imagine an alternate history of LessWrong that had emphasized this more strongly.

With this vague constellation of organizations and research areas, gesturing at an overall field...

...what are the big open questions the field of Human Rationality needs to answer, in order to help people have more accurate beliefs and/or make better decisions?

Answers

answer by Wei Dai (Wei_Dai) · 2019-01-14T07:55:27.159Z · LW(p) · GW(p)

I went through all my LW posts and gathered the ones that either presented or reminded me of some problem in human rationality.

1. [LW · GW] As we become more rational, how do we translate/transfer our old values embodied in the less rational subsystems?

2. [LW · GW] How to figure out one's comparative advantage?

3. [LW · GW] Meta-ethics. It's hard to be rational if you don't know where your values are supposed to come from.

4. [LW · GW] Normative ethics. How much weight to put on altruism? Population ethics. Hedonic vs preference utilitarianism. Moral circle. Etc. It's hard to be rational if you don't know what your values are.

5. [LW · GW] Which mental subsystem has one's real values, or how to weigh them.

6. [LW · GW] How to handle moral uncertainty? For example, should we discount total utilitarianism because we would have made a deal for total utilitarianism to give up control in this universe?

7. [LW · GW] If we apply UDT to humans, what does it actually say in various real-life situations like voting or contributing to x-risk reduction?

8. [LW · GW] Does Aumann Agreement apply to humans, and if so how?

9. [LW · GW] Meta-philosophy. It's hard to be rational if one doesn't know how to solve philosophical problems related to rationality.

10. [LW · GW] It's not clear how selfishness works in UDT, which might be a problem if that's the right decision theory for humans.

11. [LW · GW] Bargaining, politics, building alliances, fair division, we still don't know how to apply game theory to a lot of messy real-world problems, especially those involving more than a few people.

12. [LW · GW] Reality fluid vs. caring measure. Subjective anticipation. Anthropics in general.

13. [LW · GW] What is the nature of rationality, and more generally normativity?

14. [LW · GW] What is the right way to handle logical uncertainty, and how does that interact with decision theory, bargaining, and other problems?

Comparing the rate of problems opened vs problems closed, we have so far to go....

comment by Raemon · 2019-01-14T08:13:20.974Z · LW(p) · GW(p)

Thank you, this is great!

comment by Stuart_Armstrong · 2019-05-28T11:47:59.068Z · LW(p) · GW(p)

Comparing the rate of problems opened vs problems closed, we have so far to go....

That's always the case when investigating a new field, as you clarify the issues and get more specialised. It doesn't mean necessarily that the majority of work remains ahead of us (though it doesn't mean the opposite, either).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-28T12:15:01.493Z · LW(p) · GW(p)

I'm confused what point you're making. Isn't P(the majority of work remains ahead of us | investigating a new field) quite high?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-05-28T14:41:34.612Z · LW(p) · GW(p)

It's hard to tell whether we're at the beginning of a burgeoning field, or about to hit diminishing returns.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-28T17:54:33.337Z · LW(p) · GW(p)

Oh, I see. My "we have so far to go" wasn't meant to express optimism that we’re at the beginning of a burgeoning field (but I can see now how it could easily be interpreted that way). I meant more that when we've eventually got rationality fully figured out (i.e., some time after the Singularity), the amount of knowledge we have about it today will probably seem tiny in comparison.

Unless you have doubts that even superintelligence can answer my questions?

comment by ChristianKl · 2019-01-14T09:12:54.558Z · LW(p) · GW(p)

Given the way the answer feature goes, I think it would make more sense to have every single point as a separate answer to allow people to vote on them.

Replies from: Raemon, Pattern
comment by Raemon · 2019-01-15T08:46:56.408Z · LW(p) · GW(p)

We have some vague plans to build something that’d make that process cleaner. I’m not sure if we’ll get to it soon, but meanwhile don’t think it’s that urgent to split the answers up.

comment by Pattern · 2019-01-15T02:18:11.230Z · LW(p) · GW(p)

That'd probably be messy.

answer by Scott Alexander · 2019-05-26T05:50:59.428Z · LW(p) · GW(p)

I've actually been thinking about this for a while, here's a very rough draft outline of what I've got:

1. Which questions are important?
a. How should we practice cause prioritization in effective altruism?
b. How should we think about long shots at very large effects? (Pascal's Mugging)
c. How much should we be focusing on the global level, vs. our own happiness and ability to lead a normal life?
d. How do we identify gaps in our knowledge that might be wrong and need further evaluation?
e. How do we identify unexamined areas of our lives or decisions we make automatically? Should we examine those areas and make those decisions less automatically?

2. How do we determine whether we are operating in the right paradigm?
a. What are paradigms? Are they useful to think about?
b. If we were using the wrong paradigm, how would we know? How could we change it?
c. How do we learn new paradigms well enough to judge them at all?

3. How do we determine what the possible hypotheses are?
a. Are we unreasonably bad at generating new hypotheses once we have one, due to confirmation bias? How do we solve this?
b. Are there surprising techniques that can help us with this problem?

4. Which of the possible hypotheses is true?
a. How do we make accurate predictions?
b. How do we calibrate our probabilities?

5. How do we balance our explicit reasoning vs. that of other people and society?
a. Inside vs. outside view?
b. How do we identify experts? How much should we trust them?
c. Does cultural evolution produce accurate beliefs? How willing should we be to break tradition?
d. How much should the replication crisis affect our trust in science?
e. How well does good judgment travel across domains?

6. How do we go from accurate beliefs to accurate aliefs and effective action?
a. Akrasia and procrastination
b. Do different parts of the brain have different agendas? How can they all get on the same page?

7. How do we create an internal environment conducive to getting these questions right?
a. Do strong emotions help or hinder rationality?
b. Do meditation and related practices help or hinder rationality?
c. Do psychedelic drugs help or hinder rationality?

8. How do we create a community conducive to getting these questions right?
a. Is having "a rationalist community" useful?
b. How do strong communities arise and maintain themselves?
c. Should a community be organically grown or carefully structured?
d. How do we balance conflicting desires for an accepting community where everyone can bring their friends and have fun, vs. high-standards devotion to a serious mission?
e. How do we prevent a rationalist community from becoming insular / echo chambery / cultish?
f. ...without also admitting every homeopath who wants to convince us that "homeopathy is rational"?
g. How do we balance the need for a strong community hub with the need for strong communities on the rim?
h. Can these problems be solved by having many overlapping communities with slightly different standards?

9. How does this community maintain its existence in the face of outside pressure?

answer by LoganStrohl (BrienneYudkowsky) · 2019-12-26T20:20:08.460Z · LW(p) · GW(p)

I feel daunted by the question, "what are the big open questions the field of Human Rationality needs to answer, in order to help people have more accurate beliefs and/or make better decisions?", but I also think that it's the question at the heart of my research interests. So rather than trying to answer the original question directly, I'm going to share a sampling of my current research interests.

Over in the AMA, I wrote [LW(p) · GW(p)], "My way of investigating always pushes into what I can’t yet see or grasp or articulate. Thus, it has the unfortunate property of being quite difficult to communicate about directly until the research program is mostly complete. So I can say a lot about my earlier work on noticing, but talking coherently about what exactly CFAR’s been paying me for lately is much harder." This will not be a clean bulleted list that doubles as a map of rationality, sorry. It'll be more like sampling of snapshots from the parts of my mind that are trying to build rationality. Here's the collage, in no particular order:

There are things you’re subject to, and things you can take as object. For example, I used to do things like cry when an ambulance went by with its siren on, or say “ouch!” when I put a plate away and it went “clink”, yet I wasn’t aware that I was sensitive to sounds. If asked, “Are you sensitive to sounds?” I’d have said “No.” I did avoid certain sounds in local hill-climby ways, like making music playlists with lots of low strings but no trumpets, or not hanging out with people who speak loudly. But I didn’t “know” I was doing these things; I was *subject* to my sound sensitivity. I could not take it as *object*, so I couldn’t deliberately design my daily life to account for it. Now that I can take my sound sensitivity (and many related things) as object, I’m in a much more powerful position. And it *terrifies* me that I went a quarter of a century without recognizing these basic facts of my experience. It terrifies me even more when I imagine an AI researcher being subject to some similarly crucial thing about how agents work. I would very much like to know what other basic facts of my experience I remain unaware of. I would like to know how to find out what I am currently unable to take as object.

On a related note, you know how an awful lot of people in our community are autistic? It seems to me that our community is subject to this fact. (It also seems to me that many individual people in our community remain subject to most of their autistic patterns, and that this is more like the rule than the exception.) I would like to know what’s going on here, and whether some other state of affairs would be preferable, and how to instantiate that state of affairs.

Why do so many people seem to wait around for other people to teach them things, even when they seem to be trying very hard to learn? Do they think they need permission? Do they think they need authority? What are they protecting? Am I inadvertently destroying it when I try to figure things out for myself? What stops people from interrogating the world on their own terms?

I get an awful lot of use out of asking myself questions. I think I’m unusually good at doing this, and that I know a few other people with this property. I suspect that the really useful thing isn’t so much the questions, as whatever I’m doing with my mind most of the time that allows me to ask good questions. I’d like to know what other people are doing with their minds that prevents this, and whether there’s a different thing to do that’s better.

What is “quality”?

Suppose religion is symbiotic, and not just parasitic. What exactly is it doing for people? How is it doing those things? Are there specific problems it’s solving? What are the problems? How can we solve those problems without tolerating the damage religion causes?

[Some spoilers for bits of the premise of A Fire Upon The Deep and other stories in that sequence.] There’s this alien race in Vernor Vinge books called the Tines. A “person” of the Tines species looks at first like a pack of several animals. The singleton members that make up a pack use high-frequency sound, rather than chemical neurotransmitters, to think as one mind. The singleton members of a pack age, so when one of your singletons dies, you adopt a new singleton. Since singletons are all slightly different and sort of have their own personalities, part of personal health and hygiene for Tines involves managing these transitions wisely. If you do a good job — never letting several members die in quick succession, never adopting a singleton that can’t harmonize with the rest of you, taking on new singletons before the oldest ones lose the ability to communicate — then you’re effectively immortal. You just keep amassing new skills and perspectives and thought styles, without drifting too far from your original intentions. If you manage the transitions poorly, though — choosing recklessly, not understanding the patterns an old member has been contributing, participating in a war where several of your singletons may die at once — then your mind could easily become suddenly very different, or disorganized and chaotic, or outright insane, in a way you’ve lost the ability to recover from. I think about the Tines a lot when I experiment with new ways of thinking and feeling. I think much of rationality poses a similar danger to the one faced by the Tines. So I’d like to know what practices constitute personal health and hygiene for cognitive growth and development in humans.

What is original seeing? How does it work? When is it most important? When is it the wrong move? How can I become better at it? How can people who are worse at it than I am become better at it?

In another thread, Adam made a comment that I thought was fantastic. I typed to him, “That comment is fantastic!” As I did so, I noticed that I had an option about how to relate to the comment, and to Adam, when I felt a bid from somewhere in my mind to re-phrase as, “I really like that comment,” or, “I enjoyed reading your comment,” or “I’m excited and impressed by your comment.” That bid came from a place that shares a lot of values with Lesswrong-style rationalists, and 20th century science, and really with liberalism in general. It values objectivity, respect, independence, autonomy, and consent, among other things. It holds map-territory distinctions and keeps its distance from the world, in an attempt to see all things clearly. But I decided to stand behind my claim that “the comment is fantastic”. I did not “own my experience”, in this case, or highlight that my values are part of me rather than part of the world. I have a feeling that something really important is lost in the careful distance we keep all the time from the world and from each other. Something about the power to act, to affect each other in ways that create small-to-mid-sized superorganisms like teams and communities, something about tending our relationship to the world so that we don’t float off in bubbles of abstraction. Whatever that important thing is, I want to understand it. And I want to protect it, and to incorporate it into my patterns of thought, without losing all I gain from cold clarity and distance.

I would like to think more clearly, especially when it seems important to do so. There are a lot of things that might affect how clearly you think, some of which are discussed in the Sequences. For example, one common pattern of muddy thought is rationalization, so one way to increase your cognitive clarity is to stop completely ignoring the existence of rationalization. I’ve lately been interested in a category of clarity-increasing thingies that might be sensibly described as “the relationship between a cognitive process and its environment”. By “environment”, I meant to include several things:

  • The internal mental environment: the cognitive and emotional situation in which a thought pattern finds itself. Example: When part of my mind is trying to tally up how much money I spent in the past month, and local mental processes desperately want the answer to be “very little” for some reason, my clarity of thought while tallying might not be so great. I expect that well maintained internal mental environments — ones that promote clear thinking — tend to have properties like abundance, spaciousness, and groundedness.
  • The internal physical environment: the physiological state of a body. For example, hydration seems to play a shockingly important role in how well I maintain my internal mental environment while I think. If I’m trying to solve a math problem and have had nothing to drink for two hours, it’s likely I’m trying to work in a state of frustration and impatience. Similar things are true of sleep and exercise.
  • The external physical environment: the sensory info coming in from the outside world, and the feedback patterns created by external objects and perceptual processes. When I’ve been having a conversation in one room, and then I move to another room, it often feels as though I’ve left half my thoughts behind. I think this is because I’m making extensive use of the walls and couches and such in my computations. I claim that one’s relationship to the external environment can make more or less use of the environment’s supportive potential, and that environments can be arranged in ways that promote clarity of thought.
  • The social environment: people, especially frequently encountered ones. The social environment is basically just part of the external physical environment, but it’s such an unusual part that I think it ought to be singled out. First of all, it has powerful effects on the internal mental environment. The phrase “politics is the mind killer” means something like “if you want to design the social environment to maximize muddiness of thought, have I got a deal for you”. Secondly, other minds have the remarkable property of containing complex cognitive processes, which are themselves situated in every level of environment. If you’ve ever confided in a close, reasonable friend who had some distance from your own internal turmoil, you know what I’m getting at here. I’ve thought a lot lately about how to build a “healthy community” in which to situate my thoughts. A good way to think about what I’m trying to do is that I want to cultivate the properties of interpersonal interaction that lead to the highest quality, best maintained internal mental environments for all involved.

What is "groundedness"?

I built a loft bed recently. Not from scratch, just Ikea-style. When I was about halfway through the process, I realized that I’d put one of the panels on backward. I’d made the mistake toward the beginning, so there were already many pieces screwed into that panel, and no way to flip it around without taking the whole bed apart again. At that point, I had a few thoughts in quick succession:

  • I really don’t want to take the whole bed apart and put it back together again.
  • Maybe I could unscrew the pieces connected to that panel, then carefully balance all of them while I flip the panel around? (Something would probably break if I did that.)
  • You know what, maybe I don’t want a dumb loft bed anyway.

It so happens that in this particular case, I sighed, took the bed apart, carefully noted where each bit was supposed to go, flipped the panel around, and put it all back together again perfectly. But I’ve certainly been in similar situations where for some reason, I let one mistake lead to more mistakes. I rushed, broke things, lost pieces, hurt other people, or gave up. I’d like to know what circumstances obtain when I get this right, and what circumstances obtain when I don’t. Where can I get patience, groundedness, clarity, gumption, and care?

I’ve developed a taste for reading books that I hate. I like to try on the perspective of one author after another, authors with whom I think I have really fundamental disagreements about how the world works, how one ought to think, and whether yellow is really such a bad color after all. There’s a generalized version of “reading books you hate” that I might call “perceptual dexterity”, or I might call “the ground of creativity”, which is something like having a thousand prehensile eye-stalks in your mind, and I think prehensile eye-stalks are pretty cool. But I also think it’s generally a good idea to avoid reading books you hate, because your hatred of them is often trying to protect you from “your self and worldview falling apart”, or something. I’d like to know whether my self and worldview are falling apart, or whatever. And if not, I’d like to know whether I’m doing something to prevent it that other people could learn to do, and whether they’d thereby gain access to a whole lot more perspectives from which they could triangulate reality.

comment by Samyak · 2020-09-30T08:39:56.954Z · LW(p) · GW(p)

"This comment is fantastic!"

comment by PB (pb-1) · 2024-02-28T23:48:50.978Z · LW(p) · GW(p)

Thinking about one’s thinking is certainly interesting and rare, and the demonstration of such a capability in this comment is entertaining. The utility of such processes suggests that their value be ultimately directed toward benefitting other people in order for personal benefit to be derived.

For example, by sharing the very human and common difficulty with the loft bed, one thereby grounds a personal experience in the generality of human reflection, a category of general interest anchored by a particular experience.

answer by Thrasymachus · 2019-01-13T15:44:36.207Z · LW(p) · GW(p)

There seem to be some foundational questions for the 'Rationality project' which (reprising my role as querulous critic) are oddly neglected in the 5-10 year history of the rationalist community: conspicuously, I find the best insight into these questions comes from psychology academia.

Is rationality best thought of as a single construct?

It roughly makes sense to talk of 'intelligence' or 'physical fitness' because performance in sub-components positively correlate: although it is hard to say which of an elite ultramarathoner, Judoka, or shotputter is fittest, I can confidently say all of them are fitter than I, and I am fitter than someone who is bedbound.

Is the same true of rationality? If it were the case that performance on tests of (say) calibration, sunk cost fallacy, and anchoring were all independent, then this would suggest 'rationality' is a circle our natural language draws around a grab-bag of skills or practices. The term could therefore mislead us into thinking it is a unified skill which we can 'generally' improve, and our efforts are better addressed at a finer level of granularity.

I think this is plausibly the case (or at least closer to the truth). The main evidence I have in mind is Stanovich's CART, whereby tests on individual sub-components we'd mark as fairly 'pure rationality' (e.g. base-rate neglect, framing, overconfidence - other parts of the CART look very IQ-testy like syllogistic reasoning, on which more later) have only weak correlations with one another (e.g. 0.2 ish).

Is rationality a skill, or a trait?

Perhaps key is that rationality (general sense) is something you can get stronger at or 'level up' in. Yet there is a facially plausible story that rationality (especially so-called 'epistemic' rationality) is something more like IQ: essentially a trait where training can at best enhance performance on sub-components yet not transfer back to the broader construct. Briefly:

  • Overall measures of rationality (principally Stanovich's CART) correlate about 0.7 with IQ - not much worse than IQ test subtests correlate with one another or g.
  • Infamous challenges in transfer. People whose job relies on a particular 'rationality skill' (e.g. gamblers and calibration) show greater performance in this area but not, as I recall, transfer improvements to others. This improved performance is often not only isolated but also context dependent: people may learn to avoid a particular cognitive bias in their professional lives, but remain generally susceptible to it otherwise.
  • The general dearth of well-evidenced successes from training. (cf. the old TAM panel on this topic, where most were autumnal).
  • For superforecasters, the GJP finds it can get some boost from training, but (as I understand it) the majority of their performance is attributed to selection, grouping, and aggregation.

It wouldn't necessarily be 'game over' for the 'Rationality project' even if this turns out to be the true story. Even if it is the case that 'drilling vocab' doesn't really improve my g, I might value a larger vocabulary for its own sake. In a similar way, even if there's no transfer, some rationality skills might prove generally useful (and 'improvable'), such that drilling them is worthwhile on their own terms.

The superforecasting point can be argued the other way: that training can still get modest increases in performance in a composite test of epistemic rationality from people already exhibiting elite performance. But it does seem crucial to get a general sense of how well (and how broadly) training can be expected to work: else embarking on a program to 'improve rationality' may end up as ill-starred as the 'brain-training' games/apps fad a few years ago.

comment by Richard_Ngo (ricraz) · 2019-05-21T11:27:43.800Z · LW(p) · GW(p)

This point seems absolutely crucial; and I really appreciate the cited evidence.

comment by Rudi C (rudi-c) · 2019-11-28T22:53:43.331Z · LW(p) · GW(p)

My personal experience is that I had most of what enables me to be epistemically rational from childhood (so probably genetic), but that the exposure to behavioral economics, science and other rationality-adjacent memes early in my life significantly boosted that genetic seed. Another personal observation: I have never felt someone I know has improved their rationality. Though I also don’t know almost anyone who even cares about becoming more rational.

answer by Wei Dai (Wei_Dai) · 2019-01-14T20:22:58.561Z · LW(p) · GW(p)

One more, because one of my posts presented two open problems, and I only listed one of them above:

15. [LW · GW] Our current theoretical foundations for rationality all assume a fully specified utility function (or the equivalent), or at least a probability distribution on utility functions (to express moral/value uncertainty). But to the extent that humans can be considered to have a utility function at all, it may best be viewed as a partial function that returns "unknown" for most of the input domain. Our current decision theories can't handle this because they would end up trying to add "unknown" to a numerical value during expected utility computation. Forcing humans to come up with a utility function or even a probability distribution on utility functions in order to use decision theory seems highly unsafe, so we need an alternative.

comment by Richard_Kennaway · 2019-05-29T10:16:11.877Z · LW(p) · GW(p)

Does it help to propagate "unknown" through computations by treating it like NaN? Or would that tend to turn the answer to every question into NaN?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-30T20:41:31.642Z · LW(p) · GW(p)

If you just do it the straightforward way, any option you can choose would have a non-zero probability [LW · GW] of producing an outcome with "unknown" or NaN utility. If you multiply those two numbers together you get NaN, and if you then add that to other probability*utility values as part of your expected utility computation you will end up with NaN as your final expected utility. I don't see how to avoid this, hence my question.
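A minimal sketch of that failure mode in code (the probabilities and utilities are illustrative assumptions, not from the post), representing "unknown" as a floating-point NaN:

```python
import math

# Expected utility when one possible outcome has "unknown" utility,
# represented as NaN (illustrative probabilities and utilities):
outcomes = [
    (0.70, 10.0),       # (probability, utility)
    (0.29, -2.0),
    (0.01, math.nan),   # an outcome whose utility is "unknown"
]

expected_utility = sum(p * u for p, u in outcomes)
print(expected_utility)  # nan -- one unknown term makes the whole sum unknown
```

Any branch with nonzero probability and NaN utility poisons the entire expectation, which is the problem described above.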

answer by Czynski (JacobKopczynski) · 2019-01-14T02:21:49.402Z · LW(p) · GW(p)

In my opinion, the Hamming problem of group rationality, and possibly the Hamming problem of rationality generally, is how to preserve epistemic rationality under the inherent political pressures that existing in a group produces.

It is the Hamming problem because if it isn't solved, everything else, including all the progress made on individual rationality, is doomed to become utterly worthless. We are not designed to be rational, and this is most harmful in group contexts, where the elephants in our brains take the most control from the riders and we have the least idea of what goals we are actually working towards.

I do not currently have any good models on how to attack it. The one person I thought might be making some progress on it was Brent, but he's now been justly exiled, and I have the sense that his pre-exile intellectual output is now subject to a high degree of scrutiny. This is understandable, but since I think his explicit models were superior to any anyone else has publicly shared, it's a significant setback.

Since that exile happened, I've attempted to find prior art elsewhere to build on, but the best prospect so far (C. Fred Alford, Group Psychology and Political Theory) turned out to be Freudian garbage.

comment by Chris_Leong · 2019-01-14T14:10:37.753Z · LW(p) · GW(p)

What do you consider to be his core insights? Would you consider writing a post on this?

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2019-02-02T21:48:07.159Z · LW(p) · GW(p)

The central descriptive insight I took from him is that most things we do are status-motivated, even when we think we have a clear picture of what our motivations are and status is not included in that picture. Our picture of what the truth looks like is fundamentally warped by status in ways that are very hard to fully adjust for.

Relatedly, I think the moderation policies of new LessWrong double down on this status-warping, and so I am reluctant to put anything of significant value on this site.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2019-05-21T23:54:16.026Z · LW(p) · GW(p)

Which policies in particular?

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2019-11-12T20:58:16.015Z · LW(p) · GW(p)

Literally everything described here: https://www.lesswrong.com/posts/adk5xv5Q4hjvpEhhh/meta-new-moderation-tools-and-moderation-guidelines [LW · GW]

Moderation styles harsher than "Easy-going" are toxic to group rationality and failure to ban them results in an echo chamber. Banning users from commenting on your post(s) is toxic to group rationality and results in an echo chamber. Deleting comments without a trace, likewise. It is absolutely essential that anyone who is silencing people they don't want to hear from must take significant effort to do so and that these actions must be extremely visible to everyone else, or else there is no way to note and shame people who abuse it. And virtually everyone who has this power and has social power will abuse it, whether they realize that's what they're doing or not.

comment by jmh · 2019-01-14T14:22:57.479Z · LW(p) · GW(p)

My comment must necessarily be something of an aside, since I don't know the Hamming problem. However, your statement "We are not designed to be rational" jumped out at me.

Is that to say something along the lines of: rationality is not one of the characteristics that provided an evolutionary advantage for us? Or would it mean rationality was a mutation of some "design" (and possibly one that was a good survival trait)?

Or is the correct understanding there something entirely different?

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2019-02-02T21:52:12.510Z · LW(p) · GW(p)

Hamming Question: "What is the most important problem in your field?"

Hamming Problem: The answer to that question.

Rationality did not boost inclusive fitness in the environment of evolutionary adaptedness and still doesn't.

answer by Chris_Leong · 2019-01-14T13:46:39.541Z · LW(p) · GW(p)

Group rationality is a big one. It wouldn't surprise me if rationalists are less good on average at coordinating than other groups, because rationalists tend to be more individualistic and have their own opinions of what needs to be done. As an example, how long did it take for us to produce a new LW forum despite half of the people here being programmers? And rationality still doesn't have its own version of CEA.

comment by Raemon · 2019-01-16T04:01:43.851Z · LW(p) · GW(p)

I do agree with group rationality generally, and probably agree with LWers being self-selected for contrarian individualism. I don't know that taking so long to rebuild the forum is that great an example – it took someone deciding to make it their fulltime project and getting thousands of dollars in funding, which is roughly what such things normally take.

Replies from: Chris_Leong
comment by Chris_Leong · 2019-01-16T11:30:34.561Z · LW(p) · GW(p)

"It took someone deciding to make it their fulltime project and getting thousands of dollars in funding, which is roughly what such things normally take" - lots of open source projects get off the ground without money being involved

Replies from: philh
comment by philh · 2019-01-20T14:42:36.263Z · LW(p) · GW(p)

Are those comparable, though? My model of open source is that it prototypically looks like someone building something that's useful for themselves, then other people also find it useful and help to work on it (with code, bug reports, feature requests). But that first step doesn't really exist for LW2, because until you're ready to migrate the whole site, the software has very little value to anyone.

Can you think of any open source projects where the first useful version seems comparable in effort to LW2, and that had no financial backing for the first useful version?

Edit: some plausible candidates come to mind, though I wouldn't bet on any of them. Operating systems (e.g. Linux kernel, haiku, menuetOS); programming languages and compilers for them (e.g. gcc, Perl, python, Ruby); and database engines (e.g. postgres, mongo, neo4j).

(Notably, I'd exclude something like elm from the languages list because I think it was a masters or PhD project so funded by a university.)

Replies from: Raemon
comment by Raemon · 2019-01-20T19:54:56.582Z · LW(p) · GW(p)

I also think it's just... actually rare and notable when an open source project is able to succeed on this scale, rather than something you should expect to work out by default.

comment by IC Rainbow (ic-rainbow) · 2019-01-23T09:46:06.697Z · LW(p) · GW(p)

What's CEA?

Replies from: TheWakalix
comment by TheWakalix · 2019-02-19T02:29:26.383Z · LW(p) · GW(p)

The Centre for Effective Altruism, I believe.

answer by mako yass (MakoYass) · 2019-06-02T01:44:39.558Z · LW(p) · GW(p)

Did you ever see that early (some might say, premature) trailer for the anthropic horror game SOMA where Jarret was wandering around, woefully confused, trying desperately to figure out where his brain was located?

That's how humans are about their values.

I can't find my utility function.

It's supposed to be inside me, but I see other people whose utility functions are definitely outside of their body, subjected to hellish machinations of capricious tribal egregores and cultural traumas, and they're suffering a lot.

I think my utility function might be an epiphenomenon of my tribe (where is my tribe???), but I'm not sure. There are things you can do to a tribe that change its whole values, so this doesn't seem to draw a firm enough boundary.

My values seem to change from hour to hour. Sometimes the idea of a homogeneous superhappy hedonium blob seems condemnably ugly, other times, they seem fine and good and worthy of living. Sometimes I am filled with compassion for all things, and sometimes I'm just a normal human who draws lines between ingroup and outgroup and only cares about what happens on the inner side.

The only people I know who claim to have fixed utility functions appear to be mutilating themselves to get that way, and I pale at the thought of such scarification, but what is the alternative? Is endless mutation really a value more intrinsic than any other? Have we made some kind of ultimate submission to evolution that will eventually depose us completely in favour of whatever disloyal offspring fight up from us?

Where is my utility function?

answer by Elo · 2019-01-13T20:03:05.732Z · LW(p) · GW(p)

The problem of interfaces between cultures.

Humans live in different cultures. A simple version of this is in how cultures greet each other. The Italian double kiss, the ultra orthodox Jewish non touch, the hippie hug, the handshake of various cultures, the Japanese bow/nod, and many more. It's possible to gravely offend a different culture with the way you do introductions.

Now think about the same potential offence but for all conversation culture.

I have the open question of how to successfully interface with other cultures.

comment by mako yass (MakoYass) · 2019-06-02T21:37:36.666Z · LW(p) · GW(p)

The difficulties start with accepting that differences exist and aren't an error and aren't easily dissolvable. Humans don't seem to be good at accepting that. We seem to have been born expecting something else.

Saying "the awkward fracturing of humanity is a brute fact of any multipolar situation in a large universe" doesn't seem to do the job, so I propose naming Babel, the demon of otherness and miscommunication, and explaining that humanity is fractured because a demon cursed it.

There are nice things about Babel. Babel is also the spirit of specialization, trade, and surprise.

comment by mako yass (MakoYass) · 2019-06-02T21:41:15.624Z · LW(p) · GW(p)

Greg Egan's Diaspora had a nice institution: the bridgers, people who arrange networks of intermediaries between each of the morphs and subspecies of post-humanity and facilitate global coordination. A mesh of benevolent missing links.

answer by norswap · 2019-01-22T14:11:28.439Z · LW(p) · GW(p)

For applied rationality, my 10% improvement problem: https://www.lesswrong.com/posts/Aq8QSD3wb2epxuzEC/the-10-improvement-problem

Basically: how do you notice small (10% or less) improvements in areas that are hard to quantify? This is important because, after reaping the low-hanging fruit, stacking those small improvements is how you get ahead.

comment by Thomas Kwa (thomas-kwa) · 2020-06-07T00:16:56.797Z · LW(p) · GW(p)

I've put a lot of thought into this since I bumped into the limits of standard Quantified Self (e.g. recording daily mood, productive time, sleep). Doing statistics on myself has limited power already, and this is even worse when what I'm trying to improve is only a moderately strong correlate of what I can measure, or when I want to control for things I can't measure. It throws away all of my human ability to pattern-match.
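For a rough sense of why a 10% effect is hard to detect from daily self-tracking, here is a back-of-the-envelope power calculation (a sketch only; the metric, mean, and standard deviation are illustrative assumptions, not from this comment):

```python
from statistics import NormalDist

# How many days of data per condition would a simple two-sample comparison
# need to detect a 10% improvement in a noisy daily metric, e.g. productive
# hours averaging 4.0 with a day-to-day SD of 1.5? (Illustrative numbers.)
baseline_mean = 4.0
day_to_day_sd = 1.5
effect = 0.10 * baseline_mean             # the 10% improvement to detect
alpha, power = 0.05, 0.80                 # conventional significance and power

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_power = NormalDist().inv_cdf(power)
n_per_condition = 2 * ((z_alpha + z_power) * day_to_day_sd / effect) ** 2

print(round(n_per_condition))             # roughly 220 days per condition
```

On these assumptions it takes well over half a year of data per condition before a 10% effect reliably separates from day-to-day noise, which is part of why the problem resists standard Quantified Self methods.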

The best I can do right now is fairly ordinary journaling/mindfulness practices, plus a philosophy of noticing all of the low-hanging fruit. My life has always had more obvious bugs with simple fixes than I notice, and I suspect this is true of most people.

answer by Gordon Seidoh Worley (G Gordon Worley III) · 2019-01-14T21:13:58.875Z · LW(p) · GW(p)

To me the biggest open problem is how to make existing wisdom more palatable to people who are drawn to the rationalist community. What I have in mind as an expression of this problem is the tension between the post/metarationalists and the, I don't know, hard core of rationalists: I don't think the two are in conflict; the former are trying to bring in things from outside the traditional sources historically liked by rationalists; the latter see themselves as defending rationality from being polluted by antirationalist stuff; and both are trying to make rationality better (the former via adding; the latter via protecting and refining). The result is conflict even if I think the missions are not in conflict, though, so it seems an open problem is figuring out how to address that conflict.

comment by Czynski (JacobKopczynski) · 2020-07-19T22:56:48.711Z · LW(p) · GW(p)

Replies from: TAG
comment by TAG · 2020-07-20T09:27:43.841Z · LW(p) · GW(p)

Astrology seeks to predict the future, astronomy strives to feel superior to it.

comment by Czynski (JacobKopczynski) · 2020-07-21T20:45:35.785Z · LW(p) · GW(p)

Postrationalism isn't adding anything. It's exrationalism: people who decided that they weren't willing to give up the self-defeating mysterianism but still wanted to signal tribal affiliation with rationalism. It rarely even attempts to justify itself as useful; Meaningness is better than most in that it's only mostly Not Even Wrong.

It's useless and sabotages the entire rationalist project to give it even a modicum of attention. So no, it's fundamentally in conflict.

(A previous comment to this effect was deleted with the message:

I think there’s plenty of interesting arguments about postrationality but this was not a productive one

This is wrong. There are no interesting arguments about postrationality. All the interesting arguments happened in 2010 and earlier, and the postrationalists are the people who refused to accept the result of those arguments with good grace. We settled these arguments before there even were self-identified postrationalists.)

Replies from: TAG
comment by TAG · 2020-07-22T07:35:02.326Z · LW(p) · GW(p)

The point of my comment about astrology was that you are taking the usefulness of rationalism as a given [LW · GW], and it isn't. If you can't get the impressive practical results, the next best thing is having accurate beliefs, including accurate beliefs about the limitations of different approaches.

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2020-07-23T05:11:16.056Z · LW(p) · GW(p)

Even assuming you're correct, postrationalism won't help with any of that because it's nothing but systematized self-delusion. Rationality may not have benefits as huge as one would naively expect, but it is still substantially better than deliberately turning your back on even attempting to be rational, which is what postrationalism does - intentionally!

Replies from: TAG
comment by TAG · 2020-07-23T09:32:12.587Z · LW(p) · GW(p)

You have made a sweeping claim without any evidence.

Replies from: JacobKopczynski, Raemon
comment by Czynski (JacobKopczynski) · 2020-07-23T19:34:04.231Z · LW(p) · GW(p)

I think it would be hard to find a postrationalist who hasn't outright said so themself, though generally with a wording that's spun to make it sound better.

Replies from: TAG
comment by TAG · 2020-07-23T20:00:03.456Z · LW(p) · GW(p)

If they say so in plain terms, your claim could easily be supported by citations. As for "spin"...that tends to be in the eye of the beholder.

comment by Raemon · 2020-07-23T20:02:45.746Z · LW(p) · GW(p)

(note: self-downvoted because I don't really want to prolong the discussion)

I think it's more like "this entire conversation lives in Simulacrum Level 3, where words are just about tribal affiliation, where empirical material evidence doesn't even matter". I'm kinda annoyed at the whole conversation, despite probably sharing some of Czynski's frustrations with postrationalism, because it's purely tribal, and everyone involved should know better.

The reason I'm annoyed at Postrationalism is that it didn't add anything new beyond things basically already in the sequences (but often describe themselves as such). Most of the point of postrationalism to me seems to be "LW style rationality, but with somewhat different aesthetics and weights/priors about what sort of things are most likely to be useful." (Those different weights/priors are meaningful, but don't seem quantitatively more significant than the weights/priors various camps of rationalists have)

I think it's actually quite fine to be "rationality but with different aesthetics and a particular set of weights/priors on what's important" (seems like a fine thing to form some group identity around), but I'm annoyed when postrationalists seem to argue there's a deeper philosophical divide than there is, and annoyed at people who argue against postrationality for getting sucked into the trap of arguing on the tribal level.

...

Note: Insofar as I have an actual disagreement with the post-rationalist-cluster, it's a lack of common agreement about "what is supposed to happen to the fake frameworks over time."

During the 2018 review I discussed this with Val a bit:

Over the long term, there [should be] an expectation that if Fake Frameworks stick around, they will get grounded out into "real" frameworks, or at least the limits of the framework are more clearly spelled out. This often takes lots of exploration, experimentation, modeling, and explanatory work, which can often take years. It makes sense to have a shared understanding that it takes years (esp. because often it’s not people’s full time job to be writing this sort of thing up), but I think it’s pretty important to the intellectual culture for people to trust that that’s part of the longterm goal (for things discussed on LessWrong anyhow)

I think a lot of the earlier disagreements or concerns at the time had less to do with flagging frameworks as fake, and more to do with not trusting that they were eventually going to ground out as “connected more clearly to the rest of our scientific understanding of the world”.

I generally prefer to handle things with “escalating rewards and recognition” rather than rules that crimp people’s ability to brainstorm, or write things that explain things to people with some-but-not-all-of-a-set-of-prerequisites.

So one of the things I’m pretty excited about for the review process is creating a more robust system for (and explicit answer to the question of) “when/how do we re-examine things that aren’t rigorously grounded?“.

Gordon's comment is from ~ 1.5 years ago, before the LW Review process. The LW Review process is less than a year old, and will probably be another ~3 years before it's matured enough that everyone can trust it. But the basic idea seems like a straightforward solution to the problem of "are we allowed to explore weird ideas that don't match the genre/aesthetic LWers tend to trust more, or not?"

The answer is "yes, but eventually you are going to need to ground out the idea into scalable propositional knowledge, and if you can't, it's eventually going to need to be acknowledged at least as non-scalable knowledge, if not outright false. It's okay if it takes years because we have years-long timescales to evaluate things."

(Note: I'm making some claims here that are not yet consensus, and expect my own opinion to subtly shift over the next few years. This is just me-as-an-individual laying out why I find this conversation boring and missing the point)

Replies from: TAG
comment by TAG · 2020-07-24T12:24:59.368Z · LW(p) · GW(p)

The reason I’m annoyed at Postrationalism is that it didn’t add anything new beyond things basically already in the sequences (but often describe themselves as such)

Well it does, because "you can't do everything with Bayes" is an incompatible opposite to "you can do everything with Bayes". Likewise, pluralistic ontology, of the kind Valentine [LW · GW] advises, is fundamentally incompatible with reductionism. Even if Valentine doesn't realise it.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-07-24T14:53:38.193Z · LW(p) · GW(p)

Those who say that you can't do everything with Bayes have not been very forthcoming about what you can't do with Bayes, and even less so about what you can't do with Bayes that you can do with other means. David Chapman, for example, keeps on taking a step back for every step forwards.

"Bayes" here I take to be a shorthand for the underlying pattern of reality which forces uncertainty to follow the Bayesian rules even when you don't have numbers to quantify it.

And "everything" means "everything to do with action in the face of uncertainty." (All quantifiers are bounded, even when the bound is not explicitly stated.)

Replies from: TAG
comment by TAG · 2020-07-24T15:45:05.474Z · LW(p) · GW(p)

Those who say that you can’t do everything with Bayes have not been very forthcoming about what you can’t do with Bayes,

I found Chapman on Bayes to be pretty clear and decisive: 1) you can't generate novel hypotheses with Bayes, 2) Bayes doesn't give you any guidance on when to backtrack or paradigm-shift, 3) you, as a human, don't have enough compute to execute Bayes over nontrivial domains.

“Bayes” here I take to be a shorthand for the underlying pattern of reality which forces uncertainty to follow the Bayesian rules even when you don’t have numbers to quantify it

That's not what it means -- even here. Here uncertainty is in the mind of the beholder.

If you have a personal philosophy of metaphysical uncertainty, please create an unambiguous name for it.

Edit: it's not as if Yudkowsky originally characterised Bayes as a true-but-not-useful thing. Chapman addresses his original version.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-07-25T08:18:19.155Z · LW(p) · GW(p)

That's not what it means -- even here. Here uncertainty is in the mind of the beholder.

Well, yes. I was not suggesting otherwise. The uncertainty still has to follow the Bayesian pattern if it is to be resolved in the direction of more accurate beliefs and not less.

Replies from: TAG, TAG
comment by TAG · 2020-08-07T13:35:36.004Z · LW(p) · GW(p)

That still isn't the original claim, it's falling back to a more defensible position.

comment by TAG · 2020-08-07T13:36:41.320Z · LW(p) · GW(p)

answer by ChristianKl · 2019-01-13T09:02:16.168Z · LW(p) · GW(p)

Is there a way to integrate probability based forecasting into the daily life of the average person that's clearly beneficial for them?

I don't think we are yet at the point where I can clearly say that we are there. I think we would need new software to do this well.

answer by Raemon · 2020-01-10T01:10:33.147Z · LW(p) · GW(p)

What sort of standards for intellectual honesty make sense, given that:

  • There's a large number of free variables in what information you present to people. You can be quite misleading while saying purely true information. "Not lying" doesn't seem sufficient a norm. [LW · GW]
  • It's hard to build norms around complex behavior. Humans have an easier time following (and flagging violations of) bright lines, compared to more nuanced guidelines.

answer by quanticle · 2019-01-13T07:59:35.217Z · LW(p) · GW(p)

How about: "What is rationality?" and "Will rationality actually help you if you're not trying to design an AI?"

Don't get me wrong. I really like LessWrong. I've been fairly involved in the Seattle rationality community. Yet, all the same, I can't help but think that actual rationality hasn't really helped me all that much in my everyday life. I can point to very few things where I've used a Rationality Technique to make a decision, and none of those decisions were especially high-impact.

In my life, rationality has been a hobby. If I weren't reading the sequences, I'd be arguing about geopolitics, or playing board games. So, to me, the most open question in rationality is, "Why should one bother? What special claim does rationality have over my time and attention that, say, Starcraft does not?"

comment by [deleted] · 2020-06-20T16:55:19.687Z · LW(p) · GW(p)

I keep seeing responses like this, and I don't understand them at all. Rationality is inseparably intertwined with everything I do. Long before I found LessWrong I strove to be rational because I'm an agent with goals and rationality is a toolset for achieving those goals. Yes, I got a lot of it wrong, but I also got a lot of it right -- reading the Sequences felt less like discovering some revelatory holy book and more like refactoring my beliefs.

Afterwards, rationality took less than a year to completely change my life. I don't study rationality because it's a hobby; I study it because the level I was at before was insufficient and I can't afford to make another massive mistake. Sometimes you can't solve a problem on intuition alone and then you need the big guns -- those "rationality techniques".

Replies from: quanticle
comment by quanticle · 2020-06-21T23:22:10.106Z · LW(p) · GW(p)

And what game have those "big guns" allowed you to bag that the lesser guns of "ordinary common sense" would not have?

There are lots of people who do lots of amazing things without having once read Kahneman, without having once encountered any literature about cognitive biases. If we are proposing that rationality is some kind of special edge that will allow us to accomplish things that other people cannot accomplish, we had better come up with some examples, hadn't we?

Replies from: None
comment by [deleted] · 2020-06-22T01:49:29.814Z · LW(p) · GW(p)

I don't agree with your dichotomy between rationality techniques and common sense. Common sense is just layman-speak for S1, and S1 can be trained to think rationally. A lot of rationality for me is ingrained into S1 and isn't something I think about anymore. For example, S1's response to a math problem is to invoke S2, rather than try to solve it. Why? Because S1 has learned that it cannot solve math problems, even seemingly simple ones. Lightness, precision, and perfectionism are mostly S1 jobs for me as well.

And I'm also not claiming rationality is a prerequisite for victory. Rather, I see it as a power amplifier. If you don't have any rationality whatsoever, you're flailing around blind. With a little rationality (maybe just stuff you've learned by osmosis) you can chart a rough course, and if you're already powerful enough that might be all it takes.

But those are relatively minor nitpicks. Let's talk about how, specifically, rationality has changed my life.

The major one for me is discovering I'm trans. Rationality got me to seriously think about the problem (by telling me that emotions weren't evil and crazy), and then told me how to decide whether I was actually trans or not (bayesian fermi estimate). It takes many people years or months to figure this out, often with the help of therapists. I did it in a week, alone, and I came out without the doubts and worries that plague normal trans people.

My pre-sequence grasp of rationality was extremely limited, but still enough to let me self-modify out of the pit of borderline-suicidal depression. I also did it alone, without any therapists or friends (in fact, zero people even knew I was depressed). At the time I figured anyone could do it and it was just a matter of willpower... or something. I didn't pursue the question because back then I hadn't heard of the phrase "notice your confusion". Later, I met someone else who was depressed. I dug a little deeper and it turns out all the people who say you need a therapist and social support are right after all. Most people really, really struggle to escape depression. I'm not sure exactly why this is, but I suspect that not having luminosity would be an insurmountable barrier. It was hard enough when I could understand that my thoughts were corrupted and work to minimize them.

Less flashy but just as important, the overpowering desire to win at all costs has let me change from impulsive social cripple to someone who has no trouble fitting into most social circles. Admittedly, that wasn't something I got from rationality. I started out with the desire to win and gradually tried techniques, discarding what didn't work and keeping what did. But that seems like a fundamental rationality mindset to me. Cold reading, social web, signalling, inadequate equilibria, and Schelling points are likely to greatly enhance my skills in the future.

I don't think average common sense is up to any of those tasks. Plenty of people don't discover they're trans until a decade or more later than I did, and I think it's highly significant that I figured it out almost as soon as I started rationality. Most people can't solve depression on their own. And I've met exactly one person who makes the leap from "some people are really charismatic" to "maybe I can learn how to be charismatic" to "I should learn how to be charismatic because the power scale goes up to literally taking over an entire country". Normal people simply don't think like that.

The only way you could convince me to abandon rationality is to either give me something that would achieve my goals better or convince me that I don't need to worry about achieving my goals. For example, if we lived in the glorious transhumanist future where FAI takes care of everything complicated, I wouldn't feel the need to personally become stronger. But as it is, if I close my eyes and sleep, I won't wake up until I've sleepwalked right off a cliff.

Furthermore, I know it's possible to improve. Why? Because of EY. Reading his writing is like talking to a thousand-year-old vampire. He's simply better than me, at everything (minus boring nitpicky stuff like [insert obscure skill]). Child EY feels pretty similar to adult me in a lot of ways which makes me think that what's different between us isn't so much raw IQ or talent as it is all the self-modification he layered on top.

But I have to admit that I'm puzzled by the lack of rationalist stars. It is written that it takes a lot of rationality to get anywhere, but surely out of the thousands(?) of us, at least a few would have mastered enough. Yes, there's EY and Scott and... actually that's all I'm aware of, and I wouldn't know of either of them if I wasn't already a rationalist. That feels like a notice-your-confusion moment, and yet I'm not sure how to reconcile that observation with the way rationality seems intertwined with my own progression. If I only ever had an average helping of rationality... I'd be a depressed, self-hating incel. Or maybe I'd just be dead. I thought about suicide a lot, and if I hadn't had a strong belief that I could improve due to past improvements, I might have given up entirely.

I'd say I'm just an unusually talented rationalist, but my rationality-fueled common sense says that's extremely unlikely and suspiciously egocentric. So I'll just say that I'm not sure what to think anymore.

comment by Rudi C (rudi-c) · 2019-11-28T23:07:02.236Z · LW(p) · GW(p)

Rationality might not have a lot of practical value, precisely because things that had such value are already well-engineered by evolution and culture. But it still advances your understanding of the world and yourself a lot, and I personally find that one of my few terminal values. Incidentally, rationality might imply that Starcraft is a kind of trojan that exploits our reward circuits, and if we want to maximize our values (as opposed to our pleasure), we are well-advised to take a stance against this exploitation.

Replies from: quanticle
comment by quanticle · 2019-11-29T07:29:35.912Z · LW(p) · GW(p)

But it still advances your understanding of the world and yourself a lot

I'm not sure that it does. I certainly haven't seen any evidence of LessWrong-style rationality being a better means of achieving understanding of the world than, say, just getting a bunch of textbooks and journal articles on whatever you're interested in and doing some old-fashioned studying.

Incidentally, rationality might imply that Starcraft is a kind of trojan that exploits our reward circuits, and if we want to maximize our values (as opposed to our pleasure), we are well-advised to take a stance against this exploitation.

Alternatively, we might say that rationality is a toolbox, and makes no judgements about what you apply those tools to. If you apply the tools of rationality to become a better Starcraft player, then good for you! You have used rationality to improve your skills and work towards your goal more efficiently. Certainly, I've seen a much stronger standard of epistemics in the Starcraft and video game speedrunning communities than in many other places, LessWrong included.

answer by dspeyer · 2019-06-03T06:06:52.141Z · LW(p) · GW(p)

Dealing with uncertainty about our own logic. It's a circular sort of problem: any logic I use to deal with my potentially flawed reasoning is itself potentially flawed. It gets worse when you deal with potential rationalization.

answer by Polytopos · 2021-01-10T03:49:18.998Z · LW(p) · GW(p)

I think a big open question is how to think about rationality across paradigms or incompatible ontological schemas. In focusing only on belief evaluation, we miss that there is generally a tacit framework of background understanding which is required for the beliefs to be understood and evaluated.

What happens when people have vastly different background understandings? How does rationality operate in such contexts?


 

answer by Elo · 2019-01-13T19:56:01.828Z · LW(p) · GW(p)

One open problem:

The problem of communication across agents, and generally what I call "miscommunication".

answer by JessieHenshaw · 2019-05-22T13:35:11.676Z · LW(p) · GW(p)

I'm a natural systems scientist, who noticed some time ago that it takes organization in nature to use energy, and then takes development to create organization. I was a physicist, but physics seems limited to studying things that can be represented with numbers. Organization is really not so open to that. So I developed various pattern languages, starting with the seemingly universal pattern of "natural growth," as a process of organization development. The pattern begins with an accumulation of small things or changes expanding into big ones, and then, to produce stable organization in the end, the progression has to reverse, with accumulating smaller and smaller things or changes leading toward a state of completion. If you look closely, really everything we do follows that pattern, of diverging then converging accumulations. That seems to mark "time" as an organizational, not a numeric, process.

So... for science apparently not to notice that seems to be a very strong hint as to what in the world is wrong with human thinking. It's not just Descartes. Somewhere in the evolution of human thought, seemingly well before science chose to define nature with numbers, we seem to have decided that it was the job of reason to define reality. Of course we then get it wrong every time, as reality is already defined and so defining it is just not our job. Nature defines reality, and we can only do our best to study it. So, I don't know if my questions will help anyone else yet, but they might. It does seem "less wrong" as a point of view though, rather than to err by assuming our best rules of prediction define the world we are trying to predict. That's contradictory.

answer by TAG · 2019-05-22T11:55:06.121Z · LW(p) · GW(p)

There's one overwhelmingly big problem, which is solving epistemology.

27 comments

Comments sorted by top scores.

comment by Raemon · 2019-01-14T00:52:13.167Z · LW(p) · GW(p)

I'm still interested in general answers from most people, but did want clarify:

I'm particularly interested in answers from people who have made a serious study of rationality, and invested time into trying to push the overall field forward. Whether or not you've succeeded or failed, I expect people who've put, say, at least 100 hours into the project to have a clearer sense of what questions are hard and important. (Although I do think "100 hours spent trying to learn rationality yourself" counts reasonably)

My sense is that many answers so far come more from a place of sitting on the sidelines or having waded in a bit, found rationality not obviously helpful in the first place, and are sort of waiting for someone to clarify if there's a there there. Which is quite reasonable but not what I'm hoping for here.

Background thoughts

In physics, I don't consider a major source of importance "how valuable is this to the average person?" – it's not physics' job to be valuable to the average person, its job is to add fundamental insights about how the universe works to the sum of human knowledge.

Rationality is more like the sort of thing that could be valuable to the average person – it would be extremely disappointing (and at least somewhat surprising) if you couldn't distill lessons on how to think / have beliefs / make choices into something beneficial to most people.

Rationality is maybe better compared with "math". The average person needs to know how to add and multiply. They'll likely benefit from thinking probabilistically in some cases. They probably won't actually need much calculus unless they're going into specific fields, and they won't need to care at all about the Riemann Hypothesis.

Everyone should get exposed to at least some math and given the opportunity to learn it if they are well suited and curious about it. But I think schools should approach that more from a standpoint of "help people explore" rather than "definitely learn these particular things."

That's roughly how I feel about rationality. I think there are a few key concepts like "expected value" and "remember base rates" that are useful when making significant decisions, that should probably get taught in high school. Much of the rest is better suited for people who either find it fun, or plan to specialize in it.
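For concreteness, here is a minimal sketch, with made-up numbers, of what "expected value" plus "remember base rates" can look like when weighing a gamble:

```python
# A toy decision: is a risky project worth its upfront cost?
# All numbers below are hypothetical, purely for illustration.
base_rate_success = 0.10      # outside view: fraction of similar projects that succeed
payoff_if_success = 200_000   # value captured if the project works out
cost = 10_000                 # cost paid whether or not it works

expected_value = base_rate_success * payoff_if_success - cost
print(expected_value)  # 10000.0: positive at these numbers, so the gamble looks worth taking
```

The point isn't the arithmetic, which is trivial, but the habit of anchoring the success probability on a base rate rather than on how exciting the project feels.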

Professional Rationality

I think "How to Measure Anything" is a useful book to get a sense of how professional rationality might actually look: it was written for people in business (or otherwise embarking on novel projects), where you actually have to take significant gambles that require reasonable models of how the world works. A given company doesn't need everyone to be excellent at calibration, forecasting, model building and expected value (just as they don't need everyone to be a good accountant or graphic designer or CEO). But they do need at least some people who are good at that (and they need other people to listen to them, and a CEO or hiring specialist who can identify such people).

It's a reasonable career path, for some.

[I don't mean How to Measure Anything as the definitive explanation of what "professional rationality" looks like, just a take on it that seems clearly reasonable enough to act as a positive existence proof]

So, what open questions are most useful?

A few possible clusters of questions/problems:

  • What are specific obstacles to distilling rationality concepts that seem like they should be valuable to the average person, but we don't know how to teach them yet?
  • What are specific obstacles, problems, or unknown solutions to problems that seem like they should be relevant to a "rationality specialist", who focuses on making decisions in unknown domains with scant data.
  • What are genuinely confusing problems at the edge of the current rationality field – perhaps far away from the point where even specialists can implement them yet, but where we seem confused in a basic way about how the mind works, or how probability or decision theory work.

Replies from: ChristianKl, ESRogs
comment by ChristianKl · 2019-01-14T09:41:41.338Z · LW(p) · GW(p)
My sense is that many answers so far come more from a place of sitting on the sidelines or having waded in a bit, found rationality not obviously helpful in the first place.

That seems to me a strange result from going through the list of people who answered. All have >1000 karma on LessWrong. Most (all except Elo) have been on LessWrong for more than 6 years.

It would surprise me if any of the people have spent less than 100 hours learning/thinking about how to make rationality work.

I myself spent years thinking about how to make calibration work. I tested multiple systems created by LessWrongers. That engagement with the topic led me to an answer of how I think medicine could be revolutionized [LW · GW]. But I'm still lacking a way to make it actually practical for my daily life.
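(For readers unfamiliar with what "making calibration work" involves, here is a minimal sketch of the record-keeping side of it – purely illustrative, not one of the systems mentioned above: log each prediction with a probability, then score the batch once outcomes are known.)

```python
# Hypothetical prediction log: (stated probability, did it happen?)
predictions = [
    (0.9, True),
    (0.7, False),
    (0.6, True),
    (0.8, True),
]

# Brier score: mean squared distance between stated probability and outcome.
# Lower is better; always saying 50% scores 0.25.
brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")  # 0.175 for the log above
```

The hard part isn't the scoring rule but building the habit of logging forecasts in the first place.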

I think "How to Measure Anything" is a useful book to get a sense of how professional rationality might actually look [...] But they do need at least some people who are good at that (and they need other people to listen to them, and a CEO or hiring specialist who can identify such people).

YCombinator tells their startups to talk to their users and do things that don't scale instead of hiring a professional rationalist to help them navigate uncertainty. To me that doesn't look like it's changing.

It's a bit ridiculous to treat the problem of what rationality actually is as solved and hold convictions that we are going to have rationality specialists.

Replies from: Thrasymachus, Elo
comment by Thrasymachus · 2019-01-14T17:53:39.533Z · LW(p) · GW(p)

FWIW: I'm not sure I've spent >100 hours on a 'serious study of rationality'. Although I have been around a while, I am at best sporadically active. If I understand the karma mechanics, the great majority of my ~1400 karma comes from a single highly upvoted top level post I wrote a few years ago. I have pretty sceptical reflexes re. rationality, the rationality community, etc., and this is reflected in that (I think) the modal post/comment I make is critical.

On the topic 'under the hood' here:

I sympathise with the desire to ask conditional questions which don't inevitably widen into broader foundational issues. "Is moral nihilism true?" doesn't seem the right sort of 'open question' for "What are the open questions in Utilitarianism?". It seems better for these topics to be segregated, no matter the plausibility or not for the foundational 'presumption' ("Is homeopathy/climate change even real?" also seems inapposite for 'open questions in homeopathy/anthropogenic climate change'). (cf. 'This isn't a 101-space').

That being said, I think superforecasting/GJP and RQ/CART etc. are at least highly relevant to the 'Project' (even if this seems to be taken very broadly to normative issues in general - if the topics on Wei_Dai's list are considered elements of the wider Project, then I definitely have spent more than 100 hours in the area). For a question cluster around "How can one best make decisions on unknown domains with scant data", the superforecasting literature seems some of the lowest hanging fruit to pluck.

Yet community competence in these areas has apparently declined. If you google 'lesswrong GJP' (or similar terms) you find posts on them but these posts are many years old. There has been interesting work done in the interim: here's something on whether the skills generalise, and something else on a training technique that not only demonstrably improves forecasting performance, but also has a handy mnemonic one could 'try at home'. (The same applies to RQ: Sotala wrote a cool sequence [LW · GW] on Stanovich's 'What intelligence tests miss', but this is 9 years old. Stanovich has written three books since expressly on rationality, none of which have been discussed here as best as I can tell.)

I don't understand, if there are multiple people who have spent >100 hours on the Project (broadly construed), why I don't see there being a 'lessons from the superforecasting literature' write-up here (I am slowly working on one myself).

Maybe I just missed the memo and many people have kept abreast of this work (ditto other 'relevant-looking work in academia'), and it is essentially tacit knowledge for people working on the Project, but they are focusing their efforts to develop other areas. If so, a shame this is not being put into common knowledge, and I remain mystified as to why the apparent neglect of these topics versus others: it is a lot easier to be sceptical of 'is there anything there?' for (say) circling, introspection/meditation/enlightenment, Kegan levels, or Focusing than for the GJP, and doubt in the foundation should substantially discount the value of further elaborations on a potentially unedifying edifice.

[Minor] I think the first para is meant to be block-quoted?

Replies from: habryka4, ChristianKl
comment by habryka (habryka4) · 2019-01-14T18:07:00.409Z · LW(p) · GW(p)

I know of a lot of people who continued studying and being interested in the forecasting perspective. I think the primary reason why there has been less writing from that is just that LessWrong was dead for a while, and so we've seen fewer writeups in general. (I also think there were some secondary factors that contributed, but that the absence of a publishing platform was the biggest)

Replies from: ESRogs
comment by ESRogs · 2019-01-14T19:58:14.963Z · LW(p) · GW(p)

Also superforecasting and GJP are no longer new. Seems not at all surprising that most of the words written about them would be from when they were.

comment by ChristianKl · 2019-01-14T21:35:37.167Z · LW(p) · GW(p)

Given that the OP counts the Good Judgment project as part of the movement I think that certainly qualifies.

It's my understanding that while the Good Judgment project made progress on the question of how to think about the right probability, we still lack ways for people to integrate the making of regular forecasts into their personal and professional lives.

comment by Elo · 2019-01-14T10:37:22.747Z · LW(p) · GW(p)

I've been around that long. Or more. I was lurking before I commented.

In my efforts to apply rationality I ended up in post rationality. And ever upwards.

comment by ESRogs · 2019-01-14T05:49:53.043Z · LW(p) · GW(p)
What are genuinely confusing problems at the edge of the current rationality field – perhaps far away from the point where even specialists can implement them yet, but where we seem confused in a basic way about how the mind works, or how probability or decision theory work.

For one example of this, see Abram's most recent post [LW · GW], which begins: "So... what's the deal with counterfactuals?" :-)

comment by ESRogs · 2019-01-13T08:12:29.368Z · LW(p) · GW(p)

I don't have a crisp question yet, but one general area I'd be interested in understanding better is the interplay between inside views and outside views.

In some cases, having some outside view probability in mind can guide your search (e.g. "No, that can't be right because then such and such, and I have a prior that such and such is unlikely."), while in other cases, thinking too much about outside views seems like it can distract you from exploring underlying models (e.g. when people talk about AI timelines in a way that just seems to be about parroting and aggregating other people's timelines).

A related idea is the distinction between impressions and beliefs. In this view impressions are roughly inside views (what makes sense to you given the models and intuitions you have), while beliefs are what you'd bet on (taking into account the opinions of others).

I have some intuitions and heuristics about when it's helpful to focus on impressions vs beliefs. But I'd like to have better explicit models here, and I suspect there might be some interesting open questions in this area.
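One rough way to make the impressions/beliefs distinction concrete – purely an illustrative sketch, not a claim about the right way to combine them – is to treat the all-things-considered belief as a weighted average of the inside view and the outside view in log-odds space:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

impression = 0.70       # inside view: what my own models and intuitions say
outside_view = 0.20     # base rate from similar cases or others' estimates
weight_on_inside = 0.3  # hypothetical: how much credence the inside view gets here

# Weighted average in log-odds space, mapped back to a probability.
belief = sigmoid(weight_on_inside * logit(impression)
                 + (1 - weight_on_inside) * logit(outside_view))
print(f"all-things-considered belief: {belief:.2f}")  # ~0.33
```

The open question is less the formula than where the weight comes from, which is exactly the kind of explicit model that seems to be missing.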

Replies from: Elo
comment by Elo · 2019-01-13T19:57:03.828Z · LW(p) · GW(p)

Integral theory quadrants give a perspective framework for communicating this problem.

Replies from: ESRogs
comment by ESRogs · 2019-01-13T22:26:45.086Z · LW(p) · GW(p)

Interesting! Would you be willing to give a brief summary?

Replies from: Elo
comment by Elo · 2019-01-13T23:14:53.251Z · LW(p) · GW(p)

https://integrallife.com/four-quadrants/

Tentatively share this link. Integral gives a whole deeper meaning to interiors, not just "my side of the argument" but the full meditation, mysticism, emotional depths of the subjective interior experience as it relates to the inside view. It's a larger framework but it's a good start to recognise the problem of interior/exterior split.

comment by Emiya (andrea-mulazzani) · 2021-01-02T13:34:46.172Z · LW(p) · GW(p)

Concise. The post briefly sums up the fields and directions where rationality has been developed on the site, then asks users to list the big open questions that are still left to answer.

  • The post is mostly useful to 1) people wishing to continue their training in rationality after they went through the recommendations and are looking for what they should do next, and 2) people wishing to continue the conversation on how to improve rationality systematically. The post itself lists a few of the fields that have been developed and are being developed; in the answers there are several open questions left to explore.
  • The post improved the list I made of what I should study in the future to further improve my understanding of rationality.
  • It seems to be accurate; the user who wrote it is focused on organising the LessWrong site, with many years of activity and extremely high karma. Many high-karma answers from "reliable" users also suggest the post has good information, both in the question and in the answers.
  • The claims on Forecasting and Behavioral Economics having been developing fields on LessWrong fit with what I saw on the site. I plan to read "How to Measure Anything", which the user claims is a good book, to test how useful its recommendations are, but I don't have the time to do so now.
  • An edit or a followup post listing the open problems of rationality that most answers agreed on (also considering the answers' karma) would be useful. If this has been done, the post lacks a link to it.
comment by jimrandomh · 2020-12-14T06:53:26.446Z · LW(p) · GW(p)

Including this in the 2019 Review is a bit odd, since most of the content is in the answers rather than the question, but I like how those answers set a research agenda that can be followed up on.

comment by Willa (Eh_Yo_Lexa) · 2020-12-14T18:50:01.198Z · LW(p) · GW(p)

TLDR: I nominate this post for the 2019 review because I want more people to pay attention to the sort of self- and community-wide reflection this post and the comments therein encourage! My reasons are listed below this paragraph, but the basic idea is that a central resource explicitly stating open problems, closed problems, research agendas, skills + techniques for solving specific problems, speculation about how to solve specific problems, and more would be rather helpful for the community in that we'd likely become individually and as a group significantly more capable of noticing problems and applying skills and techniques to solve those problems, plus we'd get a lot better at institutional (community) onboarding, knowledge transference, etc. etc. etc.

I think hosting (on this site) a list of such problems, techniques used to try to solve them, research agendas for solving them, etc. would be enormously helpful, not only for gaining an explicitly promoted community-wide resource to serve as a foundation for reflection, understanding, and improvement upon what our Human Rationality project is all about, but also as an excellent resource for individual Rationalists to improve our own epistemic AND instrumental rationality skills.

Especially if, similar to how some commenters categorized or sectioned off different concepts or even concept-areas, the resource had sections ranging from the meta-meta level, meta, reflection, abstract, etc. all the way down to concrete instructions on performing this one shiny ritual to improve this one particular skill or aspect of one's own self, techniques for improving instrumental rationality, techniques for building a Rationalist community in one's location, etc.

Basically, it's helpful to have a resource that points out what's wrong, what needs to be fixed, speculates about problems we might not even be cognizant of thus far, specifies what we have solved and how we've solved, what individuals and groups can do to become more epistemically, instrumentally, and prudently Rational (did I miss any types?), and more.

At present it is possible for members of this community to identify open problems, to write about them, try to solve them, but it's usually very individual-only kinds of efforts, ad-hoc, and not easily findable by many other members of the community.

And that's just what's written... at present everyone has to spend months or years of lurking, participating, etc. to get a good sense of what the open problems are, or even to learn about extant techniques for Rationalist self-improvement. I believe that the lack of a central, explicit resource about said problems and the other things aforementioned in my comment is definitely a net drag on the community's improvement, momentum, etc. Having such a resource would reduce that drag, increase the momentum of the community, and be very helpful for newcomers and long-established members alike!

Thus, such a resource would raise the sanity waterline one Rationalist at a time (since it'd be easier for each person to improve individually thanks to such a community-wide resource) while also helping us improve more quickly and deeply as a community overall.

On a side note, there used to be a Meta section on LessWrong.com in the left sidebar menu thing, but that disappeared at some point. I see that you can now filter posts by what's tagged "meta" (which is cool!), but even though that tag has hovertext saying it is intended to be used only for things that are meta for the LessWrong site itself, it seems to be used as a general meta tag by a number of individuals and thus is not as useful as it could be, its utility is diluted, etc.

Cheers, Willa

comment by habryka (habryka4) · 2019-05-21T01:05:43.198Z · LW(p) · GW(p)

Promoted to curated: There is a reason why I am excited about open questions on LessWrong and the responses on this question are a good instance of that. I've referred to the answers to this question quite a few times and am generally excited both about people adding more answers and to picking open problems from here and working on them.

I've been holding off on curating this for a while because I wanted to make sure that our question-UI is actually working well and that we have functional related-question systems such that people can actually take this question and build on it.

Replies from: Raemon
comment by Raemon · 2019-05-21T01:16:01.861Z · LW(p) · GW(p)

Concretely: I'd be interested in Wei Dai or others turning the answers here into "Related Questions", which can then be actually answered.

(Noting that, for now, related questions are by default hidden from the frontpage, although admins can manually toggle them to be displayed, and they would appear in the Open Questions page and the Recent Discussion section).


comment by Zvi · 2021-01-15T19:52:28.122Z · LW(p) · GW(p)

These are good lists of open problems, although, as Ben notes, they are bad lists if they are to be considered all the open problems. I don't think that is the fault of the post, and it's easy enough to make clear that the lists are not meant to be complete.

This seems like a spot where a good list of open problems is a good idea, but here we're mostly going to be taking a few comments. I think that's still a reasonable use of space, but not exciting enough to think of this as important.

comment by Ben Pace (Benito) · 2021-01-12T05:58:33.027Z · LW(p) · GW(p)

So, I think this post is pretty bad as a 'comprehensive' list of the open problems, or as 'the rationality agenda'. All of the top answers (Wei, Scott, Brienne, Thrasymachus) add something valuable, but I'd be pretty unhappy if this was considered the canonical answer to "what is the research agenda of LW", or our best attempt at answering that question (I think we can do a lot better). I think it doesn't address many things I care about. Here's a few examples:

  • What are the best exercises for improving your rationality? Fermi estimates, Thinking Physics, Calibration Training, are all good, but are there much better ones?
  • What are the best heuristics for how to fight Moloch? What are examples of ways in which we have sold our souls to Moloch?
  • What are practical heuristics for how to get in touch with the world in a way that is reality-revealing rather than reality-masking?
  • What challenges do we face as embedded agents, and how should we think about them?
  • (This one's a bit weird) What is the best rationality advice in the utilitarianism/deontology/virtue ethics ontology?
    • For virtue ethics, right now we think that curiosity and caring about something intensely is key. Is there a different virtue we're not noticing?
    • For deontology, we have rules like "hold off on proposing solutions" and "sit down by a clock for 5 minutes trying to solve a problem before giving up on it". What are the most important rules for rationality?
    • For utilitarianism, we have ways to improve our precise modeling like "practice fermi estimates, solve thinking physics problems, do calibration training". Are there other quantitative practices that improve our ability to bring ourselves and the world into alignment?

I also think that I don't come away from the answer feeling like I "learned" something, in the way that I do from posts that set out big problems like Embedded Agency [LW · GW], Reality-Revealing and Reality-Masking Puzzles [LW · GW], and The Treacherous Path to Rationality [LW · GW]. What Failure Looks Like [LW · GW] is a great example of setting up a set of open problems by putting in the work to communicate them. (It's focused on AI not humans so didn't list it in the above list.)

So I feel conflicted on the list. I think there's lots of valuable ideas in it, but it doesn't feel at all like something right now I'd want to give someone as our best list of the open problems. I think I might vote this between -1 and -3 at the minute.

(I notice I think I'd be pretty happy if the post title just changed to "What are some open problems in Human Rationality?". I think then I'd vote at somewhere between +1 and +4.)

Replies from: Raemon
comment by Raemon · 2021-01-15T21:38:15.738Z · LW(p) · GW(p)

So I'm not sure I'd include this in the Best Of book in the first place. If I did, I agree it'd be pretty obviously wrong to imply that the list was comprehensive. I didn't think that was implied by the post – if you ask a question, usually you don't end up getting a comprehensive answer right away. 

As a post on a live forum, I think it's pretty obvious that this isn't a comprehensive list – if it's missing things, people are supposed to just add those things, and you should expect it to need updating over time.

In the case of a printed book, I'm not sure if the right thing is to change the title, or just make sure to say "here are some specific answers this question post got." Either seems potentially fine to me.

I very much don't think the title of the LessWrong post itself should change – it's trying to ask a question, not spell out any particular expectation of an answer.

comment by Ofer (ofer) · 2019-01-13T10:59:06.655Z · LW(p) · GW(p)

Maybe: "What are the most effective interventions for making better predictions/decisions?"

It seems worthwhile to create such a list, ranked according to a single metric as measured in randomized experiments.

(if there is already such a thing please let me know)

comment by Mary Chernyshenko (mary-chernyshenko) · 2019-06-03T17:34:16.748Z · LW(p) · GW(p)

Perhaps information hygiene. There are a lot of information sources which might be called parasitic. Suppose we have some "process" that allows us to somehow find ourselves where we had steered, truth-wise among other things. Biology says "it will be eaten, possibly gradually".

I mean, in the natural way of things, the first outsiders to acknowledge rationality as a thing will be those who will swallow the practitioners. Until it happens, we may consider it to be nascent.

comment by jmh · 2019-01-14T14:14:43.532Z · LW(p) · GW(p)

I had a couple of thoughts to follow, but in thinking on them the issue of separability seemed to come up. If we think of rationality as either a mental skill to develop or some type of trait that can be developed, related to our decision making and therefore our actions, can that truly be separated from all the other things that make up being human?

Are emotions and any part of our emotional responses part of rationality? Or the enemy of rationality? (I think many people would consider emotional and rational to be opposites -- not sure about the more sophisticated view found here.) Are there certain complementary aspects/elements?

Intuitive rationality (maybe something like Hayek's idea of inarticulate knowledge)? Where does that fit?

Last is the issue of subjective values and recognizing others' rationality. What limits would that place on observing rationality in others, as well as on assessing our own rationality?

Replies from: ChristianKl
comment by ChristianKl · 2019-01-14T14:46:00.887Z · LW(p) · GW(p)

Are emotions and any part of our emotional responses part of rationality?

Our community uses the word rationality in a way where they are included. See http://www.rationality.org/resources/updates/2013/3-ways-cfar-has-changed-my-perspective-on-rationality

comment by Blam · 2019-01-14T08:46:27.957Z · LW(p) · GW(p)

Rationality goes wrong when it's used to judge people rather than beliefs. Belief-judging rationality is logical contemplation of ideas to try and remove falsehoods and inconsistencies. Person-judging rationality is, in practice, emotionally based moral judgement, usually judging people as either rational or irrational so you can dismiss them. There's no such thing as an irrational person, only irrational beliefs; it's an important distinction. Any good rationalist should use two options: you believe you understand, or you believe you don't understand. If you perceive a person as being irrational, what you are perceiving is your own projected ignorance of that person's views; they make perfect sense to that person even if you do not understand them, which means they believe or know something that you don't. Just remember rationality only applies to ideas, not people.

Replies from: Pattern
comment by Pattern · 2019-01-15T02:28:58.116Z · LW(p) · GW(p)

While disagreements are often matters of miscommunication, mistakes are possible, especially for those who are not all-knowing and perfect in their calculations.