The Foundational Toolbox for Life: Introduction
post by ExCeph · 2019-06-22T06:11:59.497Z · 2 comments
(Co-authored with Sailor Vulcan)
Follows from:
What Do We Mean by Rationality?
No Really, Why Aren't Rationalists Winning?
Related resources:
Inadequate Equilibria
Instrumental Rationality sequence
How the ‘Magic: The Gathering’ Color Wheel Explains Humanity
What’s Going On Here?
As I was growing up, I could tell that something critical was missing from many of the adults that I encountered, and from the way society as a whole functioned. People regularly made decisions that seemed obviously stupid, and the institutions they built were little better. When I eventually learned of others who agreed that society was doing a poor job of addressing problems, I was relieved, but still puzzled that nobody had managed to fix the situation yet.
As I learned the basics of rationality, I became more confused. People had created formalized methods for making better decisions based on understanding how systems and logic worked, which was amazing. However, not only had these methods failed to become the default standard of human adulthood, not only were they not used to guide policy decisions in the public and private sectors, but the methods were obscure. Barely anybody knew they existed. The world rationalists said they wanted was nowhere in sight.
I concluded that there had to be more to implementing effective policies and institutions than just knowing how things worked. If I wanted to solve any important problems like that, I would have to develop other skills. Since all I had was the proverbial hammer of understanding ideas and mechanisms, I embarked on a journey to hammer out an understanding of skills. I asked myself, what do I need to do to build the world I expect?
A look at the people who seemed to be the ones building the world prompted further questions: Why are non-rationalists having so much more impact than rationalists? What skills are they applying that rationalists aren’t? And how do these non-rationalists successfully learn and apply skills they don't fully understand?
Cognitive Filters; or, “Parallel Realities of a Plane”
As was pointed out in What Do We Mean by Rationality?, it’s not possible to form an accurate view of the world in any reasonable amount of time by tracking minute observations one by one and plugging the data into Bayesian formalisms in their full form. This is a principle of epistemic rationality.
I realized there must also be an equivalent principle of what we commonly refer to as instrumental rationality: You cannot succeed at an activity in any reasonable amount of time by tracking one by one the procedural steps and data points involved in that activity, plugging them into decision matrices, and outputting the choice that maximizes expected utility at each step.
In the same way that Bayesian formalisms in their full form are computationally intractable on most real world epistemic problems, decision matrices are computationally intractable on most real world instrumental problems. No one can calculate and obey the math any more than you can win a game of baseball by paying attention to quarks, one by one.
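To make that intractability concrete, here is a rough back-of-the-envelope sketch for chess. The branching factor and game length are commonly cited approximations, not exact figures:

```python
# Rough illustration of why full-width decision computation is intractable:
# estimate how many move sequences an exhaustive search of a chess game
# would have to evaluate.
branching_factor = 35   # ~legal moves available in a typical position
game_length = 80        # ~plies (half-moves) in a typical game

sequences = branching_factor ** game_length

# The count is a number with well over a hundred digits -- no physical
# computer can enumerate it, let alone a human brain mid-game.
print(len(str(sequences)))  # 124
```

The point isn't the exact number; any remotely realistic estimate lands astronomically beyond what can be enumerated, which is why successful play has to work at a higher level of abstraction.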
So what do successful humans pay attention to instead?
To compress information into a form which is computationally tractable for instrumental problems while retaining enough accuracy to make the results of the computation useful, humans apply cognitive filters to the information. They screen out minute details and deal with the problem at a higher level of abstraction.
It’s obvious that these filters will be lossy. That’s their advantage: they remove irrelevant information so you can fit the problem into your brain. What’s less obvious is that at each level of abstraction, each level of loss, there are multiple valid filters that can be applied. Each filter regards and disregards different aspects of the situation, just as a map can show physical or political geography. The filters are all approximately accurate—they’re just answering different questions.
As an epistemic example, let us suppose you have a trans-oceanic airplane design that has a surprisingly short lifespan. After only a few flights, its wings break, and it falls out of the sky and into the sea.
To troubleshoot, you apply a physics filter with a mechanical focus. It takes into account the shape, mass distribution, and material strength of the plane’s structure. It factors in the thrust provided by the engine, and the flow of air across the wings. All other information is filtered out to increase the computational efficiency.
Examining the plane through this physics filter tells you that everything should work fine. The wings should not be exposed to any force they can’t handle. It’s not until you apply a chemistry filter that you see that the exposed wing rivets are corroding due to high concentrations of ions in the sea air. The plane is changing from your physical model of it, and your physics filter failed to predict that. This particular physics filter doesn’t even have the concepts you need to ask the right question.
A different designer has an airplane that looks just fine through the chemistry filter, but can’t get off the ground. You apply the physics filter and immediately tell them to either make the plane go faster or lighten the wing loading: the aircraft’s wings are so small compared to its mass that it needs to travel faster to generate enough lift.
Nobody who works with airplanes uses a filter based on the more fundamental, less abstracted physical laws from which both the mechanical physics filter and the chemical filter are derived. Using such a filter would indeed tell them of all the mechanical and chemical problems of the plane. However, it would also require them to know about all the quarks in the plane.
Instead, they apply the physics filter to tell them the most important questions to ask the chemistry filter, and vice versa. If we were to design a plane right now, physics would tell us where we most have to worry about chemistry, such as which areas are most vulnerable to fracturing if the material corrodes. Conversely, chemistry would tell us where we most have to worry about physics, such as which parts of the plane should not be exposed to extreme temperatures. We can explore solutions in each filter and critique them with the other, iterating between them as much as we need to until we reach an equilibrium.
We can even look through both filters at once to exploit interactions between phenomena that aren’t all visible through the same filter. For example, we might not take van der Waals forces into account unless we see how phenomena from our chemistry and physics filters affect each other. The filters both lose information for computational tractability, but each one carves reality at different joints. Used together, they are greater than the sum of their parts. It turns out that dovetailing two or more lossy abstractions is highly effective at solving most fiddly, practical problems, and vastly more efficient than going down to the quark level, to the point where people can solve a surprisingly wide array of problems without even knowing the quark level exists.
Paradigms and Skills
Humans developed most of these cognitive filters, also known as paradigms, through brute force: generations of biological and cultural evolution. Confronted with a task, individual people adopted the paradigms that came easily and naturally to them. The paradigms that helped them succeed were the ones that caught on with humanity at large. That’s why non-rationalists collectively are in possession of many of them. The paradigms they have suffice for dealing with most of the situations they face, at least on the local scale.
It’s not enough to merely have a paradigm, though. Using it effectively requires practice. A paradigm tells you what sorts of details to pay attention to in order to get an answer, and how they fit together. It only yields a skill when it is calibrated with experiences and feedback about how specific details affect the outcome of a situation. Calibrating a paradigm into a skill is the process of using feedback to gauge the relative importance of those details that fit together, and learning to adjust how you process them to accurately predict or change the outcome of the situation.
Some analogies may express this distinction more clearly. If the paradigm you’re applying to a situation is a polynomial equation, like ax^2 + bx + c = y, where x is an input variable you know, and y is the solution you’re looking for, calibrating that paradigm into a skill is figuring out over time what the parameter values a, b, and c are for a recurring situation. If your paradigm is what sort of motion you must make with your arm to throw a ball, calibrating that paradigm into a skill means adjusting the fine dimensions of that motion to actually hit your target.
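The polynomial analogy can be sketched directly. In this toy example (the "situation" and its true parameters are hypothetical), the paradigm fixes the form y = ax^2 + bx + c, and calibration uses observed feedback to pin down a, b, and c:

```python
# The paradigm: assume the situation follows y = a*x^2 + b*x + c.
# Calibration: observe (x, y) feedback and solve for a, b, c.

def situation(x):
    # The recurring situation being observed; its true parameters
    # (a=2, b=-3, c=5) are unknown to the learner.
    return 2 * x**2 - 3 * x + 5

# Three observations suffice to determine a quadratic.
y0, y1, y2 = situation(0), situation(1), situation(2)

# Recover the parameters from the observations using finite differences.
a = (y2 - 2 * y1 + y0) / 2
b = y1 - y0 - a
c = y0

# The calibrated skill: predict the outcome for a new input.
predicted = a * 5**2 + b * 5 + c
print(a, b, c, predicted)  # recovers a=2.0, b=-3.0, c=5; predicts 40.0
```

With noisy real-world feedback you would fit the parameters approximately over many observations rather than solving exactly, which is the "over time" part of calibration.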
As the throwing analogy implies, paradigms apply to instrumental skills as well. A non-rationalist may master a skill by gradually developing the right thought patterns and paradigm for that particular skill through experience and practice. They may not know why their thinking habits work, or how to apply them to other situations, but they can succeed at the rigidly-bounded task with which they’re familiar. They can outperform people whose paradigms are less efficient, or less well calibrated.
For example, a rationalist may know how to derive the theory of playing chess from basic logic and game theory, but it doesn’t follow that they will be able to defeat a seasoned non-rationalist chess player in a timed match if they’ve never so much as played a board game before. The non-rationalist chess player will have a better sense of what to pay attention to and what effects moves will have, and thus will have more brain space free to plan ahead in the time they have.
However, people often suffer for not understanding how or why their skills work. Paradigms are defined by assumptions about how systems work and what the user’s values and goals are, and people are often ignorant of the assumptions they’re making. They might have difficulty adapting a paradigm for use in different situations, or using it in conjunction with another paradigm to enhance both. The chess player may fail to use their strategy experience to help prevent accidents, or successfully run a business, because those tasks require other skills in addition to strategy. Understanding paradigms and how to calibrate them is critical to mastering and applying skills beyond the narrow range for which one may have a natural inclination.
The Toolbox
Successful institutions and policies require those participating in them to have constructive skills. The reason so many institutions are ineffective or outright harmful is that they have no way to educate everyone who needs to participate, so those people never acquire the required skills. The same goes for movements and communities.
Many people grow up learning how to go through the motions of a skill without understanding why it works, and that makes it harder for them to use it effectively, to adapt their skill to different contexts, and to learn other similar skills. A society of such people is ill-prepared to make wise decisions, and to build and sustain structures that could solve our most important problems.
After figuring out how the paradigms and calibration of non-rationalists help them succeed at developing skills, I was able to put together a toolbox of the most basic concepts and paradigms that are the building blocks of all skills.
The concepts in this toolbox can help people frame problems they're stuck on, and equip them to conceive of and (with practice) implement better solutions. The purpose of these concepts is to form a foundation for people to more easily learn the skills to participate in effective institutions, and to continue learning more skills on their own that they may want or need.
This is the Foundational Toolbox for Life.
2 comments
comment by Viliam · 2019-06-22T22:05:23.195Z
Many people grow up learning how to go through the motions of a skill without understanding why it works, and that makes it harder for them to use it effectively, to adapt their skill to different contexts, and to learn other similar skills.
In mathematics, you address this problem by having students solve problems with different numbers, throwing in some irrelevant numbers, etc.
In programming, you give people a small task to code.
Could this be somehow generalized? I suppose the problem is that in many situations, running an experiment would be long and costly, and the outcome would more depend on random noise.
comment by ExCeph · 2019-06-23T07:23:22.129Z
Practice with different example problems is indeed important for helping people internalize the principles behind the skills they're learning. However, just being exposed to these problems doesn't always mean a person figures out what those principles are. Lack of understanding of the principles usually means a person finds it difficult to learn the skill and even more difficult to branch out to similar skills.
However, if we can explicitly articulate those principles in a way people can understand, such as illustrating them with analogies or stories, then people have the foundation to actually get the benefits from the practice problems.
For example, let's say you see numbers being sorted into Category A or Category B. Even with a large set of data, if you have no mathematical education, you could spend a great deal of effort without figuring out what rule is being used to determine which category a number belongs in. You wouldn't be able to predict the category of a given number. To succeed, you would have to derive concepts like square numbers or prime numbers from scratch, which would take most people longer than they're willing or able to spend. However, if you're already educated on such concepts, you have tools to help you form hypotheses and mental models much more easily.
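A minimal sketch of that categorization game, assuming a hypothetical sorting rule (primality) for illustration. With the concept of a prime number already in your toolbox, the rule takes one line to test; without it, the observations look arbitrary:

```python
def is_prime(n):
    # A number is prime if it is at least 2 and has no divisor
    # between 2 and its square root.
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n**0.5) + 1))

# Observed data: numbers already sorted into Category A or B by an
# unknown rule (hypothetical example data).
observed = {2: "A", 3: "A", 4: "B", 5: "A", 6: "B", 7: "A", 9: "B", 11: "A"}

# Hypothesis: "Category A means prime." Check it against every observation.
hypothesis_holds = all((category == "A") == is_prime(n)
                       for n, category in observed.items())
print(hypothesis_holds)  # True
```

The educated observer tests a handful of ready-made hypotheses like this one; the uneducated observer has to invent the very concept of primality before they can even state the rule.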
The objective here is to provide a basic conceptual framework for at least being aware of all aspects of all types of problems, not just easily quantifiable ones like math problems. If you can put bounds on them, you are better equipped to develop more advanced and specific skills to investigate and address them.
And yes, experiments on the method's effectiveness may be very difficult to design and run. I tend to measure effectiveness by whether people can grasp concepts they couldn't before, and whether they can apply those concepts with practice to solve problems they couldn't before. That's proof of concept enough for me to work on scaling it up.
Does that answer your question?