Epistemic Progress
post by ozziegooen · 2020-11-20T19:58:07.555Z
Epistemic Status: Cautiously optimistic. Much of this work is in crafting and advancing terminology in ways that will hopefully be intuitive and useful. I’m not too attached to the specifics but hope this could be useful for future work in the area.
Introduction
Strong epistemics or “good judgment” clearly seems valuable, so it’s interesting that it gets rather little Effective Altruist attention as a serious contender for funding and talent. I think this might be a mistake.
This isn’t to say that epistemics haven’t been discussed. Leaders and community members on LessWrong and the EA Forum have written extensively on epistemic rationality, “good judgment”, decision making, and so on. These communities seem to have a particular interest in “good epistemics.”
But for all the blog posts on the topic, there are few long-term, full-time efforts. We don’t have teams outlining lists of possible large-scale epistemic interventions and estimating their cost-effectiveness, like an epistemics version of the Happier Lives Institute. We don’t have a Global Priorities Institute equivalent trying to formalize and advance the ideas from The Sequences. We have very little work outlining what optimistic epistemic scenarios we could hope for 10 to 200 years from now.
I intend to personally spend a significant amount of time on these issues going forward. I have two main goals. One is to better outline what I think work in this area could look like and how valuable it might be to pursue. The second is to do work in this area in ways that both test the area and hopefully help lay the groundwork that makes it easier for more people to join in.
One possible reason for the lack of effort in this space is that the current naming and organization is a bit of a mess. We have a bundle of related terms without clear delineations. I imagine that if I asked different people how they would differentiate “epistemics”, “epistemic rationality”, “epistemology”, “decision making”, “good judgment”, “rationality”, “good thinking”, and the many subcategories of these things, I’d get many conflicting and confused answers. So part of my goal is to highlight some clusters particularly worth paying attention to and to formalize what they mean in a way that is useful for making decisions going forward.
I’ll begin by introducing two (hopefully) self-evident ideas: “Epistemic Progress” and “Effective Epistemics.” You can think of “Epistemic Progress” as the “epistemics” subset of “Progress Studies”, and “Effective Epistemics” as the epistemic version of “Effective Altruism.” I don’t mean these as authoritative cornerstones, but rather as pragmatic intuitions to get us through the next few posts. These names are chosen mainly because I think they would be the most obvious to the audience I expect to be reading this.
Effective Epistemics
“Effective Epistemics” is essentially “whatever seems to work at making individuals or groups of people more correct about things for pragmatic purposes.” It’s a bit higher level than Value of information. It is not concerned with whether something is theoretically true or with precise definitions of formal knowledge; rather, it is about which kinds of practices seem to make humans and machines smarter at coming to the truth in ways we can verify. If wearing purple hats leads to improvement, that would be part of effective epistemics.
There’s a multitude of things that could help or hinder epistemics. Intelligence, personal nutrition, room lighting, culture, economic incentives, mathematical knowledge, access to expertise, add-ons to Trello. If “Effective Epistemics” were an academic discipline, it wouldn’t attempt to engineer advanced epistemic setups, but rather it would survey the space of near and possible options to provide orderings. Think “cause prioritization.”
Effective Altruism typically focuses on maximizing the potential of large monetary donations and personal careers. I’d imagine Effective Epistemics would focus more on maximizing the impact of smaller amounts of effort. For example, perhaps it would find that if a group of forecasters each spent 30 hours studying information theory, they could do a 2% better job in their future work. My guess is that estimating epistemic interventions would be more challenging than human-welfare cost-effectiveness calculations, so things would probably begin at a more coarse level. Think longtermist prioritization (vague and messy), not global welfare prioritization (detailed estimates of lives saved per dollar).
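To make this concrete, here is a toy sketch of the kind of coarse ordering such work might produce. Every intervention name and number below is a hypothetical placeholder, not a real estimate:

```python
# A toy prioritization table for epistemic interventions.
# All names and numbers are hypothetical placeholders, not real estimates.
interventions = [
    # (name, hours of effort per person, estimated % improvement in future work)
    ("30h studying information theory", 30, 2.0),
    ("10h of calibration training", 10, 1.5),
    ("5h setting up forecasting tooling", 5, 0.5),
]

# Rank by estimated improvement per hour, analogous to coarse
# cost-effectiveness orderings in cause prioritization.
ranked = sorted(interventions, key=lambda x: x[2] / x[1], reverse=True)
for name, hours, gain in ranked:
    print(f"{name}: {gain / hours:.3f}% improvement per hour")
```

Even an ordering this crude would be progress over having no ordering at all.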
Perhaps the most important goal for “Effective Epistemics” is to reorient readers to what we care about when we say epistemics. I’m quite paranoid about people defining epistemics too narrowly and ignoring interventions that might be wildly successful, but strange.
This paranoia largely comes from the writings of Peter Drucker on having correct goals in order to actually optimize for the right things. For example, a school “optimizing education for getting people jobs” might at one point focus on high school students because that is where the impact is highest. But if things change and it recognizes new opportunities to educate adults, maybe it should jump to prioritizing night school. Perhaps with online education it should close down its physical building and become an online-only nonprofit focused on international students without great local schools. It can be very easy to fall into the pattern of trying to “be a better conventional high school than the other conventional high schools, on the conventional measures”, even if what one really cares about upon reflection is maximizing the value of education.
Epistemic Progress
“Epistemic Progress” points to substantial changes in epistemic abilities. Progress Studies is an interesting new effort to study the long-term progress of humanity. So far it seems to have a strong emphasis on scientific and engineering efforts, which makes a lot of sense, as these are comparatively easy to measure over time. There have been a few interesting posts on epistemics, but these are a minority. This post on Science in Ancient China seems particularly relevant.
Historic epistemic changes are challenging to define and measure, but they are still possible to study. It seems clear in retrospect that the Renaissance and the Enlightenment brought significant gains, and that the Internet led to a complex mesh of benefits and losses. One could eventually create indices of “epistemic abilities” and track them over time and across communities.
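As a very rough sketch of what such an index might look like (the indicators, weights, and scores below are all invented for illustration):

```python
# A hypothetical "epistemic abilities" index: a weighted average of
# normalized indicator scores, tracked over time. All values are invented.

def epistemic_index(indicators, weights):
    total = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in weights) / total

weights = {"forecasting_calibration": 0.5, "numeracy": 0.3, "source_evaluation": 0.2}

# Invented scores for one community at two points in time.
scores_2010 = {"forecasting_calibration": 0.40, "numeracy": 0.55, "source_evaluation": 0.35}
scores_2020 = {"forecasting_calibration": 0.50, "numeracy": 0.60, "source_evaluation": 0.45}

print(epistemic_index(scores_2010, weights))  # ~0.435
print(epistemic_index(scores_2020, weights))  # ~0.52
```

The hard part, of course, is choosing indicators that actually track epistemic ability rather than proxies for wealth or schooling.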
One aspect I’d like to smuggle into “Epistemic Progress” is a focus on progress going forward, or perhaps “epistemic futurism”. Epistemic abilities might change dramatically in the future, and it would be interesting to map how that could happen. Epistemic Progress could refer to both minor and major progress; both seem important.
Why not focus on [insert similar term] instead?
I’m not totally sure that “epistemics” is the right frame for my focus, as opposed to the more generic “rationality” or the more specific “institutional decision making.” As noted earlier, there are several overlapping terms floating around, and there are tradeoffs for each.
First, I think it doesn’t really matter. What matters is that we have some common terminology with decent enough definitions, and that we use it to produce research findings. Many of the findings should be the same whether one calls the subject “epistemics”, “epistemic rationality”, “intellectual development”, or so on. If in the future a more popular group comes out with a different focus, hopefully they can make use of the work produced by this line of reasoning. The important thing is that this work gets done, not what we decide to call it.
As for why it’s my choice among the various options, I have a few reasons. “Epistemics” is an expression with rather positive connotations. Hopefully the choice of “epistemics” over “group thinking” would tilt research to favor actors that are well-calibrated rather than merely intelligent. An individual or group with great decision-making or reasoning abilities, but several substantial epistemic problems, could do correspondingly severe amounts of damage. A group with great epistemics could also be destructive, but a large class of failures (intense overconfidence) may be excluded.
I prefer “epistemics” to “decision making” because it gets more to the heart of things. I’ve found when thinking through the use of Guesstimate that often, by the time you’re making an explicit decision, it’s too late. Decisions are downstream of general beliefs. For example, someone might decide to buy a house in order to shorten their commute, without ever questioning whether the worldview that produced their lifestyle was itself dramatically suboptimal. Perhaps their fundamental beliefs should have been continuously questioned, leading them to forgo their position and become a Buddhist monk.
I’ve been thinking about using a specific modifier word to differentiate “epistemics” as I refer to it from other conceptions. I’m trying to go with the colloquial definition that has emerged within Rationality and Effective Altruist circles, but it should be noted that this definition carries different connotations from other uses of the term. For this essay, two new terms feel like enough. I’m going to reflect on this for future parts; if you have ideas or preferences, please post them in the comments.
Next Steps
Now that we have the key terms, we can start to get into specifics. I currently have a rough outline for a few more posts in this sequence. The broader goal is to help secure a foundation and some motivation for further work in this space. If you have thoughts or feedback, please reach out or post in the comments.
16 comments, sorted by top scores.
comment by Davis_Kingsley · 2020-11-21T09:54:53.081Z
Strongly agree and am excited to see this -- this area seems deeply neglected.
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-11-22T00:45:21.598Z
My impression of where this would lead is something like this:
While enormous amounts of work have been done globally to develop and employ epistemic aids, relatively little study has been done to explore which epistemic interventions are most useful for specific problems.
We can envision an analog to the medical system. Instead of diagnosing physical sickness, it diagnoses epistemic illness and prescribes solutions on the basis of evidence.
We can also envision two wings of this hypothetical system. One is the "public epistemic health" wing, which studies mass interventions. Another is patient-centered epistemic medicine, which focuses on the problems of individual people or teams.
"Effective epistemics" is the attempt to move toward mechanistic theories of epistemology that are equivalent in explanatory power to the germ theory of disease. Whether such mechanistic theories can be found remains to be seen. But there was also a time during which medical research was forced to proceed without a germ theory of disease. We'd never have gotten medicine to the point where it is today if early scientists had said "we don't know what causes disease, so what's the point in studying it?"
So if there is a reasonable expectation that formal study would uncover mechanisms with equivalent explanatory power, pursuing it would be a good use of resources, considering the extreme importance of correct decision-making for every problem humanity confronts.
Is this a good way to look at what you're trying to do?
↑ comment by ozziegooen · 2020-11-22T02:28:48.801Z
Kudos for the thinking here, I like the take.
There's a whole lot to "making people more correct about things." I'm personally much less focused on trying to make sure the "masses" believe things we already know than I am on improving the epistemic abilities of the "best" groups. From where I'm standing, I imagine even the "best" people have a long way to improve. I personally barely feel confident about a bunch of things and am looking for solutions that would let me be more confident. More "super intense next-level prediction markets", less "fighting conspiracy theories".
I do find the topic of the epistemics of "the masses" interesting; it's just different. CSER did some work in this area, and I also liked the podcast about Taiwan's approach (treating lies with epidemic models, similar to the public-health framing you mention).
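For anyone unfamiliar with that framing, here is a minimal sketch of the kind of SIR-style model that approach adapts to misinformation. The parameters and the mapping of compartments to beliefs are illustrative assumptions, not taken from CSER's or Taiwan's actual work:

```python
# A minimal SIR-style sketch of misinformation spread (illustrative only).
# susceptible -> "infected" (believes the lie) -> recovered (debunked/immune)

def simulate(beta=0.3, gamma=0.1, s=0.99, i=0.01, r=0.0, days=60, dt=1.0):
    for _ in range(days):
        new_infections = beta * s * i * dt  # exposure to believers spreads the claim
        new_recoveries = gamma * i * dt     # fact-checking moves believers to "recovered"
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

print(simulate())            # baseline spread
print(simulate(gamma=0.3))   # faster debunking ("treatment") leaves fewer believers
```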
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-11-22T03:08:42.764Z
I have an idea along these lines: adversarial question-asking.
I have a big concern about various forms of forecasting calibration.
Each forecasting team establishes its reputation by showing that its predictions, in aggregate, are well-calibrated and accurate on average.
However, questions are typically posed by a questioner who's part of the forecasting team. This creates an opportunity for them to ask a lot of softball questions that are easy for an informed forecaster to answer correctly, or at least to calibrate their confidence on.
By advertising their overall level of calibration and average accuracy, they can "dilute away" inaccuracies on hard problems that other people really care about. They gain a reputation for accuracy, yet somehow don't seem so accurate when we pose a truly high-stakes question to them.
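Here is a minimal numerical sketch of that dilution effect, with made-up forecasts:

```python
# A made-up example of how easy questions can "dilute" poor accuracy
# on a hard question in an aggregate Brier score (lower is better).

def brier(forecast, outcome):
    return (forecast - outcome) ** 2

easy = [brier(0.95, 1)] * 9   # nine softball questions, 0.0025 each
hard = [brier(0.80, 0)]       # one hard miss: confident and wrong, 0.64

print(f"Easy-only Brier: {sum(easy) / len(easy):.4f}")          # 0.0025
print(f"Aggregate Brier: {(sum(easy) + sum(hard)) / 10:.4f}")   # ~0.066, still looks great
```

The aggregate still looks impressive even though the one question people actually cared about was badly missed.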
This problem could be at least partly solved by having an external, adversarial question-asker. Even better would be some sort of mechanical system for generating the questions that forecasters must answer.
For example, imagine that you had a way to extract every objectively answerable question posed by the New York Times in 2021.
Currently, their headline article is "Duty or Party? For Republicans, a Test of Whether to Enable Trump"
Though it does not state this in so many words, one of the primary questions it raises is whether the Michigan board that certifies vote results will certify Biden's victory ahead of the Electoral College vote on Dec. 14.
Imagine that one team's job was to extract such questions from a newspaper. Then they randomly selected a certain number of them each day, and posed them to a team of forecasters.
In this way, the work of superforecasters would be chained to the concerns of the public, rather than spent on questions that may or may not be "hackable."
To me, this is a critically important and, to my knowledge, totally unexplored question that I would very much like to see treated.
↑ comment by ozziegooen · 2020-11-22T17:33:27.117Z
Comparing groups of forecasters who worked on different question sets using only simple accuracy measures like Brier scores is basically not feasible. You're right that forecasters can prioritize easier questions and do other hacks.
This post goes into detail on several incentive problems:
https://forum.effectivealtruism.org/posts/ztmBA8v6KvGChxw92/incentive-problems-with-current-forecasting-competitions
I don't get the impression that platforms like Metaculus or GJP bias their questions much to achieve better Brier scores. This is one reason why they typically focus more on their calibration graphs and on direct question comparisons between platforms.
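For concreteness, here is a minimal sketch of the computation behind such a calibration graph, using made-up forecasts rather than real platform data:

```python
# Bucket forecasts by stated probability, then compare each bucket's stated
# probability with the observed frequency of the event (made-up data).
from collections import defaultdict

forecasts = [(0.2, 0), (0.3, 0), (0.3, 1), (0.6, 1),
             (0.7, 1), (0.8, 0), (0.9, 1), (0.9, 1)]

buckets = defaultdict(list)
for p, outcome in forecasts:
    buckets[round(p, 1)].append(outcome)

for p in sorted(buckets):
    observed = sum(buckets[p]) / len(buckets[p])
    print(f"stated {p:.1f} -> observed {observed:.2f} (n={len(buckets[p])})")
```

A well-calibrated forecaster's points lie near the diagonal (stated probability roughly equals observed frequency) regardless of how easy the questions were, which is part of why calibration is harder to game through question selection than raw accuracy.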
All that said, I definitely think we have a lot of room to get better at doing comparisons of forecasting between platforms.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-11-23T06:04:53.200Z
I’m less interested in comparing groups of forecasters with each other based on Brier scores than in getting a referendum on forecasting generally.
The forecasting industry has a collective interest in maintaining its reputation for predictive accuracy on general questions. I want to know whether they are in fact accurate on general questions, or whether some of their apparent success rests on choosing the questions they address with some cunning.
comment by Chris_Leong · 2020-11-20T22:13:35.625Z
Yeah, for a long time I've thought that it's somewhat unfortunate that LW-style rationality hasn't been able to penetrate academia the way Effective Altruism has. That would have the advantage of leading to more rigorous research on these topics and also more engagement with other points of view.
↑ comment by ozziegooen · 2020-11-20T22:22:43.524Z
A few quick thoughts here:
- Effective Altruism has been actively trying to penetrate academia. There are several people basically working in academia full-time (mainly around GPI, CSER, CHAI, and FHI) focused on EA, but very few focused on LessWrong-style rationality. It seems to me that the main candidates to do this all decided to work on AI safety directly.
- I'd note that in order to introduce "LW-style rationality" to academia, you'd probably want to chunk it up and focus accordingly. I think epistemics is basically one subset.
- I personally expect much of the valuable work on Rationality/Epistemics to come from nonprofits and individuals, not academic institutions.
comment by TAG · 2020-11-21T19:25:01.069Z
“whatever seems to work at making individuals or groups of people more correct about things for pragmatic purposes.”
Correct or pragmatic? There are useful falsehoods and useless truths, so you need to make your mind up which you are aiming for.
↑ comment by ozziegooen · 2020-11-21T19:33:15.973Z
I think this is a really important question, one I'd like to explore further in future work.
I agree that there are areas where being locally incorrect can be pragmatically useful. Real people are bad at lying, so it's often locally EV-positive to believe something that is false.
The distinction I was focused on here, though, is between correct truths that are valuable and ones that aren't. Among correct beliefs, there's a broad spectrum of how useful those beliefs are. I think we could get pretty far optimizing for valuable truths before we get into the territory of marginally valuable untruths.
How to Measure Anything gets into the distinction between information that is merely true and information that is both true and highly valuable, i.e. relevant for important decisions. That's what I was going for when I wrote this.
↑ comment by TAG · 2020-11-24T01:12:55.425Z
What are the useful truths that the mainstream doesn't know?
↑ comment by ozziegooen · 2020-11-24T02:30:15.265Z
I'm not sure what you are looking for. Most people know very little of the space of all the things one could find out in books and the like, much of which is useful to some extent. If you're curious which things I specifically think are true but the public doesn't yet know, then continue reading my blog posts; it's a fair bit of stuff, but rather specific.
↑ comment by TAG · 2020-12-04T15:56:31.674Z
There is a large and old thing dedicated to teaching people a subset of the useful things in books, and that is education. Rationalism is a small and new thing that is effectively claiming to do the same thing better... so it would be helpful to have some concrete examples.
comment by adamShimi · 2020-11-21T17:41:06.281Z
Really excited about this! I'm not planning to work on it, but I'll definitely read more of what you discover and write on the topic!
Effective Altruism typically focuses on maximizing the potential of large monetary donations and personal careers. I’d imagine Effective Epistemics would focus more on maximizing the impact of smaller amounts of effort.
I was confused by the link with EA, but that clarified it nicely!
About epistemic progress, the idea of historical studies made me think of things like the history of atheism (as in this cool post from a sequence on Machiavelli), and other shifts in intellectual perspectives and norms. This is definitely a topic I find fascinating, and I would love to see more work on it.
comment by edoarad · 2020-11-26T07:26:48.473Z
This sounds like an amazing project and I find it very motivating, especially the questions around what we'd like future epistemics to look like and around prioritizing different tools and training.
As I'm sure you are aware, there is a wide academic literature covering many related aspects, including the formalization of rationality, descriptive analysis of personal and group epistemics, and the building of training programs. If I understand you correctly, a GPI analog here would be something like an interdisciplinary research center that attempts to find general frameworks with which it would later be possible to compare interventions aimed at improving epistemics, and to standardize a goal of "epistemic progress", with a focus on the most initially promising subdomains?