In the presence of disinformation, collective epistemology requires local modeling
post by jessicata (jessica.liu.taylor) · 2017-12-15T09:54:09.543Z · LW · GW · 39 comments
In Inadequacy and Modesty, Eliezer describes modest epistemology:
How likely is it that an entire country—one of the world’s most advanced countries—would forego trillions of dollars of real economic growth because their monetary controllers—not politicians, but appointees from the professional elite—were doing something so wrong that even a non-professional could tell? How likely is it that a non-professional could not just suspect that the Bank of Japan was doing something badly wrong, but be confident in that assessment?
Surely it would be more realistic to search for possible reasons why the Bank of Japan might not be as stupid as it seemed, as stupid as some econbloggers were claiming. Possibly Japan’s aging population made growth impossible. Possibly Japan’s massive outstanding government debt made even the slightest inflation too dangerous. Possibly we just aren’t thinking of the complicated reasoning going into the Bank of Japan’s decision.
Surely some humility is appropriate when criticizing the elite decision-makers governing the Bank of Japan. What if it’s you, and not the professional economists making these decisions, who have failed to grasp the relevant economic considerations?
I’ll refer to this genre of arguments as “modest epistemology.”
I see modest epistemology as attempting to defer to a canonical perspective: a way of making judgments that is a Schelling point for coordination. In this case, the Bank of Japan has more claim to canonicity than Eliezer does regarding claims about Japan's economy. I think deferring to a canonical perspective is key to how modest epistemology functions and why people find it appealing.
In social groups such as effective altruism, canonicity is useful when it allows for better coordination. If everyone can agree that charity X is the best charity, then it is possible to punish those who do not donate to charity X. This is similar to law: if a legal court makes a judgment that is not overturned, that judgment must be obeyed by anyone who does not want to be punished. Similarly, in discourse, it is often useful to punish crackpots by requiring deference to a canonical scientific judgment.
It is natural that deferring to a canonical perspective would be psychologically appealing, since it offers a low likelihood of being punished for deviating while allowing deviants to be punished, creating a sense of unity and certainty.
An obstacle to canonical perspectives is that epistemology requires using local information. Suppose I saw Bob steal my wallet. I have information about whether he actually stole my wallet (namely, my observation of the theft) that no one else has. If I tell others that Bob stole my wallet, they might or might not believe me depending on how much they trust me, as there is some chance I am lying to them. Constructing a more canonical perspective (e.g. in a court of law) requires integrating this local information: for example, I might tell the judge that Bob stole my wallet, and my friends might vouch for my character.
If humanity formed a collective superintelligence that integrated local information into a canonical perspective at the speed of light using sensible rules (e.g. something similar to Bayesianism), then there would be little need to exploit local information except to transmit it to this collective superintelligence. Obviously, this hasn't happened yet. Collective superintelligences made of humans must transmit information at the speed of human communication rather than the speed of light.
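To make "integrating local information using sensible rules" slightly more concrete, here is a minimal, hypothetical sketch of the kind of pooling such a collective superintelligence would do if reports were honest and independent (the function and numbers below are illustrative, not anything specified in this post):

```python
# Toy sketch of Bayesian pooling: several observers hold private evidence about
# a binary claim ("Bob stole the wallet"), and a hypothetical aggregator pools
# their reports into one shared posterior. All numbers are made up.
import math

def pool_reports(prior: float, likelihood_ratios: list[float]) -> float:
    """Combine a shared prior with observers' likelihood ratios.

    Each ratio is P(report | claim true) / P(report | claim false).
    Honesty and independence of reports are assumed, which is exactly
    what fails in a world with deception.
    """
    log_odds = math.log(prior / (1 - prior)) + sum(map(math.log, likelihood_ratios))
    return 1 / (1 + math.exp(-log_odds))

# One eyewitness with strong evidence, two friends with weak character evidence.
print(pool_reports(prior=0.01, likelihood_ratios=[50.0, 2.0, 2.0]))  # ~0.67
```

The assumptions doing the work in this toy (honest, independent reports, instantly transmitted) are exactly what the rest of this post is about losing.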
In addition to limits on communication speed, collective superintelligences made of humans have another difficulty: they must prevent and detect disinformation. People on the internet sometimes lie, as do people off the internet. Self-deception is effectively another form of deception, and is extremely common as explained in The Elephant in the Brain.
Mostly because of this, current collective superintelligences leave much to be desired. As Jordan Greenhall writes in this post:
Take a look at Syria. What exactly is happening? With just a little bit of looking, I’ve found at least six radically different and plausible narratives:
• Assad used poison gas on his people and the United States bombed his airbase in a measured response.
• Assad attacked a rebel base that was unexpectedly storing poison gas and Trump bombed his airbase for political reasons.
• The Deep State in the United States is responsible for a “false flag” use of poison gas in order to undermine the Trump Insurgency.
• The Russians are responsible for a “false flag” use of poison gas in order to undermine the Deep State.
• Putin and Trump collaborated on a “false flag” in order to distract from “Russiagate.”
• Someone else (China? Israel? Iran?) is responsible for a “false flag” for purposes unknown.
And, just to make sure we really grasp the level of non-sense:
• There was no poison gas attack, the “white helmets” are fake news for purposes unknown and everyone who is in a position to know is spinning their own version of events for their own purposes.
Think this last one is implausible? Are you sure? Are you sure you know the current limits of the war on sensemaking? Of sock puppets and cognitive hacking and weaponized memetics?
All I am certain of about Syria is that I really have no fucking idea what is going on. And that this state of affairs — this increasingly generalized condition of complete disorientation — is untenable.
We are in a collective condition of fog of war. Acting effectively under fog of war requires exploiting local information before it has been integrated into a canonical perspective. In military contexts, units must make decisions before contacting a central base using information and models only available to them. Syrians must decide whether to flee based on their own observations, observations of those they trust, and trustworthy local media. Americans making voting decisions based on Syria must decide which media sources they trust most, or actually visit Syria to gain additional info.
While I have mostly discussed differences in information between people, there are also differences in reasoning ability and willingness to use reason. Most people most of the time aren’t even modeling things for themselves, but are instead parroting socially acceptable opinions. The products of reasoning could perhaps be considered a form of logical information and treated similarly to other information.
In the past, I have found modest epistemology aesthetically appealing on the basis that sufficient coordination would lead to a single canonical perspective that you can increase your average accuracy by deferring to (as explained in this post). Since then, aesthetic intuitions have led me to instead think of the problem of collective epistemology as one of decentralized coordination: how can good-faith actors reason and act well as a collective superintelligence in conditions of fog of war, where deception is prevalent and creation of common knowledge is difficult? I find this framing of collective epistemology more beautiful than the idea of immediately deferring to a canonical perspective, and it is a better fit for the real world.
I haven't completely thought through the implications of this framing (that would be impossible), but so far my thinking has suggested a number of heuristics for group epistemology:
- Think for yourself. When your information sources are not already doing a good job of informing you, gathering your own information and forming your own models can improve your accuracy and tell you which information sources are most trustworthy. Outperforming experts often doesn't require complex models or extraordinary insight; see this review of Superforecasting for a description of some of what good amateur forecasters do.
- Share the products of your thinking. Where possible, share not only opinions but also the information or model that caused you to form the opinion. This allows others to verify and build on your information and models rather than just memorizing "X person believes Y", resulting in more information transfer. For example, fact posts will generally be better for collective epistemology than a similar post with fewer facts; they will let readers form their own models based on the info and have higher confidence in these models.
- Fact-check information people share by cross-checking it against other sources of information and models. The more this shared information is fact-checked, the more reliably true it will be. (When someone is wrong on the internet, this is actually a problem worth fixing).
- Try to make information and models common knowledge among a group when possible, so they can be integrated into a canonical perspective. This allows the group to build on this, rather than having to re-derive or re-state it repeatedly. Contributing to a written canon that some group of people is expected to have read is a great way to do this.
- When contributing to a canon, seek strong and clear evidence where possible. This can result in a question being definitively settled, which is great for the group's ability to reliably get the right answer to the question, rather than having a range of "acceptable" answers that will be chosen from based on factors other than accuracy.
- When taking actions (e.g. making bets), use local information available only to you or a small number of others, not only canonical information. For example, when picking organizations to support, use information you have about these organizations (e.g. information about the competence of people working at this charity) even if not everyone else has this info. (For a more obvious example to illustrate the principle: if I saw Bob steal my wallet, then it's in my interest to guard my possessions more closely around Bob than I otherwise would, even if I can't convince everyone that Bob stole my wallet).
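As a toy illustration of this last heuristic (hypothetical organization names and numbers, purely for concreteness): acting on your local information can flip the decision you would have made from canonical information alone.

```python
# Hypothetical sketch: when choosing where to donate, fold in information only
# you have (e.g. your read on a team's competence) rather than acting solely on
# the canonical, publicly shared estimates. All names and numbers are made up.

canonical_value_per_dollar = {"CharityA": 3.0, "CharityB": 2.0}

# Your private multiplier on each org, e.g. from knowing the people who work
# there. Others can't verify this, but it is still evidence available to you.
private_adjustment = {"CharityA": 0.5, "CharityB": 1.8}

def my_estimate(org: str) -> float:
    return canonical_value_per_dollar[org] * private_adjustment[org]

best = max(canonical_value_per_dollar, key=my_estimate)
print(best)  # "CharityB": the canonical ranking flips once local info is included
```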
39 comments
Comments sorted by top scores.
comment by RyanCarey · 2017-12-16T20:09:12.157Z · LW(p) · GW(p)
A few thoughts that have been brewing, vaguely relevant to your post...
One thing that I find is often disappointingly absent from LW discussions of epistemology is how much the appropriate epistemology depends on your goals and your intellectual abilities. If you are someone of median intelligence who just wants to carry out a usual trade like making shoes or something, you can largely get by with received wisdom. If you are a researcher, your entire job consists of coming up with things that aren't already present in the market of ideas, and so using at least some local epistemology (or 'inside view', or 'figuring things out') is a job requirement. If you are trying to start a start-up, or generate any kind of invention, again, you usually have to claim to have some knowledge advantage, and so you need a more local epistemology.
Relatedly, even for any individual person, the kind of thinking I should use depends very much on context. Personally, in order to do research, I try to do a lot of my thinking by myself, in order to train myself to think well. Sure, I do engage in a lot of scholarship too, and I often check my answers through discussing my thinking with others; still, I do a lot more independent thinking than I did two years ago. But if I am ever making a truly important decision, such as whom to work for, it makes sense for me to be much more deferential, and to seek the advice of people who I know to be the best at making that decision, and then to defer to them to a fairly large degree (notwithstanding that they lack some information, which I should adjust for).
It would be nice to see people relax blanket pronouncements (not claiming this is particularly worse in this post compared to elsewhere) in order to give a bit more attention to this dependence on context.
Replies from: AnnaSalamon, Benito, Chris_Leong
↑ comment by AnnaSalamon · 2017-12-16T20:59:13.387Z · LW(p) · GW(p)
RyanCarey writes:
If you are someone of median intelligence who just wants to carry out a usual trade like making shoes or something, you can largely get by with received wisdom.
AFAICT, this only holds if you're in a stable sociopolitical/economic context -- and, more specifically still, the kind of stable sociopolitical environment that provides relatively benign information-sources. Examples of folks who didn't fall into this category: (a) folks living in eastern Europe in the late 1930s (especially if Jewish, but even if not; regardless of how traditional their trade was); (b) folks living in the Soviet Union (required navigating a non-explicit layer of received-from-underground knowledge); (c) folks literally making shoes during time-periods in which shoe-making was disrupted by the industrial revolution. It is to my mind an open question whether any significant portion of the US/Europe/etc. will fall into the "can get by largely with received wisdom" reference class across the next 10 years. (They might. I just actually can't tell.)
Replies from: TAG, RyanCarey
↑ comment by RyanCarey · 2017-12-17T02:50:42.213Z · LW(p) · GW(p)
It seems like we're anchoring excessively on the question of sufficiency, when what matters is the net expected benefit. If we rephrase and ask "are there populations that are made worse off, on expectation, by more independent thought?", the answer is clearly yes; I think that's the question we should be asking (and it fits the point I'm making).
In order to research existential risk, and to actually survive, yes, we need more thought, although this is the kind of research I had in mind in my original comment.
Replies from: Benquo, vanessa-kosoy
↑ comment by Benquo · 2017-12-17T19:01:13.900Z · LW(p) · GW(p)
Independent thought helps not just for x-risk, but for personal well-being things like not joining the army, not playing the slot machines, and getting off Facebook. In other words, it helps not just for accepting action requests culturally construed as exotic, but for rejecting some normal action requests. "Stable sociopolitical/economic context" is actually a fairly strong requirement, given how much the current global narrative is based on exponential growth.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2017-12-17T08:27:25.476Z · LW(p) · GW(p)
The question is, what do you mean by "independent thought"? If "independent thought" means "mistrust everyone" then clearly it can be a harmful heuristic. If "independent thought" means "use your own thinking faculties to process all evidence you have, including the evidence which consists of the expression of certain beliefs by certain other people" then it's not clear to me there are significant populations that would be harmed by it. If there are, it means that the society in question has wise and benevolent leaders s.t. for large swaths of the population it is impossible to verify (given their cognitive abilities) that these leaders are indeed wise and benevolent. It seems to me that arriving at such a situation would require a lot of luck, given that the leadership of a society is ultimately determined by the society itself.
↑ comment by Ben Pace (Benito) · 2017-12-17T22:26:14.904Z · LW(p) · GW(p)
My view is that any world where the value of possible outcomes is heavy-tailed distributed (x-risk is a thing, but also more everyday things like income, and I'd guess life satisfaction) is a world where the best opportunities are nonobvious and better epistemology will thus have very strong returns.
I maybe am open to an argument that a 10th century peasant literally has no ability to have a better life, but I basically think that it holds for everyone I talk to day-to-day.
Replies from: RyanCarey
↑ comment by RyanCarey · 2017-12-17T23:11:49.896Z · LW(p) · GW(p)
So you're saying rationality is good if your utility is linear in the quantity of some goods? (For most people it is more like logarithmic, right?) But it seems that you want to say that independent thought is usually useful...
I'm sure the 10th century peasant does have ways to have a better life, but they just don't necessarily involve doing rationality training, which pretty obviously does not (and should not) help in all situations. Right?
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2018-01-06T03:32:47.914Z · LW(p) · GW(p)
Yes, it seems to me that we should care about some things linearly, though I’ll have to think some more about why I think that.
↑ comment by Chris_Leong · 2020-04-04T11:26:20.199Z · LW(p) · GW(p)
"One thing that I find is often disappointingly absent from LW discussions of epistemology is how much the appropriate epistemology depends on your goals and your intellectual abilities" - Never really thought of it that way, but makes a lot of sense.
comment by paulfchristiano · 2017-12-18T07:59:18.372Z · LW(p) · GW(p)
I think it's worth asking: can we find a collective epistemology which would work well for the people who use it, even if everyone else behaved adversarially? (More generally: can we find a policy that works well for the people who follow it, even if others behave adversarially?)
The general problem is clearly impossible in some settings. For example, in the physical world, without secure property rights, an adversary can just shoot you and there's not much you can do. No policy is going to work well for the people who follow it in that setting, unless the other people play nice or the group of people following it is large enough to overpower those who don't.
In other settings it seems quite easy, e.g. if you want to make a series of binary decisions about who to trust, and you immediately get a statistical clue when someone defects, then even a small group can make good decisions regardless of what everyone else does.
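As a toy version of that easy setting (the particular rule and numbers here are just for concreteness, not a claim about the right protocol): a weighted-majority rule that shrinks the influence of anyone caught giving bad advice does fine as long as at least one source is reliable, regardless of what the others do.

```python
# Toy illustration of the "easy" setting: repeated binary decisions with
# feedback, where sources caught being wrong lose weight (the classic
# weighted-majority idea). The advice and outcomes below are made up.

def weighted_majority(advice_rounds, outcomes, penalty=0.5):
    """advice_rounds[t][i] is source i's yes/no advice in round t;
    outcomes[t] is what turned out to be true."""
    weights = [1.0] * len(advice_rounds[0])
    decisions = []
    for advice, truth in zip(advice_rounds, outcomes):
        yes = sum(w for w, a in zip(weights, advice) if a)
        no = sum(w for w, a in zip(weights, advice) if not a)
        decisions.append(yes >= no)
        # the "statistical clue when someone defects": shrink wrong sources
        weights = [w * (penalty if a != truth else 1.0)
                   for w, a in zip(weights, advice)]
    return decisions, weights

# One reliable source and two unreliable ones: the reliable source's weight
# quickly comes to dominate, so later decisions track it.
advice = [[True, False, True], [True, False, False], [True, False, True]]
truth = [True, True, True]
print(weighted_majority(advice, truth))
```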
For collective epistemology I think you can probably do well even if 99% of people behave adversarially (including impersonating good faith actors until it's convenient to defect---in this setting, "figure out who is rational, and then trust them" is a non-starter). And more generally, given secure property rights I think you could probably organize an entire rational economy in a way that is robust to large groups defecting strategically, yet still pretty efficient.
The adversarial setting is of course too pessimistic. But I think it's probably a good thing to work on anyway in cases where it looks possible, since (a) it's an easy setting to think about, (b) in some respects the real world is surprisingly adversarial, (c) robustness lets you relax about lots of things and is often pretty cheap.
I'm way more skeptical than you about maintaining a canonical perspective; I almost can't imagine how that would work well given real humans, and its advantages just don't seem that big compared to the basic unworkability.
Replies from: Benquo, ESRogs, Benquo
↑ comment by Benquo · 2017-12-18T16:22:17.279Z · LW(p) · GW(p)
It seems to me that part of the subtext here is that humans for the most part track a shared perspective, and can't help but default to it quite often, because (a) we want to communicate with other humans, and (b) it's expensive to track the map-territory distinction.
For instance, let's take the Syria example. Here are some facts that I think are tacitly assumed by just about everyone talking about the Syria question, without evaluating whether there is sufficient evidence to believe them, simply because they are in the canonical perspective:
- Syria is a place.
- People live there.
- There is or was recently some sort of armed conflict going on there.
- Syria is adjacent to other places, in roughly the spatial arrangement a map would tell you.
- Syria contains cities in which people live, in roughly the places a map would tell you. The people in those cities for the most part refer to them by the names on the map, or some reasonably analogous name in their native language.
- One of the belligerents formerly had almost exclusive force-projection capacity over the whole of Syria. The nominal leader of this faction is Bashar al-Assad.
- ISIL/ISIS was a real organization, that held real territory.
The level of skepticism that would not default to the canonical perspective on facts like that seems - well, I don't know of anyone who seems to have actually internalized that level of skepticism of canon, aside from the President of the United States. He seems to have done pretty well for himself, if he in fact exists.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-12-19T07:15:06.889Z · LW(p) · GW(p)
Robust collective epistemology need not look like "normal epistemology but really skeptical." Treating knowledge as provisional and tentative doesn't require a high level of skepticism. It may involve some revision to the default way humans think, but that ship had already sailed well before the enlightenment.
It seems reasonable to believe X "merely" because it is falsifiable, no one credible objects, and you've personally seen no evidence to the contrary. That protocol probably won't lead you astray, but for most interesting claims it is going to be easy for an adversary to DoS it (since even if the claim is true someone could object without compromising their own credibility) and so you are going to need to rely on more robust fallbacks.
Replies from: Benquo
↑ comment by Benquo · 2017-12-19T16:26:23.026Z · LW(p) · GW(p)
My point isn't that you should doubt that sort of stuff strongly, it's that it seems to me to be prohibitively computationally expensive to evaluate it at all rather than passively accepting it as background observations presumed true. How, in practice, does one treat that sort of knowledge as provisional and tentative?
My best guess is that someone with the right level of doubt in social reality ends up looking like they have a substantially higher than normal level of psychosis, and ends up finding it difficult to track when they're being weird.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-12-20T02:35:53.082Z · LW(p) · GW(p)
How, in practice, does one treat that sort of knowledge as provisional and tentative?
A belief being tentative is a property of your algorithm-for-deciding-things, not what a state of mind feels like from the inside. You can get a lot of mileage by e.g. (a) independently revisiting tentative claims with small probability, (b) responding appropriately when someone points out to you that a load-bearing tentative assumption might be wrong.
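For concreteness, (a) might look something like this hypothetical toy (the class, names, and recheck rate are made up, and it covers only point (a), not a full protocol): the tentativeness lives in the algorithm rather than in a feeling of doubt.

```python
# Hypothetical toy of (a): cached background claims are re-examined with small
# probability each time they are relied on, instead of being doubted constantly.
import random

class TentativeBeliefs:
    def __init__(self, recheck_prob=0.01):
        self.claims = {}              # claim -> (value, how_to_recheck)
        self.recheck_prob = recheck_prob

    def add(self, claim, value, recheck):
        self.claims[claim] = (value, recheck)

    def use(self, claim):
        value, recheck = self.claims[claim]
        if random.random() < self.recheck_prob:
            value = recheck()         # occasionally pay the cost of re-deriving it
            self.claims[claim] = (value, recheck)
        return value

beliefs = TentativeBeliefs()
# A placeholder recheck; in reality this would be "go look at primary sources".
beliefs.add("Syria is a place", True, recheck=lambda: True)
print(beliefs.use("Syria is a place"))
```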
I don't think this question should be expected to have a really short answer, even if there are ironclad collective epistemology protocols. It's like saying "how, in practice, do people securely communicate over untrusted internet infrastructure?" There is a great answer, but even once you have a hint that it's possible it will still take quite a lot of work to figure out exactly how the protocol works.
Replies from: Benquo
↑ comment by Benquo · 2017-12-27T16:46:50.373Z · LW(p) · GW(p)
Do we actually have a disagreement here? I'm saying that actually-existing humans can't actually do this. You seem to be saying that it's conceivable that future humans might develop a protocol for doing this, and it's worth exploring.
These can both be true! But in the meantime we'd need to explore this with our actually-existing minds, not the ones we might like to have, so it's worth figuring out what the heck we're actually doing.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-12-28T16:39:05.329Z · LW(p) · GW(p)
I agree that it would take some work to figure out how to do this well.
I would say "figure out how to do this well" is at a similar level of complexity to "figure out what the heck we're actually doing." The "what should we do" question is more likely to have a clean and actionable answer. The "what do we do" question is more relevant to understanding the world now at the object level.
↑ comment by ESRogs · 2017-12-27T06:43:16.387Z · LW(p) · GW(p)
I'm way more skeptical than you about maintaining a canonical perspective
Which part of the post are you referring to?
I read Jessica as being pretty down on the canonical perspective in this post. As in this part:
Since then, aesthetic intuitions have led me to instead think of the problem of collective epistemology as one of decentralized coordination: how can good-faith actors reason and act well as a collective superintelligence in conditions of fog of war, where deception is prevalent and creation of common knowledge is difficult? I find this framing of collective epistemology more beautiful than the idea of immediately deferring to a canonical perspective, and it is a better fit for the real world.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-12-28T16:44:22.915Z · LW(p) · GW(p)
I'm on board with the paragraph you quoted.
I'm objecting to:
Try to make information and models common knowledge among a group when possible, so they can be integrated into a canonical perspective. This allows the group to build on this, rather than having to re-derive or re-state it repeatedly.
and:
This can result in a question being definitively settled, which is great for the group's ability to reliably get the right answer to the question, rather than having a range of "acceptable" answers that will be chosen from based on factors other than accuracy.
↑ comment by Benquo · 2017-12-18T16:54:30.096Z · LW(p) · GW(p)
For example, in the physical world, without secure property rights, an adversary can just shoot you and there's not much you can do. No policy is going to work well for the people who follow it in that setting, unless the other people play nice or the group of people following it is large enough to overpower those who don't.
There are actual processes by which people have managed to transition from a state in which they were not protected against violence, to a state in which they were protected against some violence, or were able to coordinate via high-trust networks within a broader low-trust context. The formation of states and other polities is a real thing that happens, and to write off the relevant coordination problem as impossibly hard seems bizarre, so perhaps I'm misunderstanding you. It has historically been fairly popular to attribute this sort of transition to gods, but that seems more like reifying our confusion than actually alleviating it.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-12-19T06:39:48.987Z · LW(p) · GW(p)
The "everyone else behaves adversarially" assumption is very pessimistic. Lots of problems are solvable in practice but insoluble if there is an adversarial majority.
Collective epistemology may be such a case. But I suspect that it's actually much easier to build robust collective epistemology than to build a robust society, in the sense that it can be done under much weaker assumptions.
Replies from: Benquo
↑ comment by Benquo · 2017-12-19T16:28:38.657Z · LW(p) · GW(p)
Hmm. Are you assuming e.g. that we don't know who's part of a kinship group etc., so there's no a priori way of inferring who's likely to have aligned interests? That seems like an interesting case to model, but it's worth noting that the modern era is historically unusual in resembling it.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-12-20T02:24:32.911Z · LW(p) · GW(p)
I'm just asking: can you design a system that works well, for the people who adopt it, without assuming anything about everyone else? So in particular: without making assumptions about aligned interests, about rationality, about kindness, etc.
When this harder problem is solvable, I think it buys you a lot. (For example, it may spare you from having to go literally form a new society.)
Replies from: whpearson
↑ comment by whpearson · 2017-12-20T15:01:26.434Z · LW(p) · GW(p)
It seems that in very adversarial settings there is limited incentive to share information. Also, resources used for generating object-level information might have to be deployed trying to figure out the goals of any adversarial behavior. Adversarial behavior directed at you might not be against your goals. For example, an allied spy might lie to you to maintain their cover.
For these reasons I am skeptical of a productive group epistemology.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-12-20T17:40:02.879Z · LW(p) · GW(p)
I meant to propose the goal: find a strategy such that the set H of people who use it do "well" (maybe: "have beliefs almost as accurate as if they had known each others' identities, pooled their info honestly, and built a common narrative") regardless of what others do.
There aren't really any incentives in the setup. People in H won't behave adversarially towards you, since they are using the proposed collective epistemology. You might as well assume that people outside of H will behave adversarially to you, since a system that is supposed to work under no assumptions about their behavior must work when they behave adversarially (and conversely, if it works when they behave adversarially then it works no matter what).
Replies from: whpearson
↑ comment by whpearson · 2017-12-21T14:50:42.766Z · LW(p) · GW(p)
I think the group will struggle with accurate models of things where data is scarce. This scarcity might be due to separation in time or space between the members and the phenomenon being discussed. Or could be due to the data being dispersed.
This fits with physics and chemistry being more productive than things like economics or studying the future. In the latter kinds of fields, narratives that serve certain group members can take hold and be very hard to dislodge.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-12-22T04:16:19.253Z · LW(p) · GW(p)
Economics and futurism are hard domains for epistemology in general, but I'm not sure that they'd become disproportionately harder in the presence of disinformation.
I think the hard cases are when people have access to lots of local information that is hard for others to verify. In futurism and economics people are using logical facts and publicly verifiable observations to an unusual extent, so in that sense I'd expect trust to be unusually unimportant.
Replies from: whpearson
↑ comment by whpearson · 2017-12-23T22:53:57.467Z · LW(p) · GW(p)
I was just thinking that we would be able to do better than being Keynesian or believing in the singularity if we could aggregate information from everyone reliably.
If we could form a shared narrative and get reliable updates from chip manufacturers about the future of semiconductors, we could make better predictions about the pace of computational improvement than if we assume they are saying things with half an eye on their share price.
There might be an asymptote you can reach on doing "well" under these mildly adversarial settings. I think knowing the incentives of people helps a lot, so you can tell when people are incentivised to deceive you.
Are you assuming you can identify people in H reliably?
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2017-12-28T03:34:52.445Z · LW(p) · GW(p)
If you can identify people in H reliably then you would ignore everyone outside of H. The whole point of the game is that you can't tell who is who.
Replies from: whpearson
↑ comment by whpearson · 2017-12-31T18:17:09.862Z · LW(p) · GW(p)
So what you can do is:
- Ignore all non-gears-level feedback: Getting feedback is important for epistemological correctness, but in an adversarial setting feedback may be an attempt to make you believe something to the sender's benefit. Ignore all karma scores, for example. If, however, someone can tell you how and why you are going wrong (or right), that can be useful, if you agree with their reasoning.
- Only update on facts that logically follow from things you already believe. If someone has followed an inference chain further than you, you can use their work safely.
- If arguments rely on facts new to you, look at the world and see if those facts are consistent with what is around you.
That said, as I don't believe in a sudden switch to utopia, I think it important to strengthen the less-adversarial parts of society, so I will be seeking those out. "Start as you mean to go on," seems like decent wisdom, in this day and age.
↑ comment by Benquo · 2017-12-17T18:47:05.865Z · LW(p) · GW(p)
This isn't strong enough. The more people tend to defer to a canonical perspective, the more one can control others' actions through altering the canonical perspective.
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2019-11-25T08:01:30.439Z · LW(p) · GW(p)
Although less so, to the extent that this advice...
When taking actions (e.g. making bets), use local information available only to you or a small number of others, not only canonical information. For example, when picking organizations to support, use information you have about these organizations (e.g. information about the competence of people working at this charity) even if not everyone else has this info. (For a more obvious example to illustrate the principle: if I saw Bob steal my wallet, then it's in my interest to guard my possessions more closely around Bob than I otherwise would, even if I can't convince everyone that Bob stole my wallet).
...is heeded.
comment by Ben Pace (Benito) · 2017-12-16T00:59:33.976Z · LW(p) · GW(p)
Things I liked:
- This has my favourite argument for modest epistemology I've read, and in general connected together a bunch of concepts previously discussed here
- Really useful metaphors I'll probably use again (humanity as collective superintelligence bottlenecked by e.g. speed, fog of war)
- Concrete and useful epistemic recommendations at the end
- The post is short
So I've promoted it to Featured.
comment by JenniferRM · 2017-12-21T08:41:11.532Z · LW(p) · GW(p)
I really like your promotion of fact checking :-)
Also, I'd like to especially thank you for offering the frame where every human group is potentially struggling to coordinate on collective punishment decisions from within a fog of war.
I had never explicitly noticed that people won't want their pursuit of justice to seem like unjustified aggression to "allies from a different bubble of fog", and for this reason might want to avoid certain updates in their public actions.
Like, I even had the concept of altruistic punishment and I had the concept of a fog of war, but somehow they never occurred in my brain at the same time before this. Thank you!
If I was going to add a point of advice, it would be to think about being part of two or three "epistemic affinity groups". The affinity group model suggests these groups should be composed of maybe 3 to 15 people each and they should be built around a history of previous prolonged social contact. When the fog of war hits, reach out to at least one of your affinity groups!
↑ comment by Benquo · 2017-12-18T16:47:29.153Z · LW(p) · GW(p)
I don't think Jessica's actually making that conflation, though:
[C]ollective superintelligences made of humans have another difficulty: they must prevent and detect disinformation. People on the internet sometimes lie, as do people off the internet. Self-deception is effectively another form of deception, and is extremely common [...]
In the past, I have found modest epistemology aesthetically appealing on the basis that sufficient coordination would lead to a single canonical perspective that you can increase your average accuracy by deferring to [...]. Since then, aesthetic intuitions have led me to instead think of the problem of collective epistemology as one of decentralized coordination: how can good-faith actors reason and act well as a collective superintelligence in conditions of fog of war, where deception is prevalent and creation of common knowledge is difficult? I find this framing of collective epistemology more beautiful than the idea of immediately deferring to a canonical perspective, and it is a better fit for the real world.
The piece explicitly distinguishes between canonical perspective and truth, and explicitly argues that a strong reason to avoid deferring to the canonical perspective is that it is frequently made of lies.
comment by TAG · 2017-12-19T16:03:54.664Z · LW(p) · GW(p)
Think for yourself.
But learn how first.
Replies from: Benquo
↑ comment by Benquo · 2017-12-27T16:47:41.282Z · LW(p) · GW(p)
Impossible.
Replies from: Benquo
↑ comment by Benquo · 2017-12-27T16:50:46.147Z · LW(p) · GW(p)
I think the reason why this seems like a plausibly good idea instead of an obviously terrible one is that we conflate thinking with acting. You can't learn to think for yourself properly without repeatedly trying, which will necessarily involve thinking for yourself improperly. But, in the meantime, you should expect to be wrong about lots of things and should probably conform to the customs of some people with a track record of not totally failing.
Relevant: Be secretly wrong