Aumann-agreement is common
post by tailcalled · 2023-08-26T20:22:03.738Z · LW · GW · 33 comments
Thank you to Justis Mills for proofreading and feedback. This post is also available on my substack.
Aumann's agreement theorem [? · GW] is a family of theorems which say that if people trust each other and know each other's opinions, then they agree with each other. Or phrased another way, if people maintain trust with each other, then they can reach agreement. (And some variants of the theorem, which take computational factors into consideration, suggest they can do so quite rapidly.)
The original proof is pretty formal and confusing, but a simpler heuristic argument is that for an honest, rational agent, the mere fact of them professing an opinion [? · GW] can be strong evidence [LW · GW] to another rational agent: if the speaker's stated probability is higher than their prior, then they must have seen corresponding evidence to justify that opinion.
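Here is a minimal sketch of that heuristic in Python (the prior and the 20:1 likelihood ratio are made-up numbers for illustration): a listener who shares the speaker's prior and trusts their honesty can back out the strength of the evidence from the reported posterior alone, and updating on it lands the two in exact agreement.

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def expit(x):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-x))

# Shared prior: both parties initially give the hypothesis a 10% chance.
prior = 0.10

# The speaker privately sees evidence worth a 20:1 likelihood ratio
# and honestly reports the resulting posterior (~0.69).
speaker_posterior = expit(logit(prior) + math.log(20))

# A trusting listener can recover the strength of the evidence from
# the report alone, without ever seeing the evidence itself...
implied_log_likelihood_ratio = logit(speaker_posterior) - logit(prior)

# ...and updating on it reproduces the speaker's posterior exactly.
listener_posterior = expit(logit(prior) + implied_log_likelihood_ratio)
print(speaker_posterior, listener_posterior)  # both ~0.69
```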
Some people find this confusing, and feel like it must be wrong because it doesn't apply to most disagreements. I think these people are wrong because they are not sufficiently expansive in what they think of as a disagreement. The notion of disagreement that Aumann's agreement theorem applies to is when the people assign different probabilities to events; this is a quite inclusive notion which covers many things that we don't typically think of as disagreements, including cases where one party has information about a topic and the other party has no information.
My vacation in Norway relied tons on Aumann agreements
Recently, I had a vacation in Norway with my wife.
In order to get there, and to get around, we needed transport. At first we disagreed with people who provided transport there, as we didn't know of many specific means of transport, only vaguely that there would be some planes and ships, without knowing which ones. But my wife had heard that there was something called the "Oslo ferry", so we Aumann-agreed that this was an option, and decided to investigate further.
We disagreed with the company that provided the Oslo ferry, as we didn't know what their website was, so we asked Google, and it provided some options for what the ferry might be; we Aumann-agreed with Google and then went investigating from there. One website we found claimed to sell tickets to the ferry; at first we disagreed with the website about when we could travel, as we didn't know the times of the ferry, but then we read which times it claimed were available, and Aumann-updated to that.
We also had to find some things to do in Norway. Luckily for us, some people at OpenAI had noticed that everyone had huge disagreements with the internet as nobody had really memorized the internet, and they thought that they could gain some value by resolving that disagreement, so they Aumann-agreed with the internet by stuffing it into a neural network called ChatGPT. At first, ChatGPT disagreed with us about what to visit in Norway and suggested some things we were not really interested in, but we informed it about our interests, and then it quickly Aumann-agreed with us and proposed some other things that were more interesting.
One of the things we visited was a museum for an adventurer who built a raft and sailed in the ocean. Prior to visiting the museum, we had numerous disagreements with it, as e.g. we didn't know that one of the people on the raft had fallen in the ocean and had to be rescued. But the museum told us this was the case, so we Aumann-agreed to believe it. Presumably, the museum learnt about it through Aumann-agreeing with the people on the raft.
One example of an erroneous Aumann agreement was with the train company Vy. They had said that they could get us a train ticket on the Bergen train, and we had Aumann-agreed with that. However, due to a storm, their train tracks were broken, and the company website kept promising availability on the train until the last moment, so we didn't get corrected by Vy.
But we were not saved by empirically seeing the damaged tracks, or by rationally reasoning that the train was not available. Instead, we were saved because we told someone about our plans to take the Bergen train, expecting them to Aumann-agree to a belief that we would take the train; instead they kept disagreeing, and told us that the route was flooded and the train would be cancelled. This made us Aumann-agree that we had to find some other method, and we asked Google whether there were any flights; it suggested some, which we Aumann-agreed to.
Later, I told my dad, and now also you, about the trip. Prior to talking about it, I expect you disagreed as you didn't know anything about it, but at least I'm pretty sure my dad Aumann-agreed to the things I told him, and I suspect you did so too.
Aumannian disagreements quickly disappear, and so "disagreement" connotes/denotes non-Aumannian disagreements
The disagreements mentioned in my story all happened between parties with reasonable levels of trust, and they mostly involved one party lacking information that the other party had, so they were quickly resolved by transferring that information. Often, merely noticing the specifics of the disagreement is sufficient to transfer the information and resolve it.
Meanwhile, in politics, disagreements often occur between people who have conflicting goals, where it is reasonable to suspect that one side is misrepresenting things because they care more about gaining power than about accurately informing the people they talk to.
Because the preconditions for Aumannian agreement don't hold when you suspect the counterparty to be biased, such disagreements won't be resolved so quickly, and instead stick around long-term. But if we form our opinions about what disagreements are like from what disagreements stick around long-term, then that means we are filtering out the disagreements where Aumann's conditions hold.
Thus, "disagreement" comes to connote (or maybe even denote), "difference in opinion between people who don't trust each other" rather than simply "difference in opinion".
Most Aumannian disagreements are a simple lack of awareness
The Bayesian paradigm doesn't fundamentally[1] distinguish between disbelieving a proposition because you have no information about it and disbelieving it because you have observed contradictory information. Consider e.g. picking two random people and making a statement such as "Marv Elsher is dating Abrielle Levine" about them. You have no idea who these people are, and most people are not dating each other, so you should rationally assign this a very low probability.
But that's not because you actively disbelieve it from contradictory evidence! In fact you might not even think of yourself as having had a belief about it ahead of time. If there is in fact a Marv Elsher who is dating Abrielle Levine, then Marv assigns a very high probability to this statement, while you wouldn't even have thought of it without this post.
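To put a rough, made-up number on it: there are on the order of 10^10 people alive, and each of them is dating at most a handful of the others, so the prior probability that one specific named stranger is dating another specific named stranger is somewhere around 10^-10. That tiny probability comes purely from base rates, not from any evidence against the couple.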
If you consider all of the cases where people assign different probabilities to symbolically expressible propositions, then almost all of them will be something along these lines, because there's tons of random local information which you simply don't have access to. Thus, if you want to think of the typical case of a disagreement that Aumann's agreement theorem refers to, you should think "Person A has observed X and Person B does not even have awareness of what's going on around X, let alone any evidence on X itself".
Aumann agreement is extremely efficient and powerful
For most of the updates that happened during the vacation, it would simply not be feasible to verify things by oneself. Often they concerned things that were very far away in both space and time. Sometimes they concerned things that happened in the past, which it wouldn't even be physically possible to verify. But even for the things you could verify, it would take orders of magnitude more time and resources than just Aumann-updating.
Aumann agreement is about pooling, not moderation
In my examples, people generally didn't converge to a compromise position; instead they adopted the counterparty's positions wholesale. This is generally the correct picture to have in mind for Aumann agreement. While the exact way you update can vary depending on the prior and the evidence, one simple example I like is this:
You both start with your log-odds being some vector x determined by a shared prior (i.e. you start out agreeing). You then observe some evidence y, updating your log-odds to x+y, while they observe some independent evidence z, updating their log-odds to x+z. If you exchange all your information, then this updates your shared log-odds to x+y+z, which is most likely going to be an even more radical departure from x than either x+y or x+z alone.
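Here is a minimal numeric sketch of that pooling rule (in Python, with made-up numbers). Because independent evidence adds in log-odds space, the pooled belief lands further from the prior than either party's belief, rather than at a compromise between them:

```python
import math

def expit(x):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-x))

x = 0.0  # shared prior log-odds: both parties start at 50%
y = 2.0  # your private evidence, as a log-likelihood-ratio
z = 1.5  # their independent private evidence

print(expit(x + y))      # your belief alone:  ~0.88
print(expit(x + z))      # their belief alone: ~0.82
print(expit(x + y + z))  # pooled belief:      ~0.97, more extreme than
                         # either, not a midpoint between 0.88 and 0.82
```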
Aumann conditions information on trust
Surely sometimes it seems like Aumann agreement should cause people to moderate, right? Like in politics, if you have spent a lot of time absorbing one party's ideology, and your interlocutor has spent a lot of time absorbing the other party's ideology, but you then poke lots of holes in each other's arguments?
I think in this case, learning that there are holes in the arguments you learned from your party may be reason to doubt the trustworthiness of your party, especially when they cannot fix those holes. Since you Aumann-updated a great deal on your party's view specifically because you trusted them, this should also make you un-update away from their views, presumably moderating them.
(I think this has massive implications for collective epistemics, and I've gradually been developing a theory of collective rationality based on this, but it's not finished yet and the purpose of this post is merely to grok the agreement theorem rather than to lay out that theory.)
There may also be less elaborate ways in which you might moderate due to Aumann agreement, e.g. if contradictory information cancels out.
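(In the log-odds sketch above, this would be the case where your evidence is y = +2 and theirs is z = -2: the pooled log-odds x + y + z collapses back to x, leaving the shared posterior sitting between the two starting beliefs.)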
A lot of Aumann-updates are on promises, history or universals
Many of the most obvious Aumann updates in my story were about promises; for instance that an interlocutor would provide me a certain transport at a certain time from one location to another.
One might think this suggests that promises have a unique link to Aumann's agreement theorem, but I think this is actually because promises are an unusually prevalent type of information due to the combo of:
- People's capacity to make reliable claims about them.
- Being useful enough in practice to be worth sharing.
- Covering a diverse and open-ended set of possibilities.
For instance, if you promise me a sandwich in your kitchen, then you can ensure that your promise is true by paying rent to keep ownership of your kitchen, buying and storing ingredients for the sandwich so they are ready for assembly, and then assembling the sandwich for me when it is time.
Meanwhile, if you tell me that there is an available sandwich in someone else's kitchen, then because you don't maintain control over that kitchen, it might cease to be true once we actually reach the time when I need it, so you can't reliably make claims about it. Furthermore, even if you could, I would probably not get away with taking it, so it would not be useful to me.
You could probably make reasonably reliable claims about certain things you've seen in the past, but most of those are not very useful precisely because they happened in the past. For example, while I know from the museum that that guy on the raft fell in the water, I don't have anything to use that information for. That said, claims about the past are sometimes useful, e.g. to attribute outcomes to causes or to make generalizations.
You can read a physics textbook and do a lot of useful Aumann updates from it, mainly because physics is a "universal" subject; but this also means that it is a closed subject with a bounded amount of information. There can't be an "alternate physics" with alternate particles and strengths of attraction, in the way that there can be an "alternate plane company" with alternate flight times.
Promises, history and universals aren't meant to be a complete taxonomy; they're just patterns I've noticed.
[1] It is distinguished through the history of updating from prior to posterior, but the distinction is not "stored" anywhere in the probability distribution, so the beliefs themselves are treated the same, even if their histories are different.
33 comments
Comments sorted by top scores.
comment by Martin Randall (martin-randall) · 2024-12-16T19:23:59.843Z · LW(p) · GW(p)
This is excellent. Before reading this post in 2023, I had the confusion described. Roughly, that Aumann agreement is rationally correct, but this mostly doesn't happen, showing that mostly people aren't rational. After reading this post, I understood that Aumann agreement is extremely common, and the exceptions where it doesn't work are best understood as exceptions. Coming back to read it in 2024, it seems obvious. This is a symptom of the post doing its job in 2023.
This is part of a general pattern. When I think that human behavior is irrational, I know nothing. When I see how human behavior can be modeled as rational, I have learned something. Another example is how people play The Ultimatum Game. When I was shown how turning down an "unfair" share can be modeled as a rational response to coercion, I had a better model with better predictions and a better appreciation of my fellow humans.
The post is short, clearly written, seeks to establish a single thing, establishes it, and moves on without drama. Perhaps this is why it didn't get a lot of engagement when it was posted. The 2023 review is a chance to revisit this.
I could build on this post by describing how Aumann agreement occurs in prediction markets. On Manifold there are frequently markets where some group of people think "90% YES" and others think "90% NO" and there are big feelings. If this persists over a long enough period, with no new evidence coming in, the market settles at some small percentage range with people on both sides hiding behind walls of limit orders and scowling at each other. To some extent this is because both sides have built up whatever positions satisfy their risk tolerance. But a lot of it is the horrible feeling that the worst people in the world may be making great points.
comment by Unnamed · 2023-09-30T01:35:37.302Z · LW(p) · GW(p)
I like the examples of quickly-resolved disagreements.
They don't seem that Aumannian to me. They are situations where one person's information is a subset of the other person's information, and they can be quickly resolved by having the less-informed person adopt the belief of the more-informed person. That's a special case of Aumann situations which is much easier to resolve in practice than the general case.
Aumann-in-general involves reaching agreement even between people who have different pieces of relevant information, where the shared posterior after the conversation is different from either person's belief at the start of the conversation. The challenge is: can they use all the information they have to reach the best possible belief, when that information is spread across multiple people? That's much less of a challenge when one person starts out with the full combined set of information.
(Note: my understanding of Aumann's theorem is mostly secondhand, from posts like this [LW · GW] and this [LW · GW].)
↑ comment by Martin Randall (martin-randall) · 2024-12-16T19:28:18.229Z · LW(p) · GW(p)
This is pretty common in any joint planning exercise. My friend and I are deciding which movie to see together. We share relevant information about what movies are available and what movies we each like and what movies we have seen. We both conclude that this movie here is the best one to see together.
comment by Dagon · 2023-08-28T14:41:56.375Z · LW(p) · GW(p)
I like these stories, and I very much agree that human-level trust far outweighs distrust on many many topics, which leads to easy and correct agreement. I believe that other people's beliefs CAN be evidence.
But Aumann's theorem still doesn't apply to humans, and invoking that label for this kind of human-level communication is quite misleading.
↑ comment by tailcalled · 2023-08-28T15:13:39.385Z · LW(p) · GW(p)
In what ways and in what situations does it mislead?
↑ comment by Dagon · 2023-08-28T22:45:30.430Z · LW(p) · GW(p)
It implies a more rigorous backing for the trust and agreement than is justified. Sometimes (often, even) it works, but it's nowhere near as universal or trustworthy as a theorem.
↑ comment by tailcalled · 2023-08-29T07:38:47.442Z · LW(p) · GW(p)
This is really abstract. It's hard for me to argue a universal negative, and you might genuinely have thought of something misleading that I haven't thought of. Can you give a concrete example of a situation in which it misleads, so we can more properly talk about it?
comment by qvalq (qv^!q) · 2023-09-28T07:29:59.951Z · LW(p) · GW(p)
I thought the surprising thing about Aumann agreement was that ideal agents with shared priors will come to agree even if they can't intentionally exchange information, and can see only the other's assigned probability. [I checked Wikipedia; with common knowledge of each other's probabilistic belief about something, ideal agents with shared priors have the same belief. There's something about dialogues, but Aumann didn't prove that. I was wrong.]
Your post seems mostly about exchange of information. It doesn't matter which order you find your evidence, so ideal agents with shared priors that can exchange everything they've seen will always come to agree.
I don't think this requires understanding Aumann's theorem.
Is this wrong, or otherwise unimportant?
↑ comment by tailcalled · 2023-09-28T07:40:43.035Z · LW(p) · GW(p)
Knowing each other's probability for a statement requires exchanging information about which statement the probability is assigned to. In basically all of my examples, this was the information exchanged.
↑ comment by qvalq (qv^!q) · 2023-09-28T08:06:58.600Z · LW(p) · GW(p)
Thank you. I was probably wrong.
In most examples, there's no common knowledge. In most examples, information is only transmitted one way. This does not allow for Aumann agreement. One side makes one update, then stops.
If someone tells me their assigned probability for something, that turns my probability very close to theirs, if I think they've seen nearly strictly better evidence about it than I have. I think this explains most of your examples, without referencing Aumann.
I think I don't understand what you mean. What's Aumann agreement? How's it a useful concept?
↑ comment by tailcalled · 2023-09-29T15:35:26.215Z · LW(p) · GW(p)
It is true that the original theorem relies on common knowledge. In my original post, I phrased it as "a family of theorems" because one can prove various theorems with different assumptions yet similar outcomes. This is a general feature in math, where one shouldn't get distracted by the boilerplate [LW · GW] because the core principle is often more general than the proof. So e.g. the principle you mention, of "If someone tells me their assigned probability for something, that turns my probability very close to theirs, if I think they've seen nearly strictly better evidence about it than I have.", is something I'd suggest is in the same family as Aumann's agreement theorem.
The reason for my post is that a lot of people find Aumann's agreement theorem counterintuitive and feel like its conclusion doesn't apply to typical real-life disagreements, and therefore assume that there must be some hidden condition that makes it inapplicable in reality. What I think I showed is that Aumann's agreement theorem defines "disagreement" extremely broadly and once you think about it with such a broad conception it does indeed appear to generally apply in real life, even under far weaker conditions than the original proof requires.
I think this is useful partly because it suggests a better frame for reasoning about disagreement. For instance, I provide lots of examples of disagreements that rapidly dissipate, and so if you wish to know why disagreements persist, it can be helpful to think about how persistent disagreements differ from the examples I list. For example, many persistent disagreements are about politics, and for politics there are strong incentives for bias, so maybe some people who make political claims are dishonest. This suggests that conflict theory (the idea that political disagreement is due to differences in interests) is more accurate than mistake theory (the idea that political disagreement is due to making reasoning mistakes, which does not seem to predict that disagreement would be specific to politics, but which people might assume is plausible if they haven't thought about general tendencies for agreement).
More generally I have a whole framework of disagreement and beliefs that I intend to write about.
comment by TAG · 2023-08-29T13:35:17.925Z · LW(p) · GW(p)
The existence of agreement and trust, as outcomes, isn't evidence for Aumann as the mechanism producing them. The mechanism could be quite non-rational. And there is evidence that it is, because groups of people tend to share beliefs, many of which are quite arbitrary and not evidence based.
↑ comment by tailcalled · 2023-08-29T14:46:11.092Z · LW(p) · GW(p)
Can you list a bunch of examples of groups as well as examples of irrational beliefs each group has that you have in mind, so we can discuss them more concretely?
↑ comment by TAG · 2023-08-29T14:51:45.520Z · LW(p) · GW(p)
Religion and stuff.
↑ comment by tailcalled · 2023-08-29T15:04:27.170Z · LW(p) · GW(p)
I'm not sure religion is a strong counterexample? Religion has declined a lot since the discovery of evolution, and from what I understand, it used to earn its trust by serving as the center for life wisdom and morality. Even today, people typically become religious because people they rationally trust a lot (their parents) attribute great things to the religion.
Certainly there are pathologies that arise (and religion is guilty of lots of them), where the trust leads to false beliefs, systemic vulnerabilities, etc. This is related to the theory of collective rationality that I briefly allude to when saying:
(I think this has massive implications for collective epistemics, and I've gradually been developing a theory of collective rationality based on this, but it's not finished yet and the purpose of this post is merely to grok the agreement theorem rather than to lay out that theory.)
I plan on writing more on this later, some of which might dissect the pathologies of religion.
↑ comment by TAG · 2023-08-29T15:56:32.326Z · LW(p) · GW(p)
90% of the world is religious. And look at polarised politics as well.
Even today, people typically become religious because people they rationally trust a lot (their parents)
You have no evidence that it is rational. Kids trust their parents before they reach the age of reason.
↑ comment by tailcalled · 2023-08-29T16:59:21.124Z · LW(p) · GW(p)
You seem to be using the term "rational" in a different way from how I use it, if you restrict it so that e.g. babies don't count as using rational inference methods.
↑ comment by TAG · 2023-08-30T18:28:49.504Z · LW(p) · GW(p)
Well, I think I'm using it in a way that's appropriate for Aumann's theorem -- involving a level of conscious, reflective awareness about your own thought processes and those of others.
It would have been helpful to tell me what you mean.
↑ comment by tailcalled · 2023-08-30T18:52:02.272Z · LW(p) · GW(p)
I mean "rational" as in "an algorithm which produces map-territory correspondences". So for instance we have two reasons to believe that "trust your parents" is a rational heuristic for a baby (i.e. that trusting your parents produces map-territory correspondences, such as a belief of "my parents are trustworthy" which corresponds to a territory of having trustworthy parents):
- Mechanistically, parents are sampled from humans who we know are somewhat trustworthy in general, and they are conditioned on having children, which probably positively correlates with or at least doesn't negatively correlate with trustworthiness. Relationally, parents tend to care about their children and so are especially trustworthy to their children.
- Logically, trusting your parents tends to produce a lot of beliefs, and evolutionarily, if those beliefs tended to be mistaken (e.g. if your parents would tend to encourage you to do dangerous stuff, rather than warn you about dangerous stuff), then that would be selected against, leading people to not trust their parents. So the fact that they do trust their parents is evidence that trusting one's parents is (generally) rational.
↑ comment by TAG · 2023-09-05T13:29:30.885Z · LW(p) · GW(p)
I mean “rational” as in “an algorithm which produces map-territory correspondences”
Most such algorithms aren't the Aumann mechanism.
So for instance we have two reasons to believe that “trust your parents” is a rational heuristic for a baby
Infants have no choice but to trust their parents: they can't swap them for other parents and they can't go it alone. So they would trust their parents even if their parents were irrational. So their trust doesn't have to be the outcome of any rational mechanism, least of all Aumann agreement.
(i.e. that trusting your parents produces map-territory correspondences, such as a belief of “my parents are trustworthy” which corresponds to a territory of having trustworthy parents):
- Mechanistically, parents are sampled from humans who we know are somewhat trustworthy in general, and they are conditioned on having children, which probably positively correlates with or at least doesn't negatively correlate with trustworthiness. Relationally, parents tend to care about their children and so are especially trustworthy to their children.
- Logically, trusting your parents tends to produce a lot of beliefs, and evolutionarily, if those beliefs tended to be mistaken (e.g. if your parents would tend to encourage you to do dangerous stuff, rather than warn you about dangerous stuff), then that would be selected against, leading people to not trust their parents. So the fact that they do trust their parents is evidence that trusting one’s parents is (generally) rational.
Evolution targets usefulness rather than correspondence. The two can coincide, but don't have to. Worse still, beliefs that are neutral in usefulness, neither useful nor harmful, will be passed down the generations like junk DNA, because the mechanisms of familial and tribal trust just aren't that rational, and don't involve checking for correspondence-truth. If you look at the wider non-WEIRD world, there is abundant evidence of such arbitrary beliefs. Checking for correspondence truth had to be invented separately: it's called science.
↑ comment by tailcalled · 2023-09-05T19:30:30.254Z · LW(p) · GW(p)
Most such algorithms aren't the Aumann mechanism.
If an algorithm does not act in accordance with Aumann's agreement theorem, then it can be made more effective in producing truth by adding Aumann mechanisms to it.
Infants have no choice but to trust their parents: they can't swap them for other parents and they can't go it alone. So they would trust their parents even if their parents were irrational. So their trust doesn't have to be the outcome of any rational mechanism, least of all Aumann agreement.
The trust is an outcome of evolution, which is a rational process.
Evolution targets usefulness rather than correspondence. The two can coincide, but don't have to.
Evolution is quantitatively a way more rational mechanism than e.g. assigning base pairs randomly. It's not the same as correspondence, but rationality is a matter of degree.
↑ comment by TAG · 2023-09-09T17:18:38.763Z · LW(p) · GW(p)
If an algorithm does not act in accordance with Aumann's agreement theorem, then it can be made more effective in producing truth by adding Aumann mechanisms to it.
That's different to the claim that there's a lot of Aumann about already.
but rationality is a matter of degree
And kind!
↑ comment by tailcalled · 2023-09-10T08:56:42.266Z · LW(p) · GW(p)
I should say, by the way you talk about these things, it sounds like you have a purpose or application in mind of the concepts in question where my definitions don't work. However I don't know what this application is, and so I can't grant you that it holds or give my analysis of it.
If instead of poking holes in my analysis, you described your application and showed how your impression of my analysis gives the wrong results in that application, then I think the conversation could proceed more effectively.
↑ comment by tailcalled · 2023-09-09T19:51:28.992Z · LW(p) · GW(p)
That's different to the claim that there's a lot of Aumann about already.
There is a lot of Aumann already because non-Aumannian algorithms for obtaining information can be improved by making them Aumannian.
↑ comment by TAG · 2023-09-10T17:28:06.672Z · LW(p) · GW(p)
There isn't a lot of Aumann around already, because it requires knowledge of priors, not some vaguely defined trust.
↑ comment by tailcalled · 2023-09-11T10:21:26.327Z · LW(p) · GW(p)
https://www.lesswrong.com/posts/ybKP6e5K7e2o7dSgP/don-t-get-distracted-by-the-boilerplate [LW · GW]
↑ comment by TAG · 2023-09-28T12:26:45.915Z · LW(p) · GW(p)
...doesn't prove that the boilerplate never matters. It just cherry-picks a few cases where it doesn't.
↑ comment by tailcalled · 2023-09-28T13:34:32.752Z · LW(p) · GW(p)
Yes, but I still don't see what practical real-world errors people could make by seeing the things mentioned in my post as examples of Aumann's agreement theorem.
↑ comment by TAG · 2023-09-28T13:52:45.585Z · LW(p) · GW(p)
Maybe believing in God and the Copenhagen Interpretation doesn't lead to real-world errors, either. But rationalism isn't pragmatism.
↑ comment by tailcalled · 2023-09-28T14:39:43.816Z · LW(p) · GW(p)
Believing in God leads to tons of real-world errors I think.
One foundation for rationalism is pragmatism, due to e.g. Dutch book arguments, value of information, etc..
More generally, when deciding what to abstract as basically similar vs fundamentally different, it is usually critical to ask "for what purpose?", since one can of course draw hopelessly blurry or annoyingly fine-grained distinctions if one doesn't have a goal to constrain one's categorization method.
↑ comment by TAG · 2023-09-28T15:20:34.484Z · LW(p) · GW(p)
One foundation for rationalism is pragmatism, due to e.g. Dutch book arguments, value of information, etc..
But that was never fully resolved. And if you are going to adopt "epistemic rationality doesn't matter" as a premise, you need to make it explicit.
↑ comment by tailcalled · 2023-09-28T15:28:55.653Z · LW(p) · GW(p)
I don't think epistemic rationality doesn't matter, but obviously since human brains are much smaller than the universe, our minds cannot distinguish between every distinction that exists, and therefore we need some abstraction. This abstraction is best done in the context of some purpose one cares about, as then one can backchain into what distinctions do vs do not matter.