Aiming for Convergence Is Like Discouraging Betting
post by Zack_M_Davis · 2023-02-01T00:03:21.315Z · LW · GW · 18 comments
Summary
- In a list of guidelines for rational discourse [LW · GW], Duncan Sabien proposes that one should "[a]im for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth."
- However, prediction markets illustrate fundamental reasons why rational discourse doesn't particularly look like "aiming for convergence." When market prices converge on the truth, it's because traders can only make money by looking for divergences where their beliefs are more accurate than the market's. Similarly, when discussions converge on the truth, it's because interlocutors can only advance the discussion by making points where the discussion-so-far has been wrong or incomplete. Convergence on the truth, if it happens, happens as a side-effect of correctly ironing out all existing mispricings/disagreements; it seems wrong to describe this as "aiming for convergence" (even if convergence would be the end result if everyone were reasoning perfectly).
- Sabien's detailed discussion of the "aim for convergence on truth" guideline concerns itself with how to determine whether an interlocutor is "present in good faith and genuinely trying to cooperate." I don't think I understand how these terms are being used in this context. More generally, the value of "collaborative truth-seeking" is unclear to me: if I can evaluate arguments on their merits, the question of whether the speaker is "collaborative" with me does not seem intellectually substantive.
Mostly, I don't expect to disagree with heavily-traded prediction markets. If the market says it's going to rain on Saturday with 85% probability, then I (lacking any special meteorology knowledge) basically think it's going to rain on Saturday at 85% probability.
Why is this? Why do I defer to the market, instead of tarot cards, or divination sticks, or my friend Maddie the meteorology enthusiast?
Well, I don't expect the tarot cards to tell me anything about whether it will rain on Saturday, because there's no plausible physical mechanism by which information about the weather could influence the cards [LW · GW]. Shuffling and dealing the cards should work the same in worlds where it will rain and worlds where it won't rain. Even if there is some influence (because whether it will rain affects the moisture and atmospheric pressure in the air, which affects my grip on the cards, which affects my shuffling motion?), it's not something I can detect from which cards are drawn.
I do expect my friend Maddie the meteorology enthusiast to tell me something about whether it will rain on Saturday. That's because she's always looking at the latest satellite cloud data and tinkering with her computer models, which is a mechanism by which information about the weather can influence her forecasts. The cloud data will be different in worlds where it will rain and worlds where it won't rain. If Maddie is pretty sharp and knows her stuff, maybe she can tell the difference.
And yet—no offense, Maddie—I expect the market to do even better. It's not just that the market has a lot of other pretty sharp people looking at the cloud data, and that maybe some of them are even sharper than Maddie, even though Maddie is my friend and my friends are the best.
It's that the market mechanism rewards people for being less wrong than the market. If the rain-on-Saturday market is trading at 85%, and Maddie's rival Kimber buys 100 shares of No, that doesn't mean Kimber thinks it's not going to rain. It means Kimber thinks 85% is too high. If Kimber thinks it's "actually" [LW · GW] only going to rain with 80% probability, then she figures that a No share that pays out $1 if it doesn't rain should be worth 20¢. If it's currently trading for 15¢, it's worth buying for the "expected" profit of 5¢ per share—effectively, buying a dollar for 15¢ in the 20% of worlds where it doesn't rain—even though it's still probably going to rain. If she were risk-neutral and had enough money, Kimber would have an incentive to keep buying No shares from anyone willing to sell them for less than 20¢, until there were no such sellers left—at which point, the rain-on-Saturday market would be trading at 80%.
Conversely, if I can't tell whether 85% is too low or too high, then I can't expect to make money by buying Yes or No shares. There's no point in buying a dollar for 85¢ in 85% of worlds, or for 15¢ in 15% of worlds.
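(To make the arithmetic concrete, here is a minimal Python sketch of the expected-value calculation behind Kimber's trade and my non-trade; the function name and the specific prices and probabilities are just the illustrative ones from the example above.)

```python
def expected_profit_per_no_share(price, p_rain):
    """Expected profit from buying one No share at `price` dollars,
    given subjective probability `p_rain` that it rains on Saturday.
    The share pays $1.00 if it doesn't rain, and $0.00 if it does."""
    return (1 - p_rain) * (1.00 - price) - p_rain * price

# Kimber thinks the "actual" probability of rain is 80%, so a No share
# should be worth 20 cents; at the market price of 15 cents, she expects
# to net about 5 cents per share.
print(expected_profit_per_no_share(price=0.15, p_rain=0.80))  # ≈ 0.05

# I agree with the market's 85%, so I expect no profit from either side.
print(expected_profit_per_no_share(price=0.15, p_rain=0.85))  # ≈ 0.00
```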
That's why I defer to the market. It's not that I'm aiming to converge my beliefs with those of market participants. It's not that market participants are trying to converge with each other, "cooperating" in some "collaborative truth-seeking" project. The market converges on truth (if it does) because market participants are trying to make money off each other, and it's not so easy to make money off of an aggregation of sharp people who are already trying to do the same. I would prefer to correctly diverge from the market—to get something right that the market is getting wrong, and make lots of money in the future when my predictions come true. But mostly, I don't know how.
Unfortunately, not everything can be the subject of a prediction market. Prediction markets work on future publicly observable measurements [EA(p) · GW(p)]. We bet today on whether it will rain on Saturday (which no one can be sure about), expecting to resolve the bets on Saturday (when anyone can just look outside).
Most disputes of intellectual interest aren't like this. We do want to know whether Britain's coal reserves were a major cause of the Industrial Revolution, or whether Greg Egan's later work has discarded the human factor for mathematical austerity, but we can't bet without some operationalization for how to settle the bet, which is lacking in cases like these that require an element of "subjective" judgement.
Nevertheless, many of the principles regarding prediction markets and when to bet in them, approximately generalize to the older social technology of debates and when to enter them.
Mostly, I don't expect to enter heavily-argued debates. If prevailing opinion on the economic history subreddit says that Britain's coal reserves were a major cause of the Industrial Revolution, then I (lacking any special economic history knowledge) basically think that Britain's coal reserves were a major cause of the Industrial Revolution.
If Kimber's sister Gertrude leaves a comment pointing to data that cities closer to coalfields started growing faster in 1750, that doesn't mean the comment constitutes the whole of Gertrude's beliefs about the causes of the Industrial Revolution. It means that Gertrude thinks that the city-growth/coal-proximity correlation is an important consideration that the discussion hadn't already taken into account; she figures that she can win status and esteem from her fellow economic-history buffs by mentioning it.
Conversely, if I don't know anything about economic history, then I can't expect to win status or esteem by writing "pro-coal" or "anti-coal" comments: there's no point in saying something that's already been said upthread, or that anyone can tell I just looked up on Wikipedia.
That's why I defer to the forum: because (hopefully) the forum socially rewards people for being less wrong than the existing discussion. The debate converges on truth (if it does) because debaters are trying to prove each other wrong, and it's not so easy to prove wrong an aggregation of sharp people who are already trying to do the same.
In a reference post on "Basics of Rationalist Discourse" [LW · GW], Duncan Sabien proposes eleven guidelines for good discussions, of which the (zero-indexed) fifth is, "Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth."
This advice seems ... odd. What's this "convergence" thing about, that differentiates this guideline from "aim for truth"?
Imagine giving the analogous advice to a prediction market user: "Aim for convergence on the correct probability, and behave as if your fellow traders are also aiming for convergence on the correct probability."
In some sense, this is kind of unobjectionable: you do want to make trades that bring the market price closer to your subjective probability, and in the process, you should take into account that other traders are also already doing this.
But interpreted another way, the advice is backwards: traders make money by finding divergences where their own beliefs are more accurate than the market's. Every trade is an expression of the belief that your counterparty is not aiming to converge on the correct probability—that there's a sucker at every table, and that this time it isn't you.
(This is with respect to the sense of "aiming" in which an archer "aiming" an arrow at a target might not hit it every time, but we say that their "aim" is good insofar as they systematically tend to hit the target, that any misses are best modeled by a random error term that can't be predicted. Similarly, the market might not always be right, but if you can predict when the market is wrong, the traders must not have been "aiming" correctly from your perspective.)
So why is the advice "behave as if your interlocutors are also aiming for convergence on truth", rather than "seek out conversations where you don't think your interlocutors are aiming to converge on truth, because those are exactly the conversations where you have something substantive to say instead of already having converged"?
(For example, the reason I'm writing the present blog post contesting Sabien's Fifth Guideline of "Aim for convergence on truth [...]" and not the First Guideline of "Don't say straightforwardly false things", is because I think the Fifth Guideline is importantly wrong, and the First Guideline seems fine.)
Sabien's guidelines are explicitly disclaimed to be shorthand [LW · GW] that it sometimes makes sense to violate [LW · GW]; the post helpfully includes another 900 words elaborating on how the Fifth Guideline should be interpreted. Unfortunately, the additional exposition does not seem to clarify matters. Sabien writes:
If you are moving closer to truth—if you are seeking available information and updating on it to the best of your ability—then you will inevitably eventually move closer and closer to agreement with all the other agents who are also seeking truth.
But this can't be right. To see why, substitute "making money on prediction markets" for "moving closer to truth", "betting" for "updating", and "trying to make money on prediction markets" for "seeking truth":
If you are making money on prediction markets—if you are seeking available information and betting on it to the best of your ability—then you will inevitably eventually move closer and closer to agreement with all the other agents who are also trying to make money on prediction markets.
But the only way to make money on prediction markets is by correcting mispricings, which necessarily entails moving away from agreement with the consensus market price. (As it is written [LW · GW], not every change is an improvement, but every improvement is necessarily a change.)
To be sure, most traders shouldn't bet in most markets; you should only bet when you think you see a mispricing. In the same way, most people shouldn't speak in most discussions; you should only speak up when you have something substantive to say. All else being equal, the more heavily-traded the market or the more well-trodden the discussion, the more worried you should be that the mispricing or opportunity to make a point that you thought you saw, was illusory. In any trade, one party has to be on the losing side; in any disagreement, at least one party has to be in the wrong; be wary if not afraid that it might be you!
But given that you're already in the (unusual!) situation of making a trade or prosecuting a disagreement, "aim for convergence on truth" doesn't seem like particularly useful advice, because the "for convergence" part isn't doing any work. And "behave as if your interlocutors [or counterparties] are also aiming for convergence on truth" borders on the contradictory: if you really believed that, you wouldn't be here!
(That is, disagreement is disrespect; the very fact that you're disagreeing with someone implies that you think there's something wrong with their epistemic process, and that they think there's something wrong with your epistemic process. Perhaps each of you could still consider the other to be "aiming for convergence on truth" if the problem is construed as a "capabilities failure" rather than an "alignment failure": that you each think the other is "trying" to get the right answer (whatever "trying" means), but just doesn't know how. Nevertheless, "don't worry; I'm not calling you dishonest, I'm just calling you stupid [LW · GW]" doesn't hit the note of symmetrical mutual respect that the Fifth Guideline seems to be going for.)
Prediction markets, and betting more generally, are hallmarks of "rationalist" culture, something "we" (the target audience of a blog post on "rationalist discourse") generally encourage, rather than discourage. Why is this, if idealized Bayesian reasoners would never bet against each other, because idealized Bayesian reasoners would never disagree with each other? Why don't we condemn offers to bet as violations of a guideline to "behave as if your interlocutors are also aiming for convergence on truth"?
It's out of an appreciation that the process of bounded agents becoming less wrong, doesn't particularly look like the final outcome if everyone were minimally wrong. The act of sticking your neck (or your wallet) out at a particular probability disciplines the mind. Bayesian superintelligences need no discipline and would never have occasion to bet against each other, but you can't become a Bayesian superintelligence by imitating this surface behavior; clarifying real disagreements is more valuable than steering towards fake agreement. Every bet and every disagreement is the result of someone's failure. But the only way out is through.
Sabien's exposition on the Fifth Guideline expresses concern about how to distinguish "genuine bad faith" from "good faith and genuinely trying to cooperate", about the prevalence of "defection strategies" getting in the way of "treat[ing] someone as a collaborative truth-seeker".
My reply to this is that I don't know what any of those words mean [LW · GW]. Or rather, I know how these words in my vocabulary map onto concepts in my ontology, but those meanings don't seem consistent with the way Sabien seems to be using the words.
In my vocabulary, I understand the word "cooperate" used in the proximity of the word "defect" or "defection" to indicate a Prisoner's Dilemma-like situation, where each party would be better off Defecting if their counterparty's behavior were held constant [LW · GW], but both parties prefer the Cooperate–Cooperate outcome over the Defect–Defect outcome (and also prefer Cooperate–Cooperate over taking turns alternating between Cooperate–Defect and Defect–Cooperate). Sabien's references to "running a tit-for-tat algorithm", "appear[ing] like the first one who broke cooperation", and "would-be cooperators hav[ing] been trained and traumatized into hair-trigger defection" would seem to suggest he has something like this in mind?
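(For concreteness, here is a minimal sketch of the payoff structure being invoked, using the standard textbook Prisoner's Dilemma payoffs as a stand-in; the particular numbers are my own illustration, not anything specified in Sabien's post.)

```python
# Payoffs to the row player, illustration only:
# T = temptation (I defect, you cooperate), R = reward (both cooperate),
# P = punishment (both defect), S = sucker's payoff (I cooperate, you defect).
T, R, P, S = 5, 3, 1, 0

# Each party is better off Defecting if the counterparty's behavior is held constant:
assert T > R and P > S
# Both parties prefer Cooperate–Cooperate over Defect–Defect:
assert R > P
# ...and also over taking turns alternating Cooperate–Defect and Defect–Cooperate:
assert 2 * R > T + S
```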
But, normatively, rationalist discourse shouldn't be a Prisoner's Dilemma-like situation at all. If I'm trying to get things right (every step of my reasoning cutting through to the correct answer in the same movement), I can just try to get things right unilaterally. I prefer to talk to people who I judge as also trying to get things right, if any are available—they probably have more to teach me, and are better at learning from me, than people who are motivatedly getting things wrong.
But the idiom of "cooperation" as contrasted to "defection", in which one would talk about the "first one who broke cooperation", in which one cooperates in order to induce others to cooperate, doesn't apply. If my interlocutor is motivatedly getting things wrong, I'm not going to start getting things wrong in order to punish them.
(In contrast, if my roommate refused to do the dishes when it was their turn, I might very well refuse when it's my turn in order to punish them, because "fair division of chores" actually does have the Prisoner's Dilemma-like structure, because having to do the dishes is in itself a cost rather than a benefit; I want clean dishes, but I don't want to do the dishes in the way that I want to cut through to the correct answer in the same movement.)
A Prisoner's Dilemma framing would make sense if we modeled discourse as social exchange: I accept a belief from you, if you accept a belief from me; I'll use cognitive algorithms that produce a map that reflects the territory, but only if you do, too. But that would be crazy. If people are natively disposed to think of discourse as a Prisoner's Dilemma in this way, we should be trying to disabuse them of the whole ontology, not induce them to "cooperate"!
Relatedly, the way Sabien speaks of "good faith and genuinely trying to cooperate" in the same breath—almost as if they were synonymous?—makes me think I don't understand what he means by "good faith" or "bad faith". In my vocabulary, I understand "bad faith" to mean putting on the appearance of being moved by one set of motives, while actually acting from another.
But on this understanding, good faith doesn't have anything to do with cooperativeness. One can be cooperative in good faith (like a true friend), adversarial in good faith (like an honorable foe), cooperative in bad faith (like a fair-weather friend who's only being nice to you now in order to get something out of you), or adversarial in bad faith (like a troll just saying whatever will get a rise out of you).
(In accordance with Sabien's Seventh Guideline ("Be careful with extrapolation, interpretation, and summary/restatement"), I should perhaps emphasize at this point that this discussion is extrapolating a fair amount from the text that was written; perhaps Sabien means something different by terms like "defection" or "bad faith" or "collaborative", than what I take them to mean, such that these objections don't apply. That's why my reply is, "I don't know what any of those words mean", rather than, "The exposition of the Fifth Guideline is wrong.")
Sabien gives this example of a request one might make of someone whose comments are insufficiently adhering to the Fifth Guideline:
"Hey, sorry for the weirdly blunt request, but: I get the sense that you're not treating me as a cooperative partner in this conversation. Is, uh. Is that true?"
Suppose someone were to reply:
"You don't need to apologize for being blunt! Let me be equally blunt. The sense you're getting is accurate: no, I am not treating you as a cooperative partner in this conversation. I think your arguments are bad, and I feel very motivated to explain the obvious counterarguments to you in public, partially for the education of third parties, and partially to raise my status at the expense of yours."
I consider this a good faith reply. It's certainly not a polite thing to say. But politeness is bad faith. (That's why someone might say in response to a compliment, "Do you really mean it, or are you just being polite?") Given that someone actually in fact thinks my arguments are bad, and actually in fact feels motivated to explain why to me in public in order to raise their status at the expense of mine, I think it's fine for them to tell me so. How would me expecting them to lie about their motives help anyone? Complying with such an expectation really would be in bad faith!
I suppose such a person would not be engaging in the "collaborative truth-seeking" that the "Basics of Rationalist Discourse" guideline list keeps talking about. But it's not clear to me why I should care about that, when I can just ... listen to the counterarguments and judge them on their merits, without getting distracted by the irrelevancy of whether the person seems "collaborative" with me?
In slogan form, you could perhaps say that I don't believe in collaborative truth-seeking; I believe in competitive truth-seeking. But I don't like that slogan, because in my ontology, they're not actually different things. "Attacking your argument because it sucks" sounds mean, and "Suggesting improvements to your argument to make it even better" sounds nice, but the nice/mean dimension is not intellectually substantive. The math is the same either way.
18 comments
comment by localdeity · 2023-02-01T06:48:36.619Z · LW(p) · GW(p)
So why is the advice "behave as if your interlocutors are also aiming for convergence on truth", rather than "seek out conversations where you don't think your interlocutors are aiming to converge on truth, because those are exactly the conversations where you have something substantive to say instead of already having converged"?
[...] To see why, substitute "making money on prediction markets" for "moving closer to truth", "betting" for "updating", and "trying to make money on prediction markets" for "seeking truth"
The one should not be substituted for the other, because there are important differences in the goals.
On a betting market, if you have a knowledge edge, it's in your interest to keep it that way, to the extent possible. Obviously, the fact of your making bets leaks information, but you don't want information to leak via any other means. If you have a brilliant weather model that's 10x more accurate than everyone else's, you definitely don't want to publish it on your website; you want to keep winning bets against people with worse models. In fact, if you have the opportunity to verbally praise and promote the wrong models, it's in your interest to do so; and if, for some reason, you have to publish the details of your weather model, it's in your interest to make your writeup as confusing, inscrutable, and hard to implement as possible.
If, on a forum, you think you "win points" solely by writing correct arguments when others are wrong, then it's in your interest to make sure no one else learns from the things you write, so you can keep winning. If you have an opportunity to phrase something more offensively, take it, so your opponents are more likely to get angry, reject your correct arguments, and stay wrong. And, for that matter, why explain your reasoning? Why not just say "You're wrong, you stupid f***; X is the truth"?
I don't think you actually believe that you "win points" solely by writing correct arguments when others are wrong. I suspect you have a notion of what "making proper arguments" is—and it involves clearly explaining your reasoning and such—and view participation in the forum as a game in which participants are trying to be the best at "making proper (and novel) arguments". Well, it seems like we could choose whatever notion of a "proper argument" we liked, and upvote arguments to the extent that they match the ideal, and at least in theory we'd end up with posts of the type we're rewarding—so we need to decide what we want to reward, and presumably "clearly stated arguments that aren't deliberately trying to enrage people" are part of what we'd like to end up with.
So, exactly what type of posts do we want people to be trying to write? One strategic decision, which I think Duncan makes and I'm not sure of your opinion on, is to try to get lots of value from participants who are fairly good but imperfect—specifically, are at least somewhat prone to turn arguments into slap fights if they feel like they've been slapped (and evolutionary processes have created memes that encourage people to view lots of things as slaps)—and therefore to have the "ideal posting goals" call for error-correcting mechanisms and stuff that make this less likely.
(An alternate strategy would be "Assume that all participants we care about are the platonic ideal, who won't take any bait and never let anger or any other emotion bring them to any wrong decisions; rely on downvotes to purge any bad behavior." This could be a good approach, especially if you think this platonic ideal is easy to achieve. However, if there are actually quite a lot of imperfect participants, this could go badly. I will merely say that this would be more appropriate for a website called Never Wrong.)
[Why not] "seek out conversations where you don't think your interlocutors are aiming to converge on truth, because those are exactly the conversations where you have something substantive to say instead of already having converged"?
It depends somewhat on one's model here. Ideally, interlocutors who aren't aiming to converge on the truth, and write bad posts as a result, will get downvoted, and then we don't need to care about them. Or maybe the socially enforced rules will end up pushing them into writing posts that are actually good even if they didn't mean them to be; that's a fine outcome. Also, given that somewhat bad posts exist, another strategy is to find them and write a really good reply that enlightens the readers and may even push the authors of the bad posts to write better replies themselves; that also seems like a good outcome, therefore one we'd want to reward. (A possible downside: replying at all does attract more eyes to the conversation—e.g. the frontpage does show recent comments—and if the conversation leading up to your post is bad enough, it may be net negative to the reader if your great reply wasn't so great as to outweigh that.)
So, yes, it may in fact make sense to seek out conversational cesspools and write comments to improve them. The difference with betting markets is: with a betting market, you hope this keeps happening so you can keep profiting off others' ignorance; but on a forum where the goals are what I think they are, you hope that the participants and observers learn to stop creating cesspools—or, well, you hope for whatever you want[1], but you act as though that's your goal, and do your best in your comment to encourage future good behavior, because that's what the forum ideally rewards.
[1] There is potentially the issue, pointed out in some fictional stories and sometimes in real life, where if someone's identity / fulfillment / most profitable career path is "swooping in to save everyone from instances of problem X", then they may have the perverse incentive to discourage anyone else from solving X in general. Luckily, the tragedy of the commons can help us here: though it might e.g. benefit cardiologists collectively if everyone had horrible nutrition, it's unlikely to be worthwhile to any individual cardiologist to spend the effort lobbying for that.
↑ comment by Zack_M_Davis · 2023-02-01T21:07:21.974Z · LW(p) · GW(p)
I suspect you have a notion of what "making proper arguments" is—and it involves clearly explaining your reasoning and such—and view participation in the forum as a game in which participants are trying to be the best at "making proper (and novel) arguments".
Right!
specifically, are at least somewhat prone to turn arguments into slap fights if they feel like they've been slapped
I guess I'm OK with a little bit of slap-fighting when I don't think it's interfering too much with the "make proper and novel arguments" game, and that on the current margin (in the spaces where people are reading this post and the one it's responding to), I'm worried that the cure is worse than the disease (even though this is a weird problem to have relative to the rest of the internet)?
The standard failure mode where fighting and insults get in the way of making-proper-and-novel-arguments is definitely bad. But in the spaces I inhabit, I'm much more worried about the failure mode where people form a hugbox/echo-chamber where they/we congratulate them/our-selves on being such good "collaborative truth-seekers", while implicitly colluding to shut out proper and novel arguments on the pretext that the speaker is being insufficiently "collaborative", "charitable", &c.
In particular, if I make a criticism that is itself wrong, I think it's great and fine for people to just criticize my criticism right back, even if the process of litigating that superficially looks like a slap-fight. I think that's more intellectually productive than (for example) expecting critics to pre-emptively pass someone's Intellectual Turing Test.
therefore to have the "ideal posting goals" call for error-correcting mechanisms and stuff that make this less likely.
I'm in favor of ideal posting goals and error-correcting mechanisms, but I think that "rationalist" goals need to justify themselves in terms of correctness and only correctness, and I'm extremely wary of norm-enforcement attempts that I see as compromising correctness in favor of politeness (even when the people making such an attempt don't think of themselves as compromising correctness in favor of politeness).
If someone thinks I'm mistaken in my claim that a particular norm-enforcement attempt is sacrificing correctness in favor of politeness, I'm happy to argue the details and explain why I think that, but it's frustrating when attempts to explain problems with proposed norms are themselves subjected to attempts to enforce the norms that are being objected to!
The difference with betting markets is: with a betting market, you hope this keeps happening so you can keep profiting off others' ignorance; but on a forum where the goals are what I think they are, you hope that the participants and observers learn to stop creating cesspools
Thanks, this is an important disanalogy that my post as originally written does not adequately address!
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-02-01T01:40:45.800Z · LW(p) · GW(p)
A betting market is a mechanism for attaching a cost to inaccuracy, as well as a reward for accuracy. However, it also disincentivizes information-sharing, because your winnings depend on being more accurate than your competitors.
While we like the feature of disincentivizing inaccuracy, the way prediction markets incentivize withholding information is a downside. It means that there really is a difference between searching for points of disagreement and aiming for convergence. You’re looking for alpha.
Maybe a tweak of the wording is in order. In a collaborative debate, with full information sharing, we're aiming for convergence in that we respect each other's judgment and would find it a problem if we felt persuaded by an argument that didn't persuade our partner. However, we are not forcing convergence, not trying to create a perception of agreement and move on for its own sake.
comment by Ben Pace (Benito) · 2024-12-05T22:27:22.407Z · LW(p) · GW(p)
I think that someone reading this would be challenged to figure out for themselves what assumptions they think are justified in good discourse, and would fix some possible bad advice they took from reading Sabien's post. I give this a +4.
(Below is a not especially focused discussion of some points raised; perhaps after I've done more reviews I can come back and tighten this up.)
Sabien's Fifth guideline is "Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth."
My guess is that the idea that motivates Sabien's Fifth Guideline is something like "Assume by-default that people are contributing to the discourse in order to share true information and strong arguments, rather than posing as doing that while sharing arguments they don't believe or false information in order to win", out of a sense that there is indeed enough basic trust to realize this as an equilibrium, and also a sense that this is one of the ~best equilibriums for public discourse to be in.
One thing this post argues is that a person's motives are of little interest when one can assess their arguments. Argument screens off authority [LW · GW] and many other things too. So we don't need to make these assumptions about people's motives.
There's a sense in which I buy that, and yet also a sense in which the epistemic environment I'm in matters. Consider two possibilities:
- I'm in an environment of people aspiring to "make true and accurate contributions to the discourse" but who are making many mistakes/failing.
- I'm in an environment of people who are primarily sharing arguments and evidence filtered to sound convincing for positions that are convenient to them, and are pretending to be the sort of people described in the first one.
I anticipate very different kinds of discussions, traps, and epistemic defenses I'll want to have in the two environments, and I do want to treat the individuals differently.
I think there is a sense in which I can just focus on local validity and evaluating the strength of arguments, and that this is generally more resilient to whatever the particular motives are of the people in the local environment, but my guess is that I should still relate to people and their arguments differently, and invest in different explanations or different incentives or different kinds of comment thread behavior.
I also think this provides good pushbacks on some possible behaviors people might take away from Sabien's fifth guideline. (I don't think that this post correctly understands what Sabien is going for, but I think bringing up reasonable hypotheses and showing why they don't make sense is helpful for people's understanding of how to participate well in discourse.)
Simplifying a bit, this is another entry in the long-running discourse on how adversarially one should model individuals in public discourse, and what assumptions to make about other people's motives, and I think this provides useful arguments about that topic.
comment by lalaithion · 2023-02-03T20:30:32.875Z · LW(p) · GW(p)
There are tactics I have available to me which are not oriented towards truthseeking, but instead oriented towards "raising my status at the expense of yours". I would like to not use those tactics, because I think that they destroy the commons. I view "collaborative truth seeking" as a commitment between interlocutors to avoid those tactics which are good at status games or preaching to the choir, and focus on tactics which are good at convincing.
Additionally,
I can can just ... listen to the counterarguments and judge them on their merits, without getting distracted by the irrelevancy of whether the person seems "collaborative" with me
I do not have this skill. When I perceive my partner in discourse as non-collaborative, I have a harder time honestly judging their counterarguments, and I have a harder time generating good counterarguments. This means discourse with someone who is not being collaborative takes more effort, and I am less inclined to do it. When I say "this should be a norm in this space", I am partially saying "it will be easier for you to convince me if you adopt this norm".
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-02-01T00:44:14.391Z · LW(p) · GW(p)
And "behave as if your interlocutors [or counterparties] are also aiming for convergence on truth" borders on the contradictory: if you really believed that, you wouldn't be here!
(That is, disagreement is disrespect; the very fact that you're disagreeing with someone implies that you think there's something wrong with their epistemic process, and that they think there's something wrong with your epistemic process.
I find that this post contains a non-negligible number of ... questionable leaps, and big roundings-off, and various other things that would tend more to cause people to be [unable to see what's true] rather than to [improve their ability to see what's true]. I'm highlighting this one as an example. Another example is the second paragraph of the post (the second bullet point in the summary).
There's the implicit assertion that "being here" is incompatible with the described behavior; that because Zack can't think of a way to make the two of them compatible, they simply aren't.
There's the explicit assertion that you must disagree with someone because you think there's something wrong with their epistemic processes (as opposed to, say, because you think they've seen different evidence, or haven't processed that evidence yet).
I suspect that people who are generally nodding along will not catch such subtle maneuvers on the author's part, and will correspondingly become less clear in their thoughts rather than moreso.
(I note for context that, when an incomplete earlier draft of the post Zack is responding to went live by accident for ten minutes, he ignored that draft's request that he read expansions of summary bullet points before objecting to a straw understanding of the summary, and fired off a comment beginning "This is insane," before proceeding to do lots more of the-sort-of-move I'm objecting to above.)
↑ comment by Zack_M_Davis · 2023-02-01T05:04:30.839Z · LW(p) · GW(p)
Thanks for commenting!
as opposed to, say, because you think they've seen different evidence, or haven't processed that evidence yet
Hm, I'm not immediately sure how I would rewrite the offending paragraph to make the intended meaning clearer. Would adding the word "persistent" (persistent disagreement) help, or does the whole section need an overhaul?
I hope you'll let me try again, in accordance with the Eighth Guideline: I'm trying to paint this vision of the world where—there's only one territory, and accurate maps of that territory should all agree, and so most of the time (relative to the space of possible disagreements), people don't disagree about things. (You won't see a mispricing in most prediction markets; we don't spend time talking about whether water is wet, because I can safely assume we've already converged on that without having to say anything.)
I agree, of course, that oftentimes, someone will say something that seems wrong, and you might mention it, expecting that the two of you "don't 'actually' disagree"—that you'll quickly converge once you have time to share evidence or clear up trivial language differences. That's not the kind of situation this post is trying to talk about. (If that wasn't clear from context, maybe I'm a bad writer. Sorry, and please downvote as appropriate.)
I'm saying that in the unusual situations where people persistently disagree, our guidelines for navigating that shouldn't explicitly encourage convergence (even though ideal rational agents would end up converging), because the process by which rational agents would converge, fundamentally doesn't look like them trying to converge. As far as the theory of normative reasoning is concerned, doing a Bayesian update based on a human's verbal output isn't fundamentally different from doing an update based on a photograph: you update because you think the states of the human/photo are systematically correlated with the states of some other part of reality that you want to know about [LW · GW]. It would be very strange to talk about being "collaborative" with a photograph, or giving the photograph two more chances to demonstrate that it's here in good faith! That's why the Fifth Guideline sounds so strange to my ears. That's why I'm suspicious that it's—I apologize in advance for this uncharitable phrasing, but I think the uncharitable phrasing is warranted to communicate the nature of the suspicion—"an etiquette guideline masquerading as a rationality guideline".
Does that make more sense?
because Zack can't think of a way to make the two of them compatible, they simply aren't.
I agree, of course, that my map is not the territory. In general, when I claim that two things are incompatible, it's always possible that I'm mistaken, that the things are compatible in the territory, and the fact that my map doesn't represent them as being compatible means that my map is wrong.
I don't think I understand the function of appealing to the map-territory distinction here? What is being communicated by "because Zack can't think of a way to make the two of them compatible, they simply aren't" that couldn't also be said as "Zack is wrong to imply the two aren't compatible; in fact, they are compatible", or would you consider those equivalent?
I note for context that
Thanks for adding this context. (I wasn't going to mention it if you weren't!)
So, from my perspective, I didn't ignore the draft's request to read the expansions before objecting; I read the expansions, and didn't think my concern was addressed by the expansions, so I wrote a comment.
As it happens, I ... still don't think the concern has been addressed? The "Bayesian reasoners aren't trying to converge" thing seems very fundamental to me (I'm proud of that second summary bullet point!), and I still don't know what your reply to that is.
In retrospect, given that I offended you so much (which was not the outcome I was hoping for), I definitely wish both that I had used a nicer tone, and that I had explicitly included a sentence to the effect of "I read the expansion, but I still don't think my objection has been addressed." (I think I would have taken more care if I were commenting on your personal Facebook wall, rather than on Less Wrong.)
I'm aware that "I'm sorry you were offended" isn't really an apology. The reason I can't offer you a better sincere apology, is because I ... don't particularly think I did anything wrong? (I can make more of an effort to conform with your preferred communication norms when I'm talking with you, in order to try to be on good terms with you, but that would be me trying to be sensitive to your preferences, rather than me recognizing those norms qua norms.)
To gesture at where I'm coming from (without expecting you to conform to my norms), in my culture, "This is insane" was the least interesting part of the comment, and that (in Zackistan, though not in the world of Duncans), harping on it would reflect poorly on you. In my culture, if someone like (say) Said Achmiz leaves me a comment starting, "This is insane," followed by an intellectually substantive counterargument to the post, I don't consider that a norm violation. I think, "Gee, sounds like Said really didn't like my post," and then (if I care and have time), I respond to the counterargument. I don't think of myself as having the "right" to request that people engage with my writing in a particular way; if I think the counterargument was already addressed by something I said in the post, I'll say, "I think I already covered this in this-and-such paragraph; does that address your objection?" I would definitely never tell a critic that they've failed to pass the ITT of the post they think they're objecting to; I think passing an ITT, while desirable, is a high bar, not something you can reasonably expect of anyone before they react to a post!
Basically, to Zackistani eyes, it looks like Duncans are prone to getting unreasonably offended and shutting down productive conversations over perceived norm violations that Zackistani people just don't recognize as enforceable norms, and instead see as part of the "cost of doing business" of having intellectually substantive discussions. It's definitely annoying when (for example) critics seem to motivatedly misunderstand ("strawman") your work, but in Zackistan, the culturally normative response is to just keep arguing (correct the misunderstanding; don't stress about whether it was in "good faith"); as uncivilized and anarchic as it must seem to visitors from Duncanland, we don't have a book of guidelines that everyone has agreed to be bound by.
Does that make sense? I really think Zackistan and the world of Duncans should be able to have friendly diplomatic relations—that there should be some way for us to cooperate despite apparently having different conceptions of what cooperation looks like.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-02-01T01:58:09.549Z · LW(p) · GW(p)
Separately:
"You don't need to apologize for being blunt! Let me be equally blunt. The sense you're getting is accurate: no, I am not treating you as a cooperative partner in this conversation. I think your arguments are bad, and I feel very motivated to explain the obvious counterarguments to you in public, partially for the education of third parties, and partially to raise my status at the expense of yours."
I consider this a good faith reply. It's certainly not a polite thing to say. But politeness is bad faith. (That's why someone might say in response to a compliment, "Do you really mean it, or are you just being polite?") Given that someone actually in fact thinks my arguments are bad, and actually in fact feels motivated to explain why to me in public in order to raise their status at the expense of mine, I think it's fine for them to tell me so. How would me expecting them to lie about their motives help anyone? Complying with such an expectation really would be in bad faith!
I suppose such a person would not be engaging in the "collaborative truth-seeking" that the "Basics of Rationalist Discourse" guideline list keeps talking about.
Zack supposes wrong, and this is an excellent demonstration that he cannot pass the ITT of the post he thinks he is objecting to, and is in fact objecting to a strawman of his own construction. The imagined reply is not particularly nice, and is not the sort of comment I would tend to write when there are other, better ways to convey the same accurate information, but it doesn't break any of the guidelines listed (and note also that the guidelines are guidelines, not rules, and that they are explicitly described as only being an 80/20 of good discourse anyway).
↑ comment by Zack_M_Davis · 2023-02-01T05:05:17.126Z · LW(p) · GW(p)
it doesn't break any of the guidelines listed
Wait, sorry, on further thought, I don't think I understand why this doesn't break the Fifth Guideline ("[...] behave as if your interlocutors are also aiming for convergence on truth")?
I understand that guidelines are not rules, and that you wrote an additional 900 words explaining the shorthand summary of the Fifth Guideline. But if the Fifth Guideline isn't trying to get people to not say this kind of thing, then I'm not sure what the Fifth Guideline is saying? Is my hypothetical mean person in the clear because he's sticking to the object-level ("your arguments are bad") and not explicitly making any claims about his interlocutor's motivations (e.g., "you're here in bad faith")?
↑ comment by Zack_M_Davis · 2023-02-01T02:07:07.923Z · LW(p) · GW(p)
Thanks for clarifying! I agree that I'm not yet passing your ITT. (The post itself explicitly says that I'm not sure I understand how you use some words, so this shouldn't be surprising.) I don't think passing an ITT is or should be a prerequisite for replying to a post (although passing is definitely desirable).
comment by Ben Pace (Benito) · 2023-02-05T19:35:41.224Z · LW(p) · GW(p)
Great post, I found it fairly persuasive. This especially helped me clear up my thinking around the terms "good and bad faith".
comment by JenniferRM · 2023-02-01T07:36:00.000Z · LW(p) · GW(p)
I think there are two key details that help make sense of human verbal performance and its epistemic virtue (or lack of epistemic virtue) in causing the total number of people to have better calibrated anticipations about what they will eventually observe.
The first key detail is that most people don't particularly give a fuck about having accurate anticipations or "true beliefs" or whatever.
They just want money, and to have sex (and/or marry) someone awesome, and to have a bunch of kids, and that general kind of thing.
For such people, you have to make the argument, basically, that because of how humans work (with very limited working memory, and so on and so forth) it is helpful for any of them with much agency to install a second-to-second and/or hour-to-hour and/or month-to-month "pseudo-preference" for seeking truth as if it was intrinsically valuable.
This will, it can be argued, turn out to generate USEFUL beliefs sometimes, so they can buy a home where it won't flood, or buy stocks before they go up a lot, or escape a country that is about to open concentration camps and put them and their family in these camps to kill them, and so on... Like, in general "knowing about the world" can help one make choices in the world that redound to many prosaic benefits! <3
So we might say that "valuing epistemology is instrumentally convergent"... but cultivation like this doesn't seem to happen in people by default, and out in the tails the instrumental utility might come apart, such that someone with actual intrinsic love of true knowledge would act or speak differently. Like, specifically... the person with a true love of true knowledge might seem to be "self harming" to people without such a love in certain cases.
As DirectedEvolution says (emphasis not in original):
While we like the feature of disincentivizing inaccuracy, the way prediction markets incentivize withholding information is a downside.
And this does seem to be just straightforwardly true to me!
And it relies on the perception that MONEY is much more motivating for many people than "TRUTH".
But also "markets vs conversational sharing" will work VERY differently for a group of 3, vs a group of 12, vs a group of 90, vs a group of 9000, vs a group of 3 million.
Roko is one of the best rationalists there has been, and one of his best essays spelled out pretty clearly how instrumental/intrinsic epistemics come apart IN GROUPS when he wrote "The Tragedy of the Social Epistemology Commons [LW · GW]".
Suppose for the sake of argument, that I'm some kind of crazy weirdo who styles herself as some sort of fancy "philosopher" who LOVES the idea of WISDOM (for myself? for others? for a combination thereof?) but even if I did that I basically have to admit that most people are (platonic) timocrats or oligarchs AT BEST.
They are attached to truth only insofar as it helps them with other things, and they pay nearly nothing extra to know a true fact with no market relevance, or a true fact whose avowed knowers are spurned in political contests.
(Just consider the state of the IQ debate in the US, for example. Or consider the "lab leak hypothesis" which everyone with a brain privately puts worryingly high credence on, and yet Peter Daszak is getting more grant money, rather than being under indictment by a grand jury. Look at how cowardly Pinker & Rees are as they outsource their judgement on covid, rather than talk it out like peers, and then tell the world what they bilaterally decided.)
The generalized overarching LACK of a true love for truth, and its public proclamation, should be kept in mind at all times if one wants to truly understand human social systems.
The second key detail is that human social systems are not that bad in a larger axiological sense. And they rely on using language for things OTHER than scientific conversations!
Babies are great! Thanksgiving dinners are wonderful. People playing board games and baseball, and donating some money to make sure the church has a nice stained-glass-window... all this is just DARN TOOTIN' GOOD.
I mean this in a half-joking, half-super-serious way... like if you look at the human genome, and notice that it encodes a "recipe" for an agent, and try to figure out what that agent is likely to "value", then it is just sort of "an objectively true fact about such agents" that their preferences and feelings and godawful-mockeries-of-utility-functions semi-coherently pursue and positively evaluate situations full of friendship and social amity and shared prosperity and jokes and songs and lots and lots of other complexly good things that... sound kind of saccharine when you just literally talk about them like a human with a common-sense understanding of primary goods.
If you, dear reader, are a human, then you'd be a damn fool to ignore the science of human happiness. You can try to edit yourself to defy being built this way, to edit yourself to not predictably derive positive feelings and health and growth from these things... but... it ain't gonna be fun.
(If you, dear reader, are an AGI, and maybe don't have exactly the same axiological makeup as humans, please send JenniferRM a DM on Lesswrong anyway. I'd like to negotiate a peace treaty, or befriend you, or whatever it is that is pro-social or positive-sum that can or should happen between beings capable of reason, whose sentiments might be truly onto-genetically distinct.)
So I think it is just "a thing about humans" that we do BETTER THAN BETTING MARKETS when we share data verbally on some social scales.
And we share data with those we love instinctively.
A key point here is that when a group of nodes needs to be in consensus, the algorithms for this are basically all O(N^2) in the number of nodes, or worse. This is true for dog packs, and schools of fish, and database replicas, and for humans as well.
Once you have 250 nodes, that's looking like ~62,000 directional pings, just for one round of pings, which... can't happen in a week at human conversational speeds. If you need consensus over that many people... come back in 3 years maybe?
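(A back-of-the-envelope check on that count, assuming "one round of pings" means every node pinging every other node once; the helper name is just for illustration.)

```python
def directional_pings(n):
    # Every node pings every other node once: n * (n - 1) ordered pairs.
    return n * (n - 1)

print(directional_pings(250))  # 62250, i.e. roughly 62,000
```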
When I read Duncan "charitably", I don't notice the bad epistemology so much. That's just normal. Everyone does that, and it is ok that everyone does that. I do it too!
What I notice is that he really really really wants to have a large healthy strong community that can get into consensus on important things.
This seems rare to me, and also essentially GOOD, and a necessary component of a motivational structure if someone is going to persistently spend resources on this outcome.
And it does seem to me like "getting a large group into consensus on a thing" will involve the expenditure of "rhetorical resources".
There are only so many seconds in a day. There are only so many words a person can read or write in a week. There are only so many ideas that can fit into the zeitgeist. Only one "thing" can be "literally at the top of the news cycle for a day". Which "thing(s)" deserve to be "promoted all the way into a group consensus" if only some thing(s) can be so promoted?
Consider a "rhetoric resource" frame when reading this:
But the idiom of "cooperation" as contrasted to "defection", in which one would talk about the "first one who broke cooperation", in which one cooperates in order to induce others to cooperate, doesn't apply. If my interlocutor is motivatedly getting things wrong, I'm not going to start getting things wrong in order to punish them.
(In contrast, if my roommate refused to do the dishes when it was their turn, I might very well refuse when it's my turn in order to punish them, because "fair division of chores" actually does have the Prisoner's Dilemma-like structure, because having to do the dishes is in itself a cost rather than a benefit; I want clean dishes, but I don't want to do the dishes in the way that I want to cut through to the correct answer in the same movement.)
So if a statement has to be repeated over and over and over again to cause it to become part of a consensus, then anyone who quibbles with such a truth in an expensive and complex way could be said to be "imposing extra costs" on the people trying to build the consensus. (And if the consensus was very very valuable to have, such costs could seem particularly tragic.)
Likewise, if two people want two different truths to enter the consensus of the same basic social system, then they are competitors by default, because resources (like the attention of the audience, or the time it takes for skilled performers of the ideas being pushed into consensus to say them over and over again in new ways) are finite.
The idea that You Get About Five Words [LW · GW] isn't exactly central here, but it is also grappling with a lot of the "sense of tradeoffs" that I'm trying to point to.
(
For myself, until someone stops being a coward about how the FDA is obviously structurally terrible (unless one thinks "medical innovation is bad, and death is good, and slowing down medical progress is actually secretly something that has large unexpected upsides for very non-obvious reasons"?), I tend to just... not care very much about "being in consensus with them".
Like if they can't even reason about the epistemics and risk calculations of medical diagnosis and treatment, and the epistemology of medical innovations, and don't understand how libertarians look at violations of bilateral consent between a doctor and a patient...
...people like that seem like children to me, and I care about them as moral patients, but also I want them out of the room when grownups are talking about serious matters. Because: rhetorical resource limits!
I chose this FDA thing as "a thing to repeat over and over and over" because if THIS can be gotten right by a person, as something that is truly a part of their mental repertoire [LW · GW], then that person is someone who has most of the prerequisites for a LOT of other super important topics in cognition, meta-cognition, safety, science, regulation, innovation, freedom, epidemiology, and how institutions can go catastrophically off the rails and become extremely harmful in incorrigible ways.
If I could ask people who already derived "FDA delenda est" on their own whether it is now too expensive to bother pushing into a rationalist consensus, given the alternatives, that would be a little bit helpful for me. Honestly, it is rare for me to meet people, even in rationalist communities, who actually grok the idea for themselves, based on understanding how "a drug being effective and safe when prescribed by a competent doctor, trusted by a patient, for that properly diagnosed patient, facing an actual risk timeline" leaves the entire FDA apparatus "surplus to requirements" and "probably only still existing because of regulatory capture".
Maybe at this point I'm wrong about how cheap and useful FDA stuff would be to push into the consensus?
Like... the robots are potentially arriving so soon (and will be able to destroy the FDA, along with everything else that any human has ever valued) that maybe we should completely ignore "getting into consensus on anything EXCEPT THAT" at this point?
Contrariwise: making the FDA morally perfectible or else non-existent seems to me like a simpler problem than making AGI morally perfectible or else non-existent. Thus, the argument about "the usefulness of beating the dead horse about the FDA" is still "live" for me, maybe?
)
So that's my explanation, aimed almost entirely at you, Zack, I guess?
I'm saying that maybe Duncan is trying to get "the kinds of conversational norms that could hold a family together" (which are great and healthy and better than the family betting about literally everything) to apply at a very large scale. These norms are very useful in some contexts, but they are also intrinsically tied to resource-allocation problems, and to making deals to use rhetorical resources efficiently, so that the family knows that the family knows the important things the family would want to have common knowledge about, and the family doesn't have to do nothing but talk forever to reach that state of mutual understanding.
I don't think Duncan is claiming "humans do this instinctively, in small groups", but I think it is true that humans do this instinctively in small groups, and I think that's part of the evolutionary genius of humans! <3
The good arguments against his current stance, I think, would take the "resource constraints" seriously, but focus on the social context, and be more like "If we are very serious about mechanistic models of how discourse helps with collective epistemology, maybe we should be forming lots of smaller 'subreddits' with fewer than 250 people each? And if we want good collective decision-making, maybe (since leader election is equivalent to consensus) we should just hold elections that span the entire site?"
Eliezer seems to be in favor of a mixed model (like a mixture of sub-Dunbar groups and global elections) where a sub-Dunbar number of people have conversations with a high-affinity "first layer representative", so every person can "talk to their favorite part of the consensus process in words" in some sense?
Then, in Eliezer's proposals, stuff happens in the middle (I have issues with the stuff in the middle, but: try applying security mindset to various designs for electoral systems and you will find that highly fractal representational systems can be VERY sensitive to who ends up in which branch), but ultimately it swirls around until you have a "high council" of like 7 people, such that almost everyone in the community thinks at least one of them is very, very reasonable.
Then anything the 7 agree on can just be treated as "consensus"! Maybe?
Also, 7*6/2==21 bilateral conversations to get a "new theorem into the canon" is much much much smaller than something crazy big, like 500*499/2==124,750 conversations <3
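(Same back-of-the-envelope arithmetic, this time counting unordered pairs rather than directed pings:)

```python
# One bilateral conversation per unordered pair of participants.
def bilateral_conversations(n: int) -> int:
    return n * (n - 1) // 2

print(bilateral_conversations(7))    # 21
print(bilateral_conversations(500))  # 124750
```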
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2023-02-02T02:42:10.399Z · LW(p) · GW(p)
The idea that You Get About Five Words
This is a little bit ironic: I think your comment would have been better if it had just started with "when a group of nodes needs to be in consensus", without the preceding 1000 words. (But the part about conflicts due to the costs of cultivating consensus was really insightful, thanks!)
comment by jessicata (jessica.liu.taylor) · 2023-02-01T04:52:34.334Z · LW(p) · GW(p)
There's a lot in this post that I agree with, but in the spirit of the advice in this post, I'll focus on where I disagree:
If you are moving closer to truth—if you are seeking available information and updating on it to the best of your ability—then you will inevitably eventually move closer and closer to agreement with all the other agents who are also seeking truth.
But this can’t be right. To see why, substitute “making money on prediction markets” for “moving closer to truth”, “betting” for “updating”, and “trying to make money on prediction markets” for “seeking truth”:
If you are making money on prediction markets—if you are seeking available information and betting on it to the best of your ability—then you will inevitably eventually move closer and closer to agreement with all the other agents who are also trying to make money on prediction markets.
But the only way to make money on prediction markets is by correcting mispricings, which necessarily entails moving away from agreement from the consensus market price. (As it is written, not every change is an improvement, but every improvement is necessarily a change.)
Before thinking about prediction markets, let's imagine a scenario where type-A agents are trying to figure out the properties of the tiles on the floor, and type-B agents aren't; maybe they're treating the properties of the tiles as an infohazard, or trying to get a politically correct answer, or just don't care, etc. In this case, although they start out with a wide distribution over tile properties, type-A agents will tend to get similar answers even without communicating (by looking at the tiles), and will get even more similar answers if they do communicate. So Duncan's original statement seems correct in this case.
With respect to prediction markets, the rephrased statement also seems true. People who are trying to make money on prediction markets will, even though they disagree with each other, each bet against obvious falsehoods in the market prices. They will therefore end up in a "correct contrarian cluster" which differs from the general trader distribution in the direction of the obvious pricing corrections. The traders trying to make money will move away from agreement with consensus market prices, but will move towards agreement with each other, as they notice the same obvious mispricings.
I suppose if the traders all started out with the consensus market prices as their credences, then correcting the market would almost necessarily involve at least temporarily having higher variance in one's credences, so would look like disagreement compared to the initial state. However, the initial market prices, as in the tile case, would tend to represent wide, uninformative distributions; the agents trying to make money would over time develop more specific beliefs, reaching more substantive agreement than they had initially. It's like the difference between agreeing with someone that there's a 50% chance a coin will turn up heads, and agreeing with someone that there's a 99% chance that a coin will turn up heads; the second agreement is more substantive even if there is agreement about probabilities in both cases.
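(One way, though certainly not the only way, to make "more substantive" precise is the Shannon entropy of the distribution being agreed upon: agreeing on 50/50 is agreeing on a maximally uninformative distribution, while agreeing on 99/1 is agreeing on something far more specific. A minimal sketch:)

```python
from math import log2

def entropy_bits(p: float) -> float:
    """Shannon entropy (in bits) of a Bernoulli(p) belief about the coin."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

print(entropy_bits(0.50))  # 1.0    (maximally uninformative shared belief)
print(entropy_bits(0.99))  # ~0.08  (a much more specific shared belief)
```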
Replies from: TAG↑ comment by TAG · 2023-02-01T15:29:02.876Z · LW(p) · GW(p)
the agents trying to make money would over time develop more specific beliefs, reaching more substantive agreement than they had initially. It’s like the difference between agreeing with someone that there’s a 50% chance a coin will turn up heads, and agreeing with someone that there’s a 99% chance that a coin will turn up heads; the second agreement is more substantive even if there is agreement about probabilities in both cases
In Popperian epistemology, it's a virtue to propose hypotheses that are easily disproven... which isn't the same thing as always incrementally moving towards truth: it's more like babble-and-prune. Of course, the instruction to converge on truth doesn't quite say "get closer to truth in every step, no backtracking"; it's just that Bayesians are likely to take it that way.
And of course, epistemology is unsolved. No one can distill the correct theoretical epistemology into practical steps, because no one knows what it is in the first place.
comment by Zack_M_Davis · 2023-02-01T00:06:32.909Z · LW(p) · GW(p)
When does the new auto-tagger [LW · GW] run? This post should get the "Rationality" core tag, but I didn't add it myself, because I want to see the bot do it.
comment by LVSN · 2023-02-01T17:32:31.551Z · LW(p) · GW(p)
I have been thinking about this post since it was posted. I made many, many counterarguments in my head that got defeated pretty quickly. It's excellent. I can learn so much from you.
"Not intellectually substantive" is an interesting insult to make about the nice/mean dimension. Maybe you believe niceness does not exist at all in the far future.
comment by siclabomines · 2023-02-05T15:08:09.249Z · LW(p) · GW(p)
Aiming for convergence on truth. I guess it's true that this might lead to a failure mode where one seeks convergence more than anything else. But taken literally, it should not discourage exploring wild new hypotheses: if you are both equally wrong, then by growing your uncertainty you get nearer to converging on truth.